How can a weekly lesson planner benefit music students? What elements should be included in an effective practice planner? Why is structured practice important for musical development? How can parents and teachers use a practice planner to support students’ progress?
The Importance of Structured Practice for Musicians
Structured practice is a cornerstone of musical development. It allows students to make consistent progress, set achievable goals, and track their improvement over time. A well-designed practice planner can be an invaluable tool in this process, helping musicians of all levels organize their time and focus their efforts effectively.
Why is structured practice so crucial? It helps students:
- Develop discipline and good practice habits
- Break down complex skills into manageable tasks
- Maintain motivation by seeing tangible progress
- Prepare more effectively for lessons and performances
- Identify areas that need more attention
By using a practice planner, students can transform their musical journey from a series of haphazard practice sessions into a structured, goal-oriented process.
Key Features of an Effective Musician’s Practice Planner
What should be included in a comprehensive practice planner for musicians? An effective planner typically incorporates the following elements:
- Weekly schedule with dedicated practice time slots
- Space for lesson notes and teacher feedback
- Goal-setting sections for short-term and long-term objectives
- Detailed practice log for each session
- Progress tracking tools
- Repertoire list
- Technical exercises and scale practice charts
These components work together to create a holistic approach to practice, ensuring that students address all aspects of their musical development.
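For students or teachers who want to prototype these components digitally, they map naturally onto a small data model. Below is a minimal sketch in Python; every class and field name is illustrative rather than taken from any actual planner product:

```python
# Minimal, illustrative data model for the planner components listed above.
# All names are hypothetical; adapt freely.
from dataclasses import dataclass, field
from datetime import date, time

@dataclass
class PracticeSession:
    day: date
    start: time
    minutes: int
    focus: str          # e.g. "scales", "repertoire", "sight-reading"
    notes: str = ""     # what went well, what needs attention

@dataclass
class Goal:
    description: str
    target_date: date
    achieved: bool = False

@dataclass
class WeeklyPlanner:
    week_of: date
    schedule: list[PracticeSession] = field(default_factory=list)
    lesson_notes: str = ""          # teacher feedback from this week's lesson
    short_term_goals: list[Goal] = field(default_factory=list)
    long_term_goals: list[Goal] = field(default_factory=list)
    repertoire: list[str] = field(default_factory=list)
```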
Weekly Schedule and Practice Time Slots
A well-structured weekly schedule is the backbone of any practice planner. It helps students visualize their available practice time and commit to regular sessions. How can students make the most of their schedule?
- Identify consistent daily practice times
- Balance practice with other commitments
- Include warm-up and cool-down periods
- Allow for both focused practice and exploratory play
Lesson Notes and Teacher Feedback
Incorporating lesson notes and teacher feedback into the practice planner ensures that students can easily reference important points from their lessons. This section might include:
- Key concepts discussed during lessons
- Specific areas for improvement
- New techniques or pieces introduced
- Homework assignments from the teacher
Setting and Tracking Musical Goals
Goal-setting is a crucial aspect of effective practice. How can students set meaningful goals that drive their progress? Consider the following approaches:
- Use SMART criteria: Specific, Measurable, Achievable, Relevant, Time-bound
- Break down larger goals into smaller, manageable steps
- Include both technical and artistic objectives
- Regularly review and adjust goals as needed
A well-designed practice planner should provide space for both short-term (weekly or monthly) and long-term (quarterly or yearly) goals. This allows students to maintain focus on immediate improvements while working towards broader musical aspirations.
Progress Tracking Tools
Tracking progress is essential for maintaining motivation and identifying areas that need more attention. Effective progress tracking tools might include:
- Practice time log
- Skill proficiency ratings
- Performance benchmarks
- Repertoire mastery checklist
By regularly updating these tools, students can visualize their improvement over time and celebrate their achievements.
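As a hedged illustration of how this tracking could work in a digital planner, the short sketch below summarizes a practice log by week and by focus area; it reuses the hypothetical PracticeSession model sketched earlier:

```python
# Illustrative progress-tracking helpers for a list of PracticeSession records.
from collections import defaultdict

def weekly_minutes(sessions):
    """Total practice minutes per ISO week, so trends are easy to chart."""
    totals = defaultdict(int)
    for s in sessions:
        year, week, _ = s.day.isocalendar()
        totals[(year, week)] += s.minutes
    return dict(sorted(totals.items()))

def focus_breakdown(sessions):
    """Minutes per focus area, making neglected skills easy to spot."""
    totals = defaultdict(int)
    for s in sessions:
        totals[s.focus] += s.minutes
    return dict(sorted(totals.items(), key=lambda kv: -kv[1]))
```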
Tailoring Practice Plans for Different Instruments and Skill Levels
While the core elements of a practice planner remain consistent, the specific content should be tailored to suit different instruments and skill levels. How can practice planners be customized for various musical disciplines?
String Instruments (Violin, Viola, Cello, Double Bass)
String players might focus on:
- Bow technique exercises
- Intonation practice
- Vibrato development
- Shifting and position work
Wind Instruments (Flute, Clarinet, Saxophone, Trumpet)
Wind players could emphasize:
- Breath control exercises
- Embouchure development
- Articulation studies
- Extended techniques
Piano and Keyboard Instruments
Pianists might focus on:
- Finger dexterity exercises
- Chord progressions and voicings
- Pedal technique
- Sight-reading practice
By tailoring the practice planner to the specific needs of each instrument, students can ensure they’re addressing the unique challenges and requirements of their chosen discipline.
Incorporating Technology into Practice Planning
In today’s digital age, technology can play a significant role in enhancing practice planning and execution. How can musicians leverage technology to support their practice routines?
- Digital metronomes and tuners
- Recording and playback tools for self-assessment
- Music notation software for composition and arrangement practice
- Online resources for ear training and music theory
- Virtual practice rooms and collaborative platforms
While traditional paper planners remain popular, digital practice planners offer unique advantages such as:
- Automatic tracking of practice time and progress
- Integration with scheduling apps and calendars
- Easy sharing of progress with teachers and peers
- Access to a vast library of practice resources and exercises
By incorporating technology thoughtfully, students can enhance their practice experience and gain valuable insights into their musical development.
The Role of Parents and Teachers in Practice Planning
Effective practice planning isn’t solely the responsibility of the student. Parents and teachers play crucial roles in supporting and guiding the practice process. How can these key figures contribute to successful practice planning?
Parents’ Role in Supporting Practice
Parents can support their child’s musical journey by:
- Creating a conducive practice environment at home
- Helping to establish and maintain a regular practice schedule
- Showing interest in the child’s progress and celebrating achievements
- Communicating regularly with the music teacher
- Encouraging consistent use of the practice planner
Teachers’ Role in Guiding Practice
Music teachers can enhance their students’ practice planning by:
- Providing clear, actionable feedback during lessons
- Helping students set realistic and challenging goals
- Teaching effective practice techniques and strategies
- Regularly reviewing and discussing the practice planner
- Adapting lesson content based on the student’s progress and goals
By working together, parents, teachers, and students can create a supportive ecosystem that fosters musical growth and development.
Overcoming Common Practice Challenges
Even with a well-designed practice planner, students may encounter obstacles in their musical journey. How can musicians address common practice challenges?
Maintaining Motivation
To stay motivated, students can:
- Set and celebrate small, achievable goals
- Vary practice routines to avoid monotony
- Engage in ensemble playing or group lessons
- Attend concerts and listen to inspiring performances
- Use the practice planner to visualize progress over time
Managing Time Constraints
For busy students, effective time management is crucial. Strategies include:
- Breaking practice into shorter, focused sessions
- Prioritizing essential practice elements
- Using “mental practice” techniques during downtime
- Integrating practice into daily routines (e.g., warm-ups while getting ready)
Dealing with Plateaus
When progress seems to stall, students can:
- Revisit and adjust goals in their practice planner
- Seek feedback from teachers or peers
- Explore new repertoire or techniques
- Focus on different aspects of musicianship (e.g., theory, ear training)
By addressing these challenges proactively, students can maintain steady progress and enjoyment in their musical studies.
Evaluating the Effectiveness of Your Practice Planner
Regular evaluation of the practice planner’s effectiveness is crucial for ongoing musical development. How can students and teachers assess whether a practice planner is serving its purpose?
- Review progress towards set goals
- Analyze practice logs for consistency and focus
- Assess improvement in specific skills or repertoire
- Gather feedback from teachers and peers
- Reflect on overall musical growth and enjoyment
If the current practice planner isn’t yielding the desired results, consider making adjustments such as:
- Refining goal-setting processes
- Modifying practice session structures
- Incorporating new tracking methods or tools
- Seeking additional guidance from teachers or mentors
Remember, an effective practice planner should evolve alongside the musician’s skills and needs. Regular evaluation and adjustment ensure that the planner remains a valuable tool throughout the musical journey.
PracticePlanners | Home
In the fields of psychiatry and psychology, the call for evidence and accountability is being increasingly sounded. It is a call answered by the use of evidence-based practice (EBP).
EBP is defined by the American Psychological Association (APA) as “the integration of the best available research with clinical expertise in the context of patient characteristics, culture, and preferences” and is swiftly becoming the standard of care in mental healthcare (APA Presidential Task Force on Evidence-Based Practice, 2006). In fact, EBP is becoming mandated in some practice settings.
EBP is endorsed by many professional organizations, including the APA, National Association of Social Workers, and the American Psychiatric Association, as well as consumer organizations such as the National Alliance for the Mentally Ill (NAMI).
To further the implementation of EBP, the Treatment Planners clearly identify EBPs with a dedicated symbol. These EBPs are psychological treatments with the best available supporting evidence, and they are reflected in Objectives and Interventions marked with that symbol. While different sources use their own criteria for judging levels of empirical support for any given treatment, PracticePlanners favor those that use more rigorous criteria, typically requiring demonstration of efficacy through randomized controlled trials or clinical replication series, good experimental design, and independent replication. The approach was to evaluate these various sources and include those treatments supported by the highest level of evidence and for which there was consensus in conclusions and recommendations.
Further, as most practitioners know, research has shown that the individual psychologist (e.g., Wampold, 2001), the treatment relationship (e.g., Norcross, 2002), and the patient (e.g., Bohart & Tallman, 1999) are also vital contributors to the success of psychotherapy. As noted by the APA, “Comprehensive evidence-based practice will consider all of these determinants and their optimal combinations.” (APA, 2006, p. 275).
The PracticePlanners’ approach does just that. Drawing upon years of clinical experience and the best available research, the PracticePlanners series, consisting of Treatment Planners, Progress Notes Planners, and Homework Planners, promotes effective, creative treatment planning through its variety of treatment choices.
Each Treatment Planner includes more than 1,000 clear statements describing the behavioral manifestations of each relational problem, long-term goals, short-term objectives, and clinically tested treatment options. These can be combined in thousands of permutations to develop detailed treatment plans, and relying on their own good judgment and in collaboration with the client, clinicians can easily select the statements that are appropriate for the individuals whom they are treating—a process that will ultimately benefit the client, clinician, and mental health community.
Debate does exist among practitioners skeptical about changing their practice on the basis of research evidence, and their reluctance is fueled by the methodological challenges and problems inherent in psychotherapy research. The PracticePlanners series accommodates those practitioners by providing a range of treatment plan options. Some treatment options are supported by the evidence-based value of “best available research” (APA, 2006), others reflect the common clinical practices of experienced clinicians, and still others represent emerging approaches. In this way, PracticePlanners allows every clinician to construct what he or she believes to be the best treatment plan for their client’s particular needs.
Musician’s Practice Planner – A Weekly Lesson Planner for Music Students
Series: General Music
Publisher: Molto Music
Format: Softcover
Author: Various Authors
Proven successful in private lessons and in the classroom, this planner is a must-have for all musicians. Teachers can use it to set goals and assignments, and students can monitor their progress, time and efficiency.
$10.99 (US)
Inventory #HL 00311358
ISBN: 9780967401201
UPC: 884088103538
Width: 8.5″
Length: 11.0″
80 pages
Prices and availability subject to change without notice.
Special Offer | The Private Practice Planner
Dear Friend,
Is it a struggle to provide high-quality therapy in your current setting?
Does it seem like you’re hitting a wall? Something holding you back, but you just can’t put your finger on what it is?
Do you ever worry that it’s become too difficult to work in your current setting? That there are too many limitations, long-hours and not enough respect or income?
I know what you’re going through.
Or at least I used to.
Whether you’re already burned out or heading in that direction, let’s talk about it.
The honest truth is…
it’s not your fault.
Now that might sound cliche, but it’s true…and here’s why. You probably got into the field because you wanted to help people. And yes, you still LOVE helping people… but you’re starting to feel somewhat taken advantage of.
You didn’t get into this field for the money… but it sure would be nice to make more money, especially given the long hours, report writing at home, having to buy your own therapy materials, etc.
The problem is, you’re a “helping people” person. (I am too.) And sometimes we get so caught up in helping others, that we forget to help ourselves.
So now, here you are.
Still feeling like being a speech-language pathologist, OT, PT, etc. is the right career for you… but are you in the right setting?
It’s okay.
I know you’re struggling to figure out how to provide the best therapy for your clients/patients/kiddos so that you can continue working in the profession that you love.
This is what I do. Help is on the way!
You could go and try to figure out what the first steps of starting a private practice are: spend HOURS combing Google, get overwhelmed on the ASHA website, or get lost in Facebook Groups…
Or…
Simply use the 5 tools in the Private Practice Planner Pack to start the process of planning and envisioning what your private practice could look like over the course of an evening as you sit back, relax and sip your beverage of choice.
No fluff and no B.S.
Just answer the questions and start putting your private practice dreams to paper before you “jump into” anything.
Here’s just a LITTLE of what you’ll get:
- Specific Questions to Answer That Will Lead You to Formulate a Plan
- A Better Understanding of How / When Private Practice Will Fit Into Your Life
- Insight Into Your Specific Motivation(s) for Being in Private Practice
- Information About Your Earnings Potential (Using The Annual Income Salary Estimator)
- A Plan for Which Types of Clients You Want to See and What Treatment You’ll Provide
- A Curated List of Business Books to Check Out to Gain Some Business Skills Without Getting an MBA
- A Framework to Conduct a Market Analysis to Make Sure There’s a Need In Your Local Area
- A Private Practice Roadmap to Learn The 5 Phases of Private Practice and 3 Big Mistakes to Avoid
This Was a Fun System to Develop (And People Love Using It!)
I get emails and Facebook messages every single day asking, “How do you start?” or “What’s the first step?” to start a private practice.
Honestly, the first step is to make sure it’s the right decision for you.
Once you’ve decided that private practice is right for you, there are a lot more steps. But you have to start with a vision, a plan, and some idea of how much money you stand to make, so you can be sure it’s a good financial decision for you…
I have been helping private practitioners start their own private practices since 2008 and I know a thing or two about how to make the process easier and more efficient.
Included Item #1 : The 1 Page Business Plan
Woodwind Instrumentation Codes
Following many of the titles in our Wind Ensemble catalog, you will see a set of numbers enclosed in square brackets, as in this example:
| Description | Price |
| --- | --- |
| Rimsky-Korsakov Quintet in Bb [1011-1 w/piano] Item: 26746 | $28.75 |
The bracketed numbers tell you the precise instrumentation of the ensemble. The first number stands for Flute, the second for Oboe, the third for Clarinet, the fourth for Bassoon, and the fifth (separated from the woodwinds by a dash) is for Horn. Any additional instruments (Piano in this example) are indicated by “w/” (meaning “with”) or by using a plus sign.
Flute Oboe Clarinet Bassoon — Horn
This quintet is for 1 Flute, no Oboe, 1 Clarinet, 1 Bassoon, 1 Horn and Piano.
Sometimes there are instruments in the ensemble other than those shown above. These are linked to their respective principal instruments with either a “d” if the same player doubles the instrument, or a “+” if an extra player is required. Whenever this occurs, we will separate the first four digits with commas for clarity. Thus a double reed quartet of 2 oboes, english horn and bassoon will look like this:
0,2+1,0,1-0
Note the “2+1” portion means “2 oboes plus english horn”
Titles with no bracketed numbers are assumed to use “Standard Instrumentation.” The following is considered to be Standard Instrumentation:
- Duo – Flute & Clarinet – or [1010-0]
- Trio – Flute, Oboe & Clarinet – or [1110-0]
- Quartet – Flute, Oboe, Clarinet & Bassoon – or [1111-0]
- Quintet – Flute, Oboe, Clarinet, Bassoon & Horn – or [1111-1]
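To make the notation concrete, here is a short parsing sketch in Python (purely illustrative, not the catalog’s actual software); brass codes work analogously, with a dot before Euphonium instead of a dash before Horn:

```python
# Decode a woodwind instrumentation code such as "1011-1" or "0,2+1,0,1-0".
# Doubling ("d") and extra-player ("+") links are kept verbatim in the output.
WOODWINDS = ["Flute", "Oboe", "Clarinet", "Bassoon"]

def parse_woodwind_code(code: str) -> dict:
    winds, horn = code.split("-")
    # Commas appear only when "d" or "+" links are present.
    parts = winds.split(",") if "," in winds else list(winds)
    result = dict(zip(WOODWINDS, parts))
    result["Horn"] = horn
    return result

print(parse_woodwind_code("1011-1"))
# {'Flute': '1', 'Oboe': '0', 'Clarinet': '1', 'Bassoon': '1', 'Horn': '1'}
print(parse_woodwind_code("0,2+1,0,1-0"))
# {'Flute': '0', 'Oboe': '2+1', 'Clarinet': '0', 'Bassoon': '1', 'Horn': '0'}
```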
Brass Instrumentation Codes
Following many of the titles in our Brass Ensemble catalog, you will see a set of five numbers enclosed in square brackets, as in this example:
| Description | Price |
| --- | --- |
| Copland Fanfare for the Common Man [343.01 w/tympani] Item: 02158 | $14.95 |
The bracketed numbers tell you how many of each instrument are in the ensemble. The first number stands for Trumpet, the second for Horn, the third for Trombone, the fourth (separated from the first three by a dot) for Euphonium and the fifth for Tuba. Any additional instruments (Tympani in this example) are indicated by a “w/” (meaning “with”) or by using a plus sign.
Trumpet Horn Trombone . Euphonium Tuba
Thus, the Copland Fanfare shown above is for 3 Trumpets, 4 Horns, 3 Trombones, no Euphonium, 1 Tuba and Tympani. There is no separate number for Bass Trombone, but it can generally be assumed that if there are multiple Trombone parts, the lowest part can/should be performed on Bass Trombone.
Titles listed in our catalog without bracketed numbers are assumed to use “Standard Instrumentation.” The following is considered to be Standard Instrumentation:
- Brass Duo – Trumpet & Trombone, or [101.00]
- Brass Trio – Trumpet, Horn & Trombone, or [111.00]
- Brass Quartet – 2 Trumpets, Horn & Trombone, or [211.00]
- Brass Quintet – 2 Trumpets, Horn, Trombone & Tuba, or [211.01]
- Brass Sextet and greater – No Standard Instrumentation
People often ask us about “PJBE” or “Philip Jones” instrumentation. This is a special instrumentation adopted and perfected by the Philip Jones Brass Ensemble. It consists of the forces 414.01, and often includes Percussion and/or Tympani. In addition, there are often doublings in the Trumpet section – Piccolo and Flugelhorn being the most common. While this instrumentation has come to be common, it is still not “Standard,” as many Brass Dectets use very different forces, most often with more Horns than PJBE.
String Instrumentation Codes
Following many of the titles in our String Ensemble catalog, you will see a set of four numbers enclosed in square brackets, as in this example:
| Description | Price |
| --- | --- |
| Atwell Vance’s Dance [0220] Item: 32599 | $8.95 |
These numbers tell you how many of each instrument are in the ensemble. The first number stands for Violin, the second for Viola, the third for Cello, and the fourth for Double Bass. Thus, this string quartet is for 2 Violas and 2 Cellos, rather than the usual 2110.
Titles with no bracketed numbers are assumed to use “Standard Instrumentation.” The following is considered to be Standard Instrumentation:
- String Duo – Violin & Viola – [1100]
- String Trio – Violin, Viola, Cello – [1110]
- String Quartet – 2 Violins, Viola, Cello – [2110]
- String Quintet – 2 Violins, Viola, Cello, Bass – [2111]
Orchestra & Band Instrumentation Codes
Following some titles in our Orchestra & Band catalogs, you will see a numeric code enclosed in square brackets, as in these examples:
| Description | Price |
| --- | --- |
| Beethoven Symphony No 1 in C, op 21 [2,2,2,2-2,2,0,0, tymp, 44322] | $150.00 |
| Jones Wind Band Overture [2+1,1,3+ac+bc,2,SAATB-2+2,4,3+1,1, tymp, percussion, double bass] | $85.00 |
| MacKenzie Hines Pond Fantasy (DePaolo) [2d1+1,1,2+1,1-2,2(+2),3,0, perc, tymp, 66432, Eb clarinet, SAATB saxes, trombone solo] | $75.00 |
The bracketed numbers tell you the precise instrumentation of the ensemble. The system used above is standard in the orchestra music field. The first set of numbers (before the dash) represent the Woodwinds. The set of numbers after the dash represent the Brass. Percussion is abbreviated following the brass. Strings are represented with a series of five digits representing the quantity of each part (first violin, second violin, viola, cello, bass). Other Required and Solo parts follow the strings:
Woodwinds—Brass, Percussion, Strings, Other
Principal auxiliary instruments (piccolo, english horn, bass clarinet, contrabassoon, wagner tuba, cornet & euphonium) are linked to their respective instruments with either a “d” if the same player doubles the auxiliary instrument, or a “+” if an extra player is required. Instruments shown in parentheses are optional and may be omitted.
Example 1 – Beethoven:
[2,2,2,2-2,2,0,0, tymp, 44322]
The Beethoven example is typical of much Classical and early Romantic fare. In this case, the winds are all doubled (2 flutes, 2 oboes, 2 clarinets and 2 bassoons), and there are two each horns and trumpets. There is no low brass. There is tympani. Strings are a standard 44322 configuration (4 first violin, 4 second violin, 3 viola, 2 cello, 2 bass). Sometimes strings are simply listed as “str,” which means 44322 strings.
Example 2 – Jones: (concert band/wind ensemble example)
[2+1,1,3+ac+bc,2,SAATB-2+2,4,3+1,1, tymp, percussion, double bass]
The second example is common for a concert band or wind ensemble piece. This fictitious work is for 2 flutes (plus piccolo), 1 oboe, 3 clarinets plus alto and bass clarinets, 2 bassoons, 5 saxes (soprano, 2 altos, tenor & bari), 2 trumpets (plus 2 cornets), 3 trombones, euphonium, tuba, tympani, percussion and double bass. Note the inclusion of the saxes after bassoon for this band work. Note also that the separate euphonium part is attached to trombone with a plus sign. For orchestral music, saxes are at the end (see Saxophones below). It is highly typical of band sets to have multiple copies of parts, especially flute, clarinet, sax, trumpet, trombone & percussion. Multiples, if any, are not shown in this system. The numbers represent only distinct parts, not the number of copies of a part.
Example 3 – MacKenzie: (a fictional work, by the way).
[2d1+1,1,2+1,1-2,2(+2),3,0, perc, tymp, 66432, Eb clarinet, SAATB saxes, trombone solo]
In the third example, we have a rather extreme use of the system. It is an orchestral work for piccolo, 2 flutes (1 of whom doubles on piccolo), 1 oboe, 2 clarinets plus an additional bass clarinet, 1 bassoon, 2 horns, 2 trumpets (plus an optional 2 cornets), 3 trombones, no tuba, percussion, tympani, 6 first violins, 6 second violins, 4 violas, 3 cellos, 2 double basses, Eb clarinet (as an additional chair, not doubled), 5 saxes (soprano, 2 alto, tenor & baritone) & a trombone soloist.
Note: This system lists Horn before Trumpet. This is standard orchestral nomenclature. Unless otherwise noted, we will use this system for both orchestra and band works (in most band scores, Trumpet precedes Horn, and sometimes Oboe & Bassoon follow Clarinet). Also, it should be noted that Euphonium can be doubled by either Trombone or Tuba. Typically, orchestra scores have the tuba linked to euphonium, but it does happen where Trombone is the principal instead.
Saxophones, when included in orchestral music (they rarely are) will be shown in the “other instrument” location after strings and before the soloist, if any. However for band music, they are commonly present and therefore will be indicated after bassoon as something similar to “SAATB” where S=soprano, A=alto, T=tenor and B=baritone. Letters that are duplicated (as in A in this example) indicate multiple parts.
And finally, here is one more way to visualize the above code sequence:
- Flute (doubles or with additional Piccolo)
- Oboe (doubles or with additional English Horn)
- Clarinet (doubles or with additional Bass Clarinet)
- Bassoon (doubles or with additional Contrabassoon)
- Saxophones (band music only, showing SATB voicing)
- – (dash)
- Horn (doubles or with additional Wagner Tuba)
- Trumpet (doubles or with additional Cornet)
- Trombone (doubles or with additional Euphonium)
- Tuba (doubles or with additional Euphonium)
- Percussion
- Tympani
- Strings (1st & 2nd Violin, Viola, Cello, Bass)
- Other Required Parts
- Soloist(s)
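The fuller orchestral header can be decoded with the same idea: split the dashed token to separate winds from brass, then collect the trailing tokens. A hedged sketch (illustrative only; band codes, which insert a saxophone block after bassoon, would need one extra step):

```python
# Decode an orchestral code such as "[2,2,2,2-2,2,0,0, tymp, 44322]".
# Brass order follows this system: Horn before Trumpet.
WINDS = ["Flute", "Oboe", "Clarinet", "Bassoon"]
BRASS = ["Horn", "Trumpet", "Trombone", "Tuba"]

def decode_orchestra_code(code: str) -> dict:
    tokens = [t.strip() for t in code.strip("[]").split(",")]
    dash_at = next(i for i, t in enumerate(tokens) if "-" in t)
    bassoon, horn = tokens[dash_at].split("-")
    winds = tokens[:dash_at] + [bassoon]
    brass = [horn] + tokens[dash_at + 1 : dash_at + 4]
    result = dict(zip(WINDS, winds))
    result.update(zip(BRASS, brass))
    result["Other"] = tokens[dash_at + 4 :]  # percussion, tympani, strings, solos
    return result

print(decode_orchestra_code("[2,2,2,2-2,2,0,0, tymp, 44322]"))
# {'Flute': '2', 'Oboe': '2', 'Clarinet': '2', 'Bassoon': '2',
#  'Horn': '2', 'Trumpet': '2', 'Trombone': '0', 'Tuba': '0',
#  'Other': ['tymp', '44322']}
```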
House of Doolittle (HOD28102) 8-Person Daily Group Practice Planner 8 x 11
BPA Free
Dolphin Blue’s products are healthy for you and the planet. This item is made with plastics that are free of the chemical BPA. BPA is an endocrine disruptor and has been linked to health problems like cancer, physiological dysfunction, heart disease, and developmental defects in children.
Carbon Neutral Plus
Dolphin Blue’s products are made in the most energy efficient and responsible methods. Products made Carbon Neutral Plus are created in ways that reduce carbon emissions and conserve Earth’s natural resources and habitats. These products have reduced carbon footprints through investments in energy conservation and renewable energy resources.
Dishwasher Safe
Dolphin Blue’s products are healthy for you and the planet. This item is dishwasher safe and will not leach harmful chemicals into your household and environment when exposed to high heat.
FSC Certified
Dolphin Blue’s products are made with responsibly harvested and managed resources. Forest Stewardship Council (FSC) certified products adhere to quality standards that mandate responsible forestry and social awareness in procuring materials for production. Buying products made through conscious production choices creates an ethical relationship between the producers, resources, and consumers of a product.
Green Seal Certified
Dolphin Blue’s products safeguard the environment. Green Seal certified products contain a host of environmental qualities, like containing a minimum of 30% post-consumer recycled content, being processed without chlorine, and packed in materials with minimal chemical levels. All Green Seal products are certified and monitored to assure their sustainable attributes.
Made In USA
Dolphin Blue’s products are only made in the United States. Buying American made products is important in fostering strong local economies. Buying local provides jobs for your neighbors and creates prosperity for your community. Sourcing only American made items significantly reduces the environmental footprint of our products, saving great amounts of fuel and resources otherwise used to transport foreign made products from overseas.
Melamine Free
Dolphin Blue’s products are healthy for you and the planet. This item is free of the chemical Melamine. Consumption of melamine has been linked to cancer and reproductive dysfunction.
No Animal Testing
Dolphin Blue values the planet and all of its inhabitants. We ensure that this product was not tested on animals and did not involve the cruel or inhuman treatment of our furry (or not so furry) friends.
Post Consumer Recycled Material
Dolphin Blue provides only products containing at least 20% post-consumer recycled (PCR) content, with most containing 100%. Post-consumer recycled material facilitates environmentally responsible use of our resources. PCR products eliminate the need to harvest virgin material, lessen the waste produced in manufacturing new products, decrease landfill destined waste and establish sustainable consumption of our goods.
Processed Chlorine Free
Dolphin Blue provides only the most environmentally conscious products. Processed Chlorine Free means this item is whitened without chlorine or chlorine-containing compounds. Buying PCF products helps prevent chlorine produced dioxins and toxins from entering our ecosystem, which deteriorate the environment and pose health risks to our immune and reproductive systems.
Recycled
This item contains recycled materials.
Remanufactured
Dolphin Blue believes in re-use as a means to a healthy, happy planet. This product is made of remanufactured materials, saving significant amounts of resources, energy expenditure, and waste production otherwise used and created when making an originally manufactured product. It’s also guaranteed to work!
Soy Ink
Dolphin Blue prints responsibly. Printing with soy based ink reduces the creation of volatile organic compounds that diminish our air quality. Soy oil, used in making soy printing ink, is a renewable resource and decreases extraction and use of environmentally harmful petrochemicals.
Practice Organization: the 3 Tools You Need
Overwhelmed much? This year has been a doozy but I hope you’re sliding into the final stretch with a good handle on your business, your self-care strategies intact, and beginning to plan for next year with optimism and joy. My ticket to managing 2020’s overwhelm has been staying on top of my practice organization and using my organization tools with consistency. My top 3 faves are my planner, my to do list, and my ideas and inspiration notebook. Mine are mostly paper, but you can augment with your favorite software applications, and I’ll show you how to integrate all of this for a well-oiled organization machine.
Planner
My planner is my externalized working memory. If something needs to happen, it goes in the planner. Like many of you, my life (personal and professional) is just too saturated right now to hold everything in my brain. I use a paper planner after too many years of various software programs not playing well together–it only took a couple mess ups with my time for me to realize that “one ring to rule them all” is the mantra for my planner, and for me, that’s a spiral bound paper planner with plenty of room for jotting appointments, notes, and goals for the week. However, software integration has gotten much better and if you have a digital planner that works great for you and communicates with your practice management software, go for it. All of my appointments go in my planner, and I also set personal goals in the margins (like “exercise 4 times this week” or “work on my book at least 90 minutes this week”).
My process for client appointments has become routinized and accounts for both my paper planner and my practice software. When I set an appointment, I do so by looking at my paper planner (the one ring strategy). My “finish an appointment” routine is to finish my note, lock the note, charge for the appointment if applicable, and then enter the new appointment into my practice software. That way clients will get reminder emails or texts, if they’ve opted into those. By making the “wrap up” portion of my appointments as routinized as possible, these appointments almost always make it to my paper and practice calendars. To err on the side of caution, one of my “wrap up the week” activities is to reconcile my paper planner to my practice planner for the following week and make sure all appointments are entered and accurate.
In the market for a paper planner? What you choose will depend on how you plan to use it. For many years, I used a very small paper planner (4 x 5.5″ or so) that fit easily in my purse. I found over time that I preferred a little extra space to jot notes and set weekly goals. My planner for 2020 is about 5 x 7″ and is a great size to allow for extra planning space. For 2021, I purchased a gorgeous but significantly larger (about 8 x 10″) Erin Condren spiral planner with a hard laminated cover, internal pockets, and additional integrated planning pages for each month. I’m super excited to start using it! Given that I don’t see a lot of mobility in my worklife this year with ongoing pandemic issues and telehealth, I’m not as concerned about a larger planner being bulky to tote back and forth from work.
To Do List
Is your to do list a series of scraps of paper: post-it notes, random index cards, and backs of receipts? Mine was, too. It works in a pinch but makes it really hard to track (half the time I can’t even decipher what my note says or references, and I have no idea how recent the stacks of to do notes are that I’d find on my desk). I love a good to do list. I know some of you are cringing at that, but hear me out. Getting action items on paper and breaking down long term goals into measurable steps is what allows me to see and plan out progress.
Some planners have built in to do lists, which are great, but may offer a limited amount of space. I just purchased a legal size printed “to do” paper pad. Date goes along the top and then there are 31 lines with check boxes to plan out as many things as you’d like to accomplish. You could use a to do planner like this as a daily item, or you could plan out steps for a long term goal and set due dates for each item. Keeping this on a bound planner also allows you to reference earlier items completed and track your progress.
If you prefer a digital to do list, I would suggest trying out Trello. I use Trello to track just about everything I do in my work life. A super simple “to do” board in Trello could have lists for to do, doing, and done. Items go on cards, and cards move through the lists as you make progress. The beauty of a Trello card is that it is infinitely deep with space–you can upload files, link to webpages, add photos, etc–anything you need to get a task done. My paper list is a much reduced and more actionable list than my Trello “weekly to do” list, and includes both personal and work items, though there’s no reason you couldn’t do all of this in Trello.
Want more Trello but not sure where to start? Check out my course which comes with my five top practice management boards included!
Ideas and Inspiration Notebook
You’re welcome to call this whatever you like, but I think it is essential for a creative entrepreneur to keep a notebook handy where ideas and inspiration go to live and be referenced. Yes, you can jot ideas in a planner, but make sure they move into something more permanent, where you’ll know to find them and nurture them. Creative people often have so many ideas but are limited in terms of time for implementation. Harnessing these ideas and designating a place for them gives space for that energy to grow when you are ready to return. I tend to purchase a new notebook when school supplies are on sale each year (late summer-ish). In this notebook I put brand new ideas, more involved planning for current ideas and projects, and notes from courses I take, articles I read, and my own business coaching. While I prefer a regular size, wide ruled notebook with a spiral binding and a heavier laminated cover, I’ve used smaller “journal” style notebooks with a glued binding in the past. Find something that works for you and that you love so you’ll be inspired to revisit it often.
Could you do this digitally? Of course. A Trello board for ideas and inspiration would be a great thing to make. You could create a list for each idea, then drop cards on it as you flesh out the idea. Or you could create an ideas and inspo board and then lists by the year, with cards for each idea and then activity within the card as you flesh them out. A spreadsheet or doc would also work, just make sure it’s something you revisit frequently and don’t forget how to access (sheepishly looking away as I realize how many docs I’ve created and lost before I discovered Trello and realized I could link to my docs and spreadsheets…).
I get almost as excited about my planner as I do about my writing utensils. For me, that’s a mechanical pencil for my planner so that I can easily erase, but I also like the Pilot FriXion pens (erasable, and they really do erase). The pens are also great if you like to color code in your planner. I love me a good gel pen and have several that are like writing with butter. They always inspire me to write in my best penmanship (read: legibly). I’m not kidding. My husband once went grocery shopping with my list and had to call me to ask why I wrote “flamingos” in the produce section. Tomatoes, of course, tomatoes, who puts flamingos on tacos? Penmanship may not be my strong suit, but fortunately I make up for it in spades with my organization skills!
If you’re a paper and pen/pencil person, you might like this data: handwriting has been shown to lead to greater retention of material than typing the same material on a device. Think about it. The process of formulating a thought, moving that into words, sending motor signals to your fingers to produce the letters of those words, and visually monitoring your production for accuracy on paper is a little more complex than typing, so it makes sense that this would result in richer neural encoding.
What works for you to get organized and keep your practice running smoothly? What tools do you have lined up and ready to go for 2021? What gorgeous pen gets you excited to write? Get in on the discussion in the FaceBook group!
Here are some tools similar to the ones I use and describe in this article (affiliate links follow).
Planning Roles | Microsoft Docs
- 2 minutes to read
In This Article
A key imperative for a successful planning implementation is to have the right people with the right skills in the right roles. This article explores the most important planning roles for any Field Service organization.
Dispatcher or Scheduler
The dispatcher or scheduler is responsible for ensuring that work orders (or requirements) are matched to the correct resources, creating bookings within the specified time period. This role can be performed by employees in different positions, for example:
- Service Manager
- Customer Service Representative
- Traditional dispatcher
A scheduler’s responsibilities can range from manually assigning every requirement, through handling only the exceptions while the system schedules everything automatically, to anything in between.
- A manual approach might be dragging and dropping requirements onto resource time slots with the mouse.
- A semi-automatic approach might be to use a scheduling assistant tool that recommends the available and most appropriate resources.
- A fully automatic approach could be to use the Resource Scheduling Optimization application, which automatically schedules requirements to optimal resources based on predefined rules. More automated scheduling can mean fewer schedulers per resource.
In practice, there are usually people who perform or supervise the scheduling role, if only to help handle exceptions. Even with a high level of automation, schedulers are needed to ensure that optimizations are tailored to business objectives.
Planning Analyst
The role of the planning analyst is emerging as organizations implement advanced planning features such as optimization, automated planning, and analytics. The ideal planning analyst has a healthy mix of planning, analytics, and optimization skills.
This role is responsible for configuring scheduling optimization along with its associated scope, goals, and parameters.
PostgreSQL: Documentation: 9.5: 65.2. Planner Statistics and Security
65.2. Planner Statistics and Security
Table pg_statistic is restricted to superusers, so ordinary users cannot learn about the contents of other users’ tables from it. However, some of the selectivity estimation functions will use a user-defined operator (either the operator appearing in the query or a related one) to analyze the stored statistics. For example, to determine whether a stored most-common value is applicable, the selectivity estimator has to use the appropriate = operator to compare the constant in the query with the stored value. Thus, data from pg_statistic can in principle be passed to user-defined operators. A specially crafted operator could leak the operands passed to it, either intentionally (for example, by logging them or writing them to another table) or inadvertently (by exposing their values in error messages). Either way, this would allow a user who has no access to table pg_statistic to see the data it contains.
To prevent this, all built-in selectivity estimation functions follow these rules: for stored statistics to be used during query planning, the current user must either have the SELECT privilege on the table or the columns involved, or the operator must be LEAKPROOF (more precisely, the function underlying the operator must have that property). Otherwise, the selectivity is estimated as if no statistics were available at all, and the planner proceeds with default or generic assumptions.
If the user does not have the required access to the table or columns, then in many cases the query will simply fail with an access-denied error, so this mechanism is invisible in practice. But if the user is reading from a security-barrier view, the planner might want to check the statistics of an underlying table that is not directly accessible to that user. In this case, the operator must be leakproof; otherwise, the statistics will not be used. There is no direct indication of this, except that the query plan might be suboptimal. If you suspect you are seeing this, try running the query as a more privileged user and check whether a different query plan is chosen.
This restriction applies only when the planner would need to execute a user-defined operator on one or more values from pg_statistic. The planner is permitted to use generic statistical information, such as the fraction of NULL values or the number of distinct values in a column, regardless of access rights.
Selectivity estimation functions implemented in third-party extensions that might operate on statistics with user-defined operators should follow the same security rules. Consult the PostgreSQL source code for practical guidance.
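As a concrete (hedged) illustration, you can inspect whether the function behind a given operator is marked LEAKPROOF by querying the system catalogs. A minimal sketch, assuming the psycopg2 driver and a database named mydb (both assumptions, not part of the documentation above):

```python
# Check whether the function implementing text = text is LEAKPROOF,
# which determines whether the planner may use pg_statistic data for
# a user lacking SELECT privilege (see the rules above).
import psycopg2  # assumed client library; any PostgreSQL driver works

conn = psycopg2.connect("dbname=mydb")  # hypothetical connection string
cur = conn.cursor()
cur.execute("""
    SELECT o.oprname, p.proname, p.proleakproof
    FROM pg_operator o
    JOIN pg_proc p ON p.oid = o.oprcode
    WHERE o.oprname = '='
      AND o.oprleft = 'text'::regtype
      AND o.oprright = 'text'::regtype
""")
for oprname, proname, leakproof in cur.fetchall():
    print(f"operator {oprname} -> function {proname}: leakproof={leakproof}")
conn.close()
```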
Task Scheduler
The first thing to check before creating tasks is whether the “Task Scheduler” service is running. This service may have been disabled when system services were configured to free memory, back when the Task Scheduler was not needed. If scheduled tasks are now going to run regularly, the service should be switched to automatic startup. To verify that the service is running, open Start – Run – services.msc. The window shown below will open:
Find the Task Scheduler service and make sure the Status field shows Running and the Startup Type field shows Auto. If this is not the case, double-click the service name and adjust the values in the window that opens (this requires administrator privileges, i.e., your account must be in the Administrators group).
Once the service is started and its startup type is set to automatic, the service will start at system boot, and tasks will be executed according to their schedules.
Now let’s create a task.
Open Control Panel and click Scheduled Tasks. The following window will open:
To create a new task, click Add task.
Click Next.
In this window you need to select the program that the Scheduler will launch. As a rule, the required program is not in this list, so you need to find it with the Browse button. For example, I created a Scheduler folder on the C drive and placed the batch file test.bat into it, containing a sequence of commands that must run at a specific time. Click Browse, find the file the Scheduler should launch, and click Open. If everything is done correctly, the new task wizard will display the following window:
In the Name field, specify the name of the task as it will be displayed in the Scheduler window. The name can be anything, but choose one that will remind you later, when you open the Scheduler, what the task does. In this example, I named the job “Test Scheduler”.
Then you need to select the start period for this task. The following options are possible:
- Daily. The task runs every day, on weekdays only, or every few days, at the specified time. All of these parameters can be selected in the next window, discussed below.
- Weekly. In the next window, you can specify how often (every how many weeks) to run the task and select the days of the week on which it will be launched at a certain time.
- Monthly. You can then specify in which months of the year the task should run, and on which dates or days of the month it will be launched at a certain time.
- Once. In the next window of the wizard, you can select the date and time for starting the task. After that, the task will not run again.
- When the computer boots up. There is no next window for this option, which is logical: with this startup type, the task runs every time the computer boots. No user login is required; the task runs on behalf of the user account that will be specified later in the wizard.
- When logging into Windows. This startup type is similar to the previous one, except that the task runs only when a user logs into Windows, i.e., enters a username and password.
Now let’s look at these launch types in more detail. After clicking Next, a window is displayed in which you specify additional schedule parameters for the task. The exception is the last two startup types, where the task runs at computer boot or at user logon. So, enter the task name, select one of the startup types, and click Next. Depending on the selected type, the corresponding schedule settings will be offered.
Daily
With the daily run type, you can choose whether to run the task every day, only on workdays (holidays are still treated as workdays; the task simply runs Monday through Friday inclusive), or at an interval, for example once every three days. In the same window, select the task’s start time. “Start Date” lets you postpone the first run until a certain date: if today is the first day of the month and “Start Date” is set to the 10th, the task will begin executing on the 10th, even though it is scheduled as daily.
Weekly
With the weekly launch type, you can run the task on certain days of the week by ticking the corresponding boxes in the window shown above. You can also specify that the task runs every other week: for example, in the first week on Monday, Wednesday and Friday; in the second week not at all; in the third week again on Monday, Wednesday and Friday; in the fourth not at all; and so on. You also need to specify the time at which the task will run on the selected days.
Monthly
If the task runs monthly, mark the months in which it should run and specify the date on which it will be launched. Note that the last day of a month can be the 28th, 29th, 30th or 31st, so if a task needs to run at the end of the month, it is better to schedule it for the first day of the next month at 00:01. If you do not want to tie the task to a specific date but instead run it, for example, on the third Friday of the selected months, toggle the appropriate radio button and choose the required values from the combo boxes.
Once
If you choose to execute the task once, you only need to specify the date and time of its launch. The task runs at the specified time and is not launched again. The task is not deleted from the Scheduler, however, so it can be reused later by changing its start date and time. This launch type is well suited for one-off tasks that should run while the user is away from the computer.
When the computer boots
As mentioned above, with this startup type the task runs every time the computer boots, even before any user logs in.
When logging into Windows
This task will be executed when the user logs on.
Select the schedule that best matches the required launch frequency, even if it does not match it exactly. For example, if a task needs to run on weekdays at 21:00 and on weekends at 19:00, select the weekly launch type at this stage and adjust the schedule as needed after the task is created. An example of such a setup is shown in the figure below.
After the initial schedule is set, click the Next button. An example of a window that opens is shown in the figure below.
In this window, you must enter the username and password under which the task will run. By default, the username is the current user’s. Be careful when entering the password, as it is hidden by asterisks. If the password is very complex, it is better to type it in a text editor (for example, Notepad) and copy it into the appropriate fields. If the password is entered incorrectly, no error message is shown, but the task will not run. Also keep in mind that in Windows XP a task MUST NOT run under a blank password, even though the user account itself may have one; an empty password will also cause the task to fail. To fix this in Windows XP, give the account a password and enter it in the task being created.
The scheduled task runs on behalf of the user that was entered. That is, a user with limited rights (the Users group) can run tasks on behalf of an account with unrestricted rights (the Administrators group). To do this, when creating a task, replace the suggested current username with the name of a user with administrative privileges (a user in the Administrators group). The program will then be able to access functions and files that a regular user cannot.
Another point that is easy to forget, and that will stop scheduled tasks from running, is changing the password of the user account used to log into Windows. If that password changes, it must be updated in every assigned task, which is inconvenient, but it does improve security.
After the username and password are entered, click Next.
If the launch schedule fully meets your requirements, clicking the Finish button in the window shown below completes the creation, and the new task appears in the Scheduled Tasks window in the Control Panel. If the schedule requires further, finer tuning, select “Set advanced parameters…” before clicking Finish. In that case, a window for configuring additional scheduling options will open.
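For repeatable setups, the same daily task can also be created from the command line instead of the wizard. A hedged sketch using the schtasks tool (invoked here from Python for consistency with the other examples, though it can be run directly in a console):

```python
# Create the article's example task ("Test Scheduler" running C:\Scheduler\test.bat
# daily at 21:00) via the schtasks command-line tool.
import subprocess

subprocess.run(
    [
        "schtasks", "/Create",
        "/TN", "Test Scheduler",          # task name, as in the wizard example
        "/TR", r"C:\Scheduler\test.bat",  # program to run
        "/SC", "DAILY",                   # schedule type
        "/ST", "21:00",                   # start time
    ],
    check=True,  # raise if schtasks reports an error (e.g., insufficient rights)
)
```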
If you need to configure additional parameters of a scheduled task later, right-click the created task in the Scheduled Tasks folder in the Control Panel and select Properties. If the advanced settings are to be changed immediately after creating the task, check “Set advanced parameters…” in the wizard’s last window (the figure in the previous chapter) and click Finish. Either sequence of actions opens the same window.
This window contains three tabs with parameters for fine-tuning the task launch schedule. All settings located on them will be discussed in detail below.
The first tab, Task, whose appearance is shown in the figure at the very beginning of the article, contains basic information about the scheduled task:
- The path to the file in which the task settings are saved. In the figure, this path is C:\WINNT\Tasks\Test Scheduler.job
- The Run field specifies the path to the program that is launched on schedule. In this example, it is C:\Scheduler\test.bat
- The Browse button lets you change the path to the launched program (if its file has been moved) or select another program. The same can be done manually by editing the path and name of the executable file in the Run field above the Browse button.
- The next field, Working folder, contains the path to the folder where the scheduled program keeps its files and writes its reports, if no report path is set in the program itself. By changing this field’s value, you can redirect the output to a folder other than C:\Scheduler, as in the example.
- The Comment text field stores a description of the task. This is convenient when several people use the computer, so nobody has to puzzle out what the task was created for.
- The next field, User, contains the username on whose behalf the task is launched. This value was discussed in detail at the end of the previous chapter.
- The Set password button is used to change the password if the username was changed in the corresponding field. There is little practical use for this button, because when you try to save changes to the task (including the username), you will be prompted for the password anyway.
- The Enabled checkbox enables or disables the task. Clearing it keeps the task stored in the Scheduled Tasks folder but prevents it from running. This is useful for a task that is performed occasionally on a complex schedule: temporarily disabling it avoids having to delete it and create it again when it is next needed.
The next tab is Schedule:
This tab provides all the schedule-management options, which are much broader than those available in the New Task Wizard discussed in the previous chapter. Let's go through the settings in order:
- At the top of the tab, the current schedule of the task is displayed.
- The Assign task combo box lets you change the type of the startup schedule. All types were discussed in the previous chapter, so we will not dwell on them.
- The Start time field contains the time at which the task will run.
- The Advanced button opens the window shown below.
There are several additional schedule settings in this window, which are worth mentioning separately.
- Start date. Allows you to specify the date of the first run, after which the task will execute according to the schedule. Use this field when the task must start on a specific date rather than immediately after creation.
- End date. Allows you to specify the date of the last run; after this date the task will not be executed. To be able to specify an end date, check the corresponding box.
- The Repeat task group of fields. If you check the checkbox of the same name, you can configure a repetition interval for the task. For example, you can run a task every 30 minutes for 8 hours, or until a specific time. This is very convenient for certain event-monitoring problems: for example, checking every five minutes during the working day whether a file has appeared in a certain folder, and if it has, running a script that performs certain actions on the file.
- The Stop task checkbox ends a running task when its allotted time expires. Suppose the Run until field is set to 18:00 and the task takes half an hour: launched at 17:55, it would actually run until 18:25. If that is undesirable, check the Stop task box, and the task will be terminated at 18:00 no matter what.
- The next group of settings depends on the selected schedule type and is different for each type. All of these settings were covered in the previous chapter.
- The Show multiple schedules checkbox opens up the broadest possibilities for flexible scheduling of a single task. If you check it, a combo box appears at the top of the tab listing all schedules created for the task, along with Create and Delete buttons for managing them.
Creating several schedules can satisfy almost any requirement for when and how often a task runs. For example, you can configure the task to run on Monday at 19:00 every 10 minutes for 2 hours, on Tuesday at 20:00 every 30 minutes for an hour, on Wednesday at 14:00 every 5 minutes until 20:00, and so on. As is already becoming clear, to achieve such fine-grained control you create an additional schedule with the Create button. Each schedule you create is added to the combo box at the top of the tab. To configure or delete one of the schedules, select it from the list and then either configure it as described above or delete it with the corresponding button.
To illustrate the steps described, I created a schedule that will run the task on weekdays at 9:00 pm and on weekends at 9:00 am. To do this, I created two schedules:
The third and last tab, Settings, shown below, contains additional task settings.
- The "Delete task if not scheduled to run again" checkbox is intended for removing "one-time" tasks from the Scheduled Tasks folder in the Control Panel. When checked, the task is deleted if its schedule does not provide for any further runs.
- The Run no longer than field specifies the maximum time the task may run; execution is stopped by force if the task takes longer than the specified time.
- The Idle Time group of settings allows a task to start only if the computer has been idle for some time. This is useful for scheduling resource-intensive tasks that take up all the CPU time while they run, loading it to 100%.
- The first two checkboxes in the Power Management group specify whether a scheduled task should run on a laptop when it is on battery power. The third checkbox allows "waking" the computer from standby (the low-power mode in which almost all devices are turned off, yet the machine stays on, resumes within a couple of seconds, and keeps running all the applications that were open when it entered standby).
To save all the changes you have made, click OK; you will then be prompted to enter the password of the user whose name is shown on the first tab.
Be careful when entering the password. If it is entered incorrectly in both fields, no warning will be issued, but the scheduled task will not start at the specified time.
The following chapter provides examples of the most common schedules for running tasks with the Scheduler.
Now let's look at several scheduling options for the configured task. All the schedules run the test.bat file from C:\Scheduler. I first created a task using the wizard, and then right-clicked it in the Scheduler window and selected Properties.
I made no changes on the first tab, Task, so it will not figure in the descriptions of the schedules. The last tab, Settings, will also not appear in the examples, with the exception of one "one-time" task. The schedule options discussed below:
Run the task daily
The first example shows how to configure the task to run daily at 21:00. This is the simplest schedule, created by the wizard without any further adjustment. The Schedule tab in the Properties of the created task looks like this:
Such a schedule, in my opinion, does not require any additional comments. The task starts every day at 21:00.
Run the task every other day
This schedule differs from the previous one only in that the task runs every two days.
As you can see in the screenshot, the "Schedule by days" field is set to "Every 2 days", which makes the task run every two days. You can enter any value in this field.
Delayed daily task run
In this example, the task is scheduled to run daily, but its first run is postponed by 10 days.
As you can see, in Advanced Settings , the start date for the job is set to the 17th, while the job was created on the 6th. Thus, we created a task, but postponed the start of its daily launch by 10 days.
Run a task on a daily basis until a certain date and then delete the task from the Scheduler
With this schedule, the task is launched daily until a certain date. When the last run date is reached, the task is removed from the Scheduler .
This schedule runs the task every day from the 7th to the 15th, as indicated by the Advanced settings shown in the figure above. The Settings tab shows that the Delete task checkbox is set, so after its last run the task will be removed from the Scheduler.
Launching a task on different days of the week at different times
This example demonstrates how to configure a schedule that launches a task on different days of the week at different times.
The schedule implements the following behavior: the task runs on weekdays at 9:00 pm and on weekends at 9:00 am. On the Schedule tab, the Show multiple schedules checkbox was set, the working days were ticked, and the start time was set to 21:00. Then, using the Create button, a second schedule was created, its launch type was changed to Weekly, and the weekend days were ticked. Now the task runs at different times on different days.
You can create more schedules to fine-tune the start times on different days, up to seven schedules, one for each day of the week, each with its own time.
Run the task every minute during working hours on weekdays
This schedule runs the task every minute throughout the working day, and only on weekdays. Such a schedule is useful for checking, from a script, whether a file has appeared in a specific folder. For example, branch offices upload reports on the previous day's work to an FTP server at the central office at different times each day, and the reports should be automatically unzipped and imported into the corporate database. The Scheduler runs a script that checks whether a file has appeared in the specified folder; if it has, the script performs the necessary actions on it, and if not, the script simply exits.
The schedule starts every weekday at 8:00. The Advanced settings indicate that the task runs again and again, every minute, until 19:00. Thus, a file appearing in the folder waits no more than about a minute before it is processed.
Monthly task run
This example shows how to configure the task to run once a month. As a rule, such a frequency is needed for scripts that analyze the past month's logs, produce statistics, and move the logs to an archive.
The task is scheduled to start in the first minute of the new month. Since a month can have 28, 29, 30, or 31 days, rather than creating a separate schedule for each month, it is more logical to run the task on the first day of every month.
Schedules based on these examples can run a task exactly when, as often, and on the days it is needed. The Scheduler's settings are flexible enough to create an arbitrarily complex schedule, which lets you do without third-party utilities that may be unstable, consume system resources, or lack the required flexibility.
In the next chapter, I will briefly discuss how to test the created task to make sure that it runs smoothly the first time.
It is not enough to create a task and write a program or script that will be launched by the Scheduler . It is imperative to perform a test run of the task to make sure that the task works exactly as planned and that there are no problems when it starts. This is quite simple to do.
Right-click the created task in the Scheduler and select Run. The task will run immediately, regardless of its schedule.
The most common error when creating a task, one that keeps it from starting at the specified time or makes a test run fail, is an incorrectly entered password. Almost as often, a task fails to start because the account has an empty password. Windows 2000, unlike Windows XP, will run a task under an account with an empty password; in XP the task will not start. The solution is to set a password for the account and specify it in the task settings.
Another error can be caused by an incorrectly specified path to the program or script that the task launches. In particular, if the path contains spaces, it must be enclosed in quotes, for example: "C:\Program Files\My Scripts\test.bat".
Another launch problem can be caused by the Scheduler service not running. Note that if there are scheduled tasks, the startup mode of the Task Scheduler service should be Auto. If the startup mode is Manual, the task-creation wizard will still open without problems, since Windows starts the necessary service on its own when you open the wizard; but after a reboot the Scheduler service will not be started and the task will not be executed. How to check and set the startup mode of the Task Scheduler service to Auto was described above.
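If you prefer the command line, the startup mode can be set there as well; on Windows XP the Task Scheduler service's internal name is Schedule, so the commands would look roughly like this (note that sc requires the space after start=):

    sc config Schedule start= auto
    net start Schedule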
If the test launch succeeds, the value 0x0 will appear in the Last Result column of the main Scheduler window, indicating that the task ran successfully. If the launch failed for some reason, the Status column will read "Start failed". To find out why the task did not start, open the Scheduler log.
The figure shows the Advanced menu in the Scheduler. To open the log, select the bottom menu item, View log. An example of a logged error is shown below.
"Testing the Scheduler.job" (test.bat) 03/14/2004 20:51:20 ** ERROR **
The attempt to log on to the account associated with the task failed, therefore the task did not run.
The specific error is:
0x8007052e: Login failed: username or password not recognized.
Verify that the username and password are correct, and try again.
To produce this error, I deliberately entered a wrong user password in the task and started it with the Run command described above.
Thus, the Scheduler log helps you quickly find out why a task did not start.
The log is kept in the file SchedLgU.Txt, located in systemroot, i.e. the folder where Windows is installed. The file's encoding is Unicode.
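For example, it can be opened straight from the Run dialog or a command prompt:

    notepad %SystemRoot%\SchedLgU.Txt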
The Status field in the Scheduler can contain the values explained in the table below.
Empty | The task is not currently running, or it completed successfully |
Running | The task is currently running |
Skipped | One or more scheduled runs were skipped |
Start failed | The last attempt to start the task failed |
Successful test launches do not remove the need to monitor the task continuously. Once the task is in "production", do not forget to review the Scheduler log periodically. An even better approach is to have the task write its own log file and open that log when it finishes. For example, arriving at work in the morning, you will see an editor window with the log file open; this makes you look at the result involuntarily, while the absence of the window signals a problem either at launch or during execution, so you can fix it quickly.
It is often necessary to perform certain actions automatically on users' computers in the local network. An administrator can manage Scheduler tasks on users' computers remotely, over the network. The account used to manage the tasks must have Administrator rights on the user's computer.
To create, modify, or delete a task on a user's computer over the network, open that computer through Network Neighborhood.
Among the shared resources on the user's computer you will find Scheduled Tasks. Open this folder; an example window is shown in the following figure.
To create a new task on the user's computer, right-click and select New Scheduled Task from the menu, as shown in the image above. A new empty task will be created on the user's computer. The wizard will not start, because the task is being created over the network rather than locally. After entering a name for the new task, configure its properties in the window that opens when you right-click the task and select Properties.
When configuring the task, remember that it is being configured remotely, not locally.
All parameters of a remotely configured task are identical to those of a local task and were described above.
Windows XP and later include the schtasks command-line utility, which lets you manage scheduled tasks on computers in your local network. This utility is indispensable for managing tasks from scripts. You can get help on its options by typing
schtasks /?
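For instance, the daily-at-21:00 and monthly examples described above could be created, inspected, and test-run from the command line roughly as follows. The task name, path, and account here are placeholders, and the option syntax differs slightly between Windows versions, so check schtasks /? on your system:

    schtasks /create /tn "Testing the Scheduler" /tr "C:\Scheduler\test.bat" /sc daily /st 21:00 /ru MyUser /rp MyPassword
    schtasks /create /tn "Monthly report" /tr "C:\Scheduler\test.bat" /sc monthly /d 1 /st 00:01
    schtasks /query /fo list /v
    schtasks /run /tn "Testing the Scheduler"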
YARN Capacity Scheduler in Practice
Brief Introduction
The Capacity Scheduler is the scheduling mode most commonly used in industrial practice. Let's practice using it on a stand-alone machine today. I chose Hadoop version 3.1.2 and Spark 2.4.3.
Configuration
- yarn-site.xml
The key property here is yarn.resourcemanager.scheduler.class.
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>dc-sit-225</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
</property>
<property>
  <name>yarn.resourcemanager.webapp.address</name>
  <value>0.0.0.0:8081</value>
</property>
<!-- Available memory on each node, in MB; the default is 8 GB, set here to 18 GB -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>18432</value>
</property>
<!-- Minimum memory a single task can use; the default is 1024 MB -->
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>1024</value>
</property>
<!-- Maximum memory a single task can use; the default is 8192 MB -->
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>16384</value>
</property>
- capacity-scheduler.xml
We configure three queues under root: default, api, and dev. Note that the capacities of sibling queues must add up to 100.
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <property>
    <name>yarn.scheduler.capacity.root.queues</name>
    <value>default,api,dev</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.capacity</name>
    <value>100</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.acl_administer_queue</name>
    <value>root</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.acl_submit_applications</name>
    <value>root</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.default.capacity</name>
    <value>30</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.default.maximum-capacity</name>
    <value>35</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.api.capacity</name>
    <value>45</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.api.maximum-capacity</name>
    <value>50</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.dev.capacity</name>
    <value>25</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.dev.maximum-capacity</name>
    <value>30</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.api.acl_administer_queue</name>
    <value>root,hadoop1</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.api.acl_submit_applications</name>
    <value>root,hadoop1</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.dev.acl_administer_queue</name>
    <value>root,hadoop2</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.dev.acl_submit_applications</name>
    <value>root,hadoop2</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.resource-calculator</name>
    <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
  </property>
</configuration>
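If the ResourceManager is already running, changes to the queue configuration can usually be applied without a restart:

    yarn rmadmin -refreshQueues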
- sbin/start-yarn.sh and sbin/stop-yarn.sh
If you start YARN as root, add the following to the beginning of both scripts:
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root
Verification
Start YARN with sbin/start-yarn.sh, then use jps to check that the ResourceManager and NodeManager processes exist; if they don't, check the corresponding logs. Then open the YARN web interface (the default port is 8088; I configured 8081 here) and click Scheduler. If you see the three queues nested under root, the configuration is working.
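In shell form, the verification steps look roughly like this:

    sbin/start-yarn.sh
    jps                    # expect ResourceManager and NodeManager in the output
    yarn node -list -all   # the node should be listed as RUNNING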
After that you can submit work to a specific queue via spark-shell.
[root@dc-sit-225 spark-2.4.3]$ bin/spark-shell --master yarn-client --queue dev
Warning: Master yarn-client is deprecated since 2.0. Please use master "yarn" with specified deploy mode instead.
2020-04-01 16:20:13,878 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
2020-04-01 16:20:18,334 WARN yarn.Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
Spark context Web UI available at http://dc-sit-225:4040
Spark context available as 'sc' (master = yarn, app id = application_15857292_0002).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.4.3
      /_/

Using Scala version 2.12.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_171)
Type in expressions to have them evaluated.
Type :help for more information.

scala> :quit
[root@dc-sit-225 spark-2.4.3]$
Then check the web UI, and you will see the application running fine in the dev queue.
Exceptions
The following are the main exceptions I encountered in the process.
2020-04-01 15:09:25,370 INFO org.apache.hadoop.conf.Configuration: found resource capacity-scheduler.xml at file:/data/server/hadoop-3.1.2/etc/hadoop/capacity-scheduler.xml
2020-04-01 15:09:25,380 ERROR org.apache.hadoop.conf.Configuration: error parsing conf capacity-scheduler.xml
com.ctc.wstx.exc.WstxParsingException: Illegal processing instruction target ("xml"); xml (case insensitive) is reserved by the specs.
 at [row,col {unknown-source}]: [2,5]
    at com.ctc.wstx.sr.StreamScanner.constructWfcException(StreamScanner.java:621)
    at com.ctc.wstx.sr.StreamScanner.throwParseError(StreamScanner.java:491)
    at com.ctc.wstx.sr.BasicStreamReader.readPIPrimary(BasicStreamReader.java:4019)
    at com.ctc.wstx.sr.BasicStreamReader.nextFromProlog(BasicStreamReader.java:2141)
    at com.ctc.wstx.sr.BasicStreamReader.next(BasicStreamReader.java:1181)
    at org.apache.hadoop.conf.Configuration$Parser.parseNext(Configuration.java:3277)
    at org.apache.hadoop.conf.Configuration$Parser.parse(Configuration.java:3071)
    at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2964)
    at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2930)
    at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2805)
    at org.apache.hadoop.conf.Configuration.<init>(Configuration.java:822)
    at org.apache.hadoop.yarn.server.resourcemanager.reservation.ReservationSchedulerConfiguration.<init>(ReservationSchedulerConfiguration.java:64)
    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration.<init>(CapacitySchedulerConfiguration.java:374)
    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.conf.FileBasedCSConfigurationProvider.loadConfiguration(FileBasedCSConfigurationProvider.java:60)
The above happened because I copied the config from a web page, so it was probably polluted by odd space characters or tabs: the parser complains that the <?xml ...?> declaration is at row 2, column 5, i.e. not at the very start of the file, where it must be. I stripped the stray whitespace from each line of the XML and reformatted it by hand.
2020-04-01 16:14:03,503 ERROR org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Failed to initialize spark2_shuffle
java.lang.RuntimeException: No class defined for spark2_shuffle
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices.serviceInit(AuxServices.java:274)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
    at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.serviceInit(ContainerManagerImpl.java:318)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
    at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:477)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:933)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:1013)
This was caused by my yarn.nodemanager.aux-services property, which still listed spark2_shuffle. An earlier Hadoop setup required adding that entry; now only mapreduce_shuffle seems to be needed.
The third problem: after submitting a job, the shell hung at this point.
Warning: Master yarn-client is deprecated since 2.0. Please use master "yarn" with specified deploy mode instead.
2020-04-01 16:18:32,393 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
2020-04-01 16:18:36,924 WARN yarn.Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
The shell got stuck here after submitting to YARN, although the same job worked with Spark's own master, and neither the ResourceManager nor the Spark master produced any error logs. Looking for the cause, we found that the NodeManager had not started normally, so YARN had no nodes on which to place tasks: yarn node -list -all reported a total of 0 nodes. This is why it pays to verify after every step. The reason the NodeManager failed to start was the spark2_shuffle configuration described in exception 2.
Conclusion
The setup above uses multiple queues to achieve resource isolation. User-level isolation, however, also requires an authentication component such as Kerberos.
Configuration Help https://www.cnblogs.com/xiaodf/p/6266201.html#221
Task Scheduler! How to remove ads in the browser Chrome, Opera, Firefox, Yandex, Explorer
In this issue, I will tell you about one important thing that I did not mention in the article "How to remove ads in the browser Chrome, Opera, Firefox, Yandex, Explorer". If you read it, did everything step by step, and still get ads, then it is time for you to get acquainted with the Task Scheduler in Windows.
One of the main vehicles for showing you ads is the Task Scheduler, which is present in every Windows system and which the average user knows nothing about. It works very simply: you give it a program that must be launched at a certain time or on a certain event, and when that time or event arrives, the scheduler launches the program. In other words, it is like an alarm clock, only more advanced. Currently the Scheduler, rogue applications, and browser extensions are the three main tools for displaying ads on your computer.
Unfortunately, the Hitman and Malwarebytes programs, which I talked about in the previous article, do not check the list of programs in the Scheduler, so even if you have cleaned the system of malicious applications and browser extensions, sites with ads will keep opening without any problem.
Let’s get down to practice and do everything step by step.
1. The first thing to do is open the Task Scheduler. This is done simply: press the key combination Windows + R and in the Run window enter the command Taskschd.msc.
The Task Scheduler will open. There is also a less advanced way to call the Scheduler: in Windows 7, open the Start menu and type "Scheduler" in the search bar (the same works in Windows 8 and 10), then click the found application.
2. When the Scheduler starts, collapse the Task Status panel and expand the Active Tasks panel.
And now the most important thing: among the active tasks you need to find the one that launches ads on your computer. The list can be long, but don't be alarmed; in fact, you can check them all quickly.
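By the way, the same list can also be dumped to the console with the schtasks utility (Windows XP and later), which makes it easy to scan what each task actually runs; the exact output fields vary by Windows version:

    schtasks /query /fo list /v | more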
3. Checking the list of tasks. If you are an inexperienced user and do not know which programs are installed on your computer and which of them might be launched by the scheduler, start checking from the very first one. In my case, I know that Adobe Flash Player, Avast, Google, and SafeZone Avast are standard applications found on most computers, and ads are unlikely to come through them. But further down, the task top5newsorgenor, launched at regular intervals, is clearly suspicious, as is the next task, youfreenewsnetncomsm.
4. Once a suspicious task is found, click it; in the window that opens, with our task highlighted, go to the Actions section and see what action the task performs. If the action is, as in my example, a link to a site, then it simply opens advertising sites, for example a casino.
Another variant is an action that launches a browser, in my case Firefox. This should not be there either, since there is no sense in launching a browser at a fixed time. Moreover, in my case, if you look closely at the command that launches the browser, you will see that it is launched with an additional parameter, namely a link to a certain site.
If, for the sake of the example, I run such a task, Firefox opens and that site loads automatically; the sites behind such resources can change dynamically: now a casino, now girls, and so on.
5. Now that we have made sure that this particular task opens sites and shows ads, point at the task in the File column, press the right mouse button and choose Delete. The same can be done via the menu on the right: Selected item - Delete. The task will be deleted.
6. Click the arrow button in the upper right corner, "Go one step back". Now go through the entire list of tasks looking for similar links to a browser, to a site, or to suspicious applications (though I think that if you followed the article "How to remove ads in the browser" and cleaned the system with Hitman or Malwarebytes, no such applications remain in the system, only their leftover tasks). Pay special attention to tasks that have a time set in the "Next start time" column and in the Triggers column, where the execution time is also shown. Delete anything suspicious, namely links to sites and browser launches with additional parameters in the form of links. Do not delete browser-update tasks: they also launch the browser, but their additional parameters (everything after the executable path) contain no links to sites.
7. After clearing the list, click Refresh and go over the list again. If everything is clean, i.e. there are no strange tasks, close the Scheduler.
8. Restart your computer.
Now enjoy an ad-free Internet. And if something like this appears again after a while, remember the Task Scheduler and your computer will be fine.
Note also that if you only clean the Scheduler without removing the malicious applications and extensions themselves, the effect will be short-lived: after a restart, an ad-serving application left on the computer will simply register its task in the Scheduler again, and you will have to start over. How to remove ad applications and extensions is covered in the previous article, "How to remove ads in the browser".
That's all from me. Write in the comments how it went and what solutions you found; that will certainly make life easier for others facing a similar problem.
Thank you for your attention, I wish you all good luck and see you again.
Watch the video – http://vizivik.ru/planirovschik-zadaniy-kak-ubrat-reklamu-v-brauzere/
Fault-Tolerant Task Scheduler Architecture. A Yandex Talk
Yandex has tens of thousands of machines constantly loaded to the eyeballs with various computing tasks. Most of these computations are so-called batch workloads, usually operations in the MapReduce paradigm. We use our own system, YT, which provides distributed storage and an interface for launching distributed computations with arbitrary user code. In my talk, I discussed the problems that arise when you try to write software that schedules work across clusters of this many machines.
– Let’s first discuss what Yandex’s computational clusters are doing.
They digest a huge amount of data; tens of thousands of machines run non-stop. For example, search databases are built, each of which requires grinding through tens of petabytes of data, just so that we see fresh information when we use the search every day.
In total, on the order of an exabyte, a billion gigabytes, of historical data is stored. All this data must be stored and processed somewhere. Clearly, processing it takes a great deal of computing power, and for all of it to work and be usable, an appropriate infrastructure is needed.
I am working on a product called YT.
We have not talked much about this system at conferences, but we are now correcting that annoying oversight, so let me quickly bring you up to date. YT is an in-house system that combines many different products.
First of all, there is the distributed storage part, which holds this information. Its closest analogues in the outside world are products like HBase and ZooKeeper from the Apache stack.
Next, there is a computing framework that allows you to perform calculations on data in distributed storage, and to do this in a paradigm that is based on MapReduce. But it is clear that since 2004, when MapReduce was introduced, the industry has stepped forward, so we do not have MapReduce in the sense of the 2004 article, but a much more developed concept.A close equivalent from the outside world is Hadoop.
Next, we have horizontally scalable key-value storage that can serve real-time load. There is the ability to run code in a distributed environment, like YARN does, and the ability to run various analytical queries over the data in YT through a higher-level interface, for example a language very close to SQL. That product is called YQL, and we have talked about it before as well.
YT operates on fairly large clusters. For example, a typical large cluster contains tens of thousands of machines that directly store and process data. Schematically on the slide you can see an image of how our cluster is arranged.
All of this is controlled by service machines, which fall into two categories. First, masters, which directly manage the distributed storage, the part that in our terminology is called Cypress.
These machines store all the information needed to figure out where the various parts of the tables live. Second, schedulers; there are about ten of them too. Today I will mostly be talking about the schedulers.
What is the batch load that appears in the title of the report? It has the following properties.
First, it typically has high throughput: it can process data as fast as the available resources allow.
If you have twice the compute quota, then you expect your computation to be twice as fast as a first approximation. This is not a real-time load, but a load that can work for minutes, hours, days, perhaps weeks.
And of course, such a load does not arise because a user visits a web page, clicks a button, and waits for the result; nobody will wait an hour for an operation to compute just to get an answer.
Such computations are well described by the MapReduce concept, which I'm sure many of you know first-hand. MapReduce gained popularity precisely because it is so good at assembling batch computations out of primitive bricks, and such computations arise all the time in large companies, in smaller ones, and even in small products.
How to plan a batch load? What needs to be done to efficiently utilize the capacity of a data center, a cluster in which you have hardware, how to utilize them using a batch load?
I will illustrate what a typical batch workload looks like. It all starts with a number of tables scattered across dozens, hundreds, thousands of machines in our cluster. These tables are transformed into one another by primitive building blocks such as merge and sort operations.
The picture shows plausible, if not necessarily typical, times. Say, a single sort-type brick can take about an hour of wall-clock time while consuming about a hundred CPU-hours, because the operation runs with a high degree of parallelism. The next operation can take 20 minutes and also tens of CPU-hours, and so on. To burn CPU time at such a rate, all these computations must run in parallel; they must contain parallelism.
Let’s agree on terminology. I will speak in terms that are specific to our system. They are slightly different from those generally accepted in Hadoop, and perhaps it will be a little confusing. But I will now speak out all the terminology.
Let’s start by introducing the concept of an operation. An operation is a complete logical block that transforms a set of tables, some other set of tables, according to some principle. For example, it can be an operation of sorting by key, an operation like map – transforming lines, or some other operation that is in our model.
An operation converts whole tables and consists of small individual blocks. Such blocks are called jobs; each job is an independent part of an operation that processes its own portion of the input data and produces its own portion of the output data. A job is an entity that runs as a single process on a single machine in the cluster.
Accordingly, the output of the operation is the sum of the output of all jobs launched in this operation.
Schematically it looks like this. We have a scheduler process. He knows that he needs to start some kind of operation. Starting an operation that converts a certain input table to an output one, the scheduler splits it into a certain number of jobs.
Say, in the picture, it splits the operation into four jobs, each of which independently processes its own portion of the input data. The jobs then go to the compute nodes of the cluster and work independently of each other: they read their parts of the input table and produce the corresponding parts of the output table.
Let’s talk about the characteristic numbers associated with jobs. How long does a typical job run? It should work for about a minute. Why? Why, say, not one second? Why can’t it actually work for one second? Because in such a distributed environment, the question arises – how to deliver the code written by the user and to which he wants to process his data to the machine where the code will be executed? At the very least, it needs to be distributed somewhere.
There may be additional overheads associated with working with distributed storage. In practice, when it comes to batch workloads, jobs rarely work very quickly. They work on the order of a few minutes, and trying to speed them up further is pointless, because most of the time will be spent on overhead, and not on the useful work itself inside the job.
Accordingly, from a typical work time, about one minute, it turns out that a typical job manages to process about a gigabyte of information during its life – to drag it through itself and transform it.
A typical job uses only one core, although there are, of course, jobs that, for some reason, rely on multithreading within themselves and utilize more than one CPU core.
Let’s assume that the cluster, which I will be talking about today, has about several hundred thousand cores. From all this, the following entertaining arithmetic is formed.
If there are, say, a hundred thousand cores, and a typical job lives about one minute, then every second several thousand events of the form "some job has ended, so significant resources have been freed on this machine" occur (a hundred thousand jobs, each ending roughly once a minute, means on the order of a couple of thousand completions per second).
And of course we would like all the cores to be busy at any given moment. If the scheduler cannot keep up with the events that resources have been freed, and is late in saying what those cores should do next, they sit idle, the hardware in our data centers is underutilized, and we lose money.
That is, you can introduce an important requirement that we expect from the scheduler we are designing: it must be efficient, efficiently utilize resources in data centers.
Most of the time the scheduler's life looks like this: it orchestrates a wild number of events happening around it. Machines disappear from the cluster, individual jobs fail, and all of this happens with such intensity that reacting in time is genuinely non-trivial.
Let’s talk a little more about jobs. Jobs must satisfy the following properties:
- They must be stateless, in the sense that everything a job's logic needs to transform its data must be encoded in its input. It must not come from anywhere else.
– They must be deterministic – give the same result no matter how many times we run them.
– They shouldn’t have side effects.
The combination of these requirements allows us to run jobs as many times as we want, to run several copies of the same job in parallel, if for some reason we wanted to. And in general, it unties our hands to run jobs the way we want.
Let’s talk about another important requirement – the requirement for fault tolerance. What is the operation? These are some jobs that run roughly the same code and convert input tables into outputs. The operation can be long. Unlike a job, it can run for hours, days, and even weeks, perhaps even months.
Of course, while an operation is running, some trouble may befall the scheduler process.
For example, it may be taken down for maintenance: the scheduler is code like any other and needs to be updated periodically, as does the machine it runs on. Unplanned trouble can happen too: the machine it runs on may fail, or the process may crash due to a bug.
All of these situations happen, and when they do, we don't want an operation that has been running for a week to go down the drain, losing everything it has done, and to have to restart it from scratch. The requirement that the scheduler survive a crash, a shutdown, or a process switchover is what we call fault tolerance.
This is how it looks from the scheduler's point of view. Suppose two operations are running side by side; we will see quite a few of these pictures today. The individual horizontal segments are jobs launched as part of an operation. A job lasts on the order of a few minutes, as I already said, while the operations themselves can be quite long. Say the upper operation ran for about two hours.
The second one keeps running... Bam! A scheduler switchover occurs because the scheduler process has crashed. This can happen. By that time the second operation had been running for ten hours. We don't want everything it has computed to be lost. So the question is how to recover from this situation.
The third important requirement is that the scheduler be fair. I won't talk much about it today, but in broad terms: the scheduler hands out resources to consumers, and it must do so fairly. If some consumer is entitled to a larger quota or more resources in some terms, it should get more machine time for its code. As a rule, there are many more people wanting resources than there are resources, and to distribute them fairly, so that at any moment the desired distribution between consumers approximately holds, various sophisticated techniques are needed.
We use an algorithm that belongs to the fair share scheduling family. But, again, today I won’t talk much about it. I hope my colleagues who are doing this will also give a talk about this, stay tuned.
A picture about fairness:
Different consumers can express their wishes in completely different terms, which makes things truly difficult. One says: I want 40% of the entire cluster. Another: I want at least 50 thousand cores. A third: I have 150 video cards. Yet another: I am not entitled to anything, but I still want to compute something, please give me resources. It is not trivial.
So what do we want from the rest of our story? We want to build an efficient, fault-tolerant scheduler.
Let’s start discussing some model in which the scheduler will work.
First, let’s understand that he must remember about each operation. We will introduce the term “operation controller”, which describes all the state that the scheduler remembers about the operation.
First of all, the scheduler must remember what the operation is actually doing; in essence, the information about how the operation was started. The user passes this information to the scheduler when starting the operation.
This information contains what code needs to be run, which tables we want to process, which tables we want to receive in the output, say, the addresses of these tables in our storage. There are also settings related to a specific operation: if it is sorting, then by what key we sort our table.
The whole set of information that the user gives us to start an operation, I will call the specification of the operation.
The scheduler must remember which input data it has already processed and which has yet to be processed, which output data has already been generated by completed jobs, which jobs of the operation are running now, and if they are running, on which machines and over which data.
All of this information is part of the status associated with a specific operation.And we will call this state a controller. It is a data structure in memory of the scheduler. If the scheduler does not know all this information about the operation, then it will not be able to orchestrate this operation so that it will continue to work successfully, so that jobs are launched, so that progress occurs.
Moreover, this information is not static at all, it changes over time, and it is convenient to think about the aggregate of this information as about a state machine – about an automaton reacting to a certain influence from the outside world.
What kinds of external influences are there? For example, this: a cluster machine comes to the scheduler and says, "I have free resources: five CPU cores and 80 gigabytes of RAM. Would you like to start a job that can run under these conditions?" The scheduler: "Yes, I would. Here, run this binary, feed it such and such parts of such and such tables as input, and everything will be fine."
The information it gives in response is the specification of a particular job. Or it can reply: "No, you know, right now I have no useful work for just one core and one gigabyte of memory. Keep waiting."
There may be an event of the form: some job has finished. It may have finished because it crashed, or because it completed. In the latter case it probably also produced output, which is likewise part of the event.
It is also convenient to consider such an event: we have finished running all the necessary jobs, we no longer have any raw input data.In this case, you need to finalize the operation, collect output tables from all that the jobs have generated.
I also note that of these three events that I have outlined, two are initiated by the outside world in relation to the scheduler and come from the side of the node on which the jobs are launched.
The last event actually follows from the internal state of the scheduler. When the scheduler realizes that the operation has finished running, it triggers an event that the operation ends, comes to a logical end.
If we want an illustration, here it is. We have a controller whose input consists of two squares: red and blue. Then events arrive: for example, an event that we want to schedule a job (we will keep saying "schedule", from the English). We say: okay, let's give the blue square to job A. Then comes the event that job A has finished, having transformed the blue square into a blue triangle. In the same way we schedule a job to process the red square, and the red square turns into a red circle. Finally the operation ends; the controller has passed through a sequence of states, each obtained under the influence of the next event.
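Purely as an illustration (this is not YT's code; the names and the dict-based job specification are invented for the sketch), such a controller state machine might look like this in Python:

    from dataclasses import dataclass, field

    @dataclass
    class OperationController:
        """Sketch of an operation controller reacting to external events."""
        spec: dict                                    # how the operation was started
        pending: list = field(default_factory=list)   # input chunks not yet processed
        running: dict = field(default_factory=dict)   # job_id -> chunk being processed
        outputs: list = field(default_factory=list)   # output produced by finished jobs

        def on_schedule_job(self, job_id):
            # A node offered free resources: hand out the next unprocessed chunk.
            if not self.pending:
                return None                           # no useful work right now
            chunk = self.pending.pop()
            self.running[job_id] = chunk
            return {"code": self.spec["code"], "input": chunk}  # the job specification

        def on_job_finished(self, job_id, output):
            # A job completed and contributed its portion of the output.
            self.running.pop(job_id, None)
            self.outputs.append(output)

        def is_complete(self):
            # The operation ends when no raw input remains and nothing is running.
            return not self.pending and not self.running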
This model has one very nice property: controllers can live in several different schedulers, each of which is equal to all others. I can spread the action controllers evenly across, say, the five machines running the scheduler process.
I will have five schedulers, each independently handling events coming to it from the outside world. If for some reason they cannot keep up, failing the efficiency requirement, I can use not five machines but ten.
I get horizontal scalability, which is a very nice property because it solves the problem of scheduling efficiency. Jobs from different schedulers can run on the same cluster machines, and those machines report their events to whichever scheduler process placed the jobs on them.
Okay, let’s try to imagine the most basic scheduler implementation. It may sound like this: let’s keep the state of each operation in RAM and react to these very external influences.
Of course, this implementation does not stand up to criticism, because it does not solve the problem of fault tolerance in any way. Suppose the scheduler crashes: RAM is not the most reliable storage.Then I will lose all my progress information. Even worse, I will lose even knowledge of what operations ran in the previous incarnation of the scheduler. I won’t even be able to restart them.
From the user's point of view it would look like this: he started an operation, came back the next day, and found that the scheduler had no idea his operation existed, had not finished it, and did not even know it had ever been running. Unpleasant. Let's fight this problem.
We need somewhere to keep this state. For example, let's first save the specification of the operation somewhere, say to disk, which is somewhat more reliable storage than RAM. But we would still have to deal with disks failing too: if the entire machine hosting the scheduler is lost, and not just the process, we lose the disk on that machine as well.
To solve these problems, we use another part of our system that I started with. It’s called Cypress or Cypress.This is reliable distributed storage, into which we add part of the scheduler state. Let’s keep the specification of the operation in Cypress. If we do this, then we can implement the following approach.
When a new incarnation of the scheduler wakes up, it looks around, sees which operations were running, reads their specifications from Cypress, and starts executing them from scratch. That is already something: at least in theory, we can eventually finish the operation.
In the picture it looks like this. At the start of the operation we write its specification into Cypress. Jobs begin to run... Bam! The scheduler switches over.
A new incarnation of the scheduler wakes up and reads the specification of the operation that existed. It learns: aha, there was such and such an operation. Unfortunately, I don't know what my previous incarnation managed to do, but I can start doing the same thing.
It then runs the same jobs in the same order, and this time they succeed.
This solution works after a fashion, but it is still impractical: if on every scheduler switchover we lose everything and have to recompute what we may have already spent hours or days on, that is unpleasant.
From real practice: the characteristic lifetime of a scheduler process is about ten days. Over the span of, say, a month, we will either update it, or it will crash, or someone will bring it down.
Something must be done, because some operations run longer than ten days. An operation that runs for a month, and such operations exist, risks finding itself in a situation reminiscent of Groundhog Day: it runs, a switchover happens, it says "okay, I'll start over", runs again, another switchover. It will never succeed; it will run forever.
Solution needed. And the solution in our case is to use snapshotting.
We will periodically save more than just the specification: we will save the entire controller state to a safe place.
The general idea is clear. There is the controller, the data structure we described a few slides earlier. Let's take all of its data, the specification and everything known about the remaining input and the produced output, serialize it, say as a simple binary dump, and put it in Cypress.
Excellent. If a switch now occurs and a new incarnation of the scheduler wakes up, it will be able to read this snapshot, pick up exactly the same copy of all data structures at the time the snapshot was created, and start reacting from this copy.
On this path, there is a difficult technical question: how will these snapshots appear, at what moments will they be written? The controller constantly senses events. If this is a large operation, tens or hundreds of thousands of jobs, then the controller will undergo changes many times every second under the influence of events that the job has ended or needs to be started.
If we try to write snapshots in parallel with applying these changes, we will be reading a constantly changing data structure. Doing that from another thread is very risky: at best we get an inconsistent state, and at worst an invalid memory access that crashes the scheduler process.
We need to think of something.
I will add that for large operations the size of this data structure in the scheduler memory can also be quite large, on the order of a gigabyte or ten gigabytes.
How do you take and serialize something of such a large size, if it is constantly changing? We need an original idea.If you do this by simply stopping all changes from being accepted for a while, it will somehow work, but it will not be very pleasant for the user. Why? Let’s take a look at the picture.
This is the controller's lifeline. It processes events one after another, responds to attempts to schedule or finish jobs, records which parts of the output a job has generated, and says: okay.
And then we’re like, okay, we want to start writing a snapshot. This means that we no longer accept any changes.
During that time requests keep arriving, and we have to answer: sorry, I am not working right now, I am writing a snapshot, I can't give you a job. And no, the job you finished, I don't remember it either. Come back tomorrow.
This situation can last quite a while: writing, say, ten gigabytes of state over the network to distributed storage somewhere far away may take 15-20 minutes. All that time the controller of this operation rejects every attempt to interact with it and launches no new jobs. Standing idle for that long is unacceptable, and when we finally unfreeze and resume reacting as before, 10-15 minutes of live time have been lost. (…)
The fork system call is the kind of black magic that lets us clone a process while preserving the parent's memory state in the child, unchanged: when the child reads its memory, it does not see the changes that happen later in the parent process.
Also nice that fork is a pretty fast system call.Let’s say you don’t need much more than 10 seconds of real time to fork a process that uses 100 gigabytes of RAM, because fork is implemented through the copy-on-write concept.
If you use fork skillfully, you can devise a snapshotting scheme that does not require freezing for long or rejecting requests for long.
Let's do this: when we want to build a snapshot, we briefly freeze all the controllers and accept no changes, but only for a very short time. We call fork, which completes in literally ten seconds, after which the parent process goes: aha, the fork is done, I unfreeze all the controllers, accept changes again, schedule new jobs, react to finished ones. Everything is fine.
Meanwhile the newly born child says: aha, I have the parent's state at some point in time, and it is a consistent state, because no controller changes were in progress at that moment. Let me walk over all these controllers and write their snapshots wherever they need to go, taking as long as I like, say, 25 minutes for all these big controllers. The child writes them out and then exits. Chronologically, it looks like this.
The idle time of the main process is literally the 10 seconds of the fork. The forked process then writes to Cypress for 10-15-20 minutes.
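To make the scheme concrete, here is a minimal sketch of the freeze-fork-unfreeze cycle. The helper names (FreezeAllControllers, WriteSnapshotsToStorage, and so on) are illustrative placeholders, not the actual YT scheduler code:

```cpp
// Sketch of fork-based snapshotting: freeze briefly, fork, let the
// child serialize at its leisure while the parent resumes immediately.
#include <sys/types.h>
#include <unistd.h>
#include <cstdio>

// Placeholders for the real controller machinery (assumptions).
void FreezeAllControllers()    { /* stop applying job events */ }
void UnfreezeAllControllers()  { /* resume applying job events */ }
void WriteSnapshotsToStorage() { /* serialize every controller; may take minutes */ }

void BuildSnapshot() {
    // Freeze so the memory image is consistent at the fork point.
    FreezeAllControllers();
    pid_t pid = fork();  // copy-on-write clone: fast even for ~100 GB of RAM
    if (pid < 0) {
        std::perror("fork");
        UnfreezeAllControllers();
        return;
    }
    if (pid == 0) {
        // Child: sees the parent's memory exactly as it was at fork time
        // and can spend 10-25 minutes writing snapshots undisturbed.
        WriteSnapshotsToStorage();
        _exit(0);  // _exit: skip the parent's atexit handlers
    }
    // Parent: the only pause was the fork itself; get back to work.
    UnfreezeAllControllers();
}

int main() {
    BuildSnapshot();
}
```

The design point is that all slow I/O happens in the child, so the parent's pause is bounded by the cost of the fork alone.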
What does such a solution cost? It has the following properties.
The downtime, as I said, is small. The scheme is repeatable: the same process can be forked any number of times, for example starting the next fork once the previous one has finished. With this scheme, every controller gets a fresh snapshot roughly each time we do the next fork.
The price of this solution is up to double the memory consumption. In the worst case, copy-on-write, which copies memory pages when one of the processes touches them while the old state must remain visible in the other process, will duplicate the entire memory of our scheduler. This must be kept in mind.
Ok, we learned how to write snapshots. Let’s understand what the recovery logic will now be. It is not very difficult, but there are a number of points in it that are worth talking about.
If I have a snapshot, I can probably recover from it? There is a risk that I have read the snapshot but cannot recover from it, because it was written by an old version of the code. That is unpleasant: in such a situation I really cannot recover. We must strive to make such situations rare, to preserve snapshot compatibility whenever possible and not break it with minor updates. Because if there is no compatibility, there is nothing to do but perform a clean start and begin from scratch.
If the operation has no snapshots at all, I likewise can do nothing except a clean start. But that is not very scary: it means the operation had been running for no more than 15-20 minutes so far, since that is how regularly snapshots are taken.
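Here is a minimal sketch of this recovery decision. The names (LoadLatestSnapshot, CleanStart, the version constant) are hypothetical stand-ins for the real machinery:

```cpp
// Sketch of the recovery decision: resume from a compatible snapshot,
// otherwise fall back to a clean start.
#include <optional>
#include <string>

constexpr int kCurrentFormatVersion = 42;  // illustrative value

struct Snapshot {
    int formatVersion = 0;
    std::string state;  // serialized controller state (simplified)
};

// Placeholder: fetch the newest snapshot from storage, if one exists.
std::optional<Snapshot> LoadLatestSnapshot() { return std::nullopt; }

// Placeholders for the two recovery paths.
void RestoreControllersFrom(const Snapshot&) { /* rebuild in-memory state */ }
void CleanStart() { /* restart the operation from scratch */ }

void RecoverOperation() {
    auto snapshot = LoadLatestSnapshot();
    if (snapshot && snapshot->formatVersion == kCurrentFormatVersion) {
        // Resume from the snapshot; only progress made after it is lost.
        RestoreControllersFrom(*snapshot);
    } else {
        // No snapshot, or one written by an incompatible code version:
        // nothing to do but start over.
        CleanStart();
    }
}

int main() {
    RecoverOperation();
}
```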
What do we lose when we wake up from a snapshot? The snapshot was taken at some point in the past, so we lose the most recent developments. Let's work out what we know about the jobs.
A job that ended before the last snapshot we definitely will not lose. That is wonderful. If a job ended after the last snapshot, we have definitely lost the information about the output it generated. We do not yet know how to deal with this problem, so let's just ignore such jobs: we will assume they never existed and will have to restart and recompute them.
Only completed jobs are captured in the snapshot, and I argue that this is already a pretty good solution.
In the picture it looks like this. There is the moment when I took a snapshot. By then, some jobs had finished; I do not need to restart those after the scheduler comes back up, because I know everything about them. And, unfortunately, I lose the jobs that had not completed by the last snapshot, regardless of whether they managed to finish before the scheduler crashed or died along with it.
But there are not very many such jobs. If they are fairly short, then in this picture I lose a segment of about 20 minutes of real progress. If the operation is long, 20 minutes of progress is not much for it; those jobs can simply be replayed.
What happens if the jobs are long? Long jobs are sometimes unavoidable: a job may, for example, have a very expensive setup phase that cannot be sped up by splitting the work in two, a fixed additive cost of running the job. If such a job exists and runs for an hour or two, then, unfortunately, we lose a lot. The picture looks like this.
Even though we took snapshots regularly, every 20 minutes, we will still lose roughly the duration of the job when the switch takes place, about an hour in this picture, because I have to replay all those jobs.
So long jobs are bad, but they do occur; that is the reality we have to face. For example, machine learning tasks do not parallelize very well, and operations are periodically launched in YT that contain two-hour or five-hour jobs simply running some process: YT is used as a place to run that kind of thing. What to do?
Let's try to recover something about running jobs from the snapshot as well.
We have a number of problems. Suppose I have just woken up from a snapshot and need to find out something about the jobs that are running now. I can ask the nodes to tell me that certain jobs are running on them by periodically sending me heartbeats. This is a half-hearted solution. If an event arrives saying a job is running, great: I know everything about it and have picked it up. But if it does not arrive, I am left in confusion: has this job already finished? Has it failed, or is it still running and the event simply has not reached me yet? What to do?
In the picture: suppose there were three jobs, one of which failed, the second finished and generated some output, and the third was still running at the moment of the switch. Waking up in a new incarnation, I learn something about the third job: it will come to me and say, I am here. But I know nothing about the first and second jobs, and, worst of all, I lose the output that the second job managed to generate. It is not in my snapshot, and it is no longer on the node either, so where would I get it from?
The solution suggests itself. Let’s ask the nodes to remember the part of the state that I could potentially lose.
Let's ask them to hold on to the "job ended" event for as long as it makes sense, that is, to hold on to everything that I could lose. And I, as the scheduler, will periodically ask them: listen, I just woke up and forgot everything that happened in the last 15 minutes. Please remind me of all the "job ended" events I might have missed.
How long must a node keep such information? The claim is: until we have reliably remembered it ourselves. We reliably remember something once it gets into a snapshot. If a snapshot contains the event that a job has ended, the node can safely forget that event, because as the scheduler I will never forget it again.
Wonderful. Let's introduce another call, from the scheduler to the node, saying: you may forget about this job. We will use it to build this scheme. Let's look at the picture.
I have a job. It ran and finished. At this moment there is no information in any snapshot that the job has ended, so the node remembers the job and keeps the event it sent to the scheduler for a while longer. Then the scheduler writes a snapshot: now this event can no longer be lost. Finally the scheduler says: done, you may forget about this job, and the node throws the job out.
In the bad case, when a switch occurs, the picture is a little more complicated. The job has ended, and a switch is in progress. A new incarnation of the scheduler wakes up. From its point of view the job is still running, because the snapshot it woke up from was taken while the job was running.
The scheduler asks the node: listen, what happened to this job? The node replies: actually, it has already finished. The scheduler is fine with that. The node keeps remembering the job, because you never know, the scheduler might crash again. Eventually the scheduler writes another snapshot, and the job is flushed from the node.
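Here is a minimal sketch of the node-side bookkeeping this protocol implies. The names (NodeJobStore, ReleaseJob) are assumptions for illustration, not the actual scheduler-node API:

```cpp
// Sketch of node-side retention: keep every "job finished" event until
// the scheduler confirms it has been persisted in a snapshot.
#include <string>
#include <unordered_map>
#include <vector>

struct FinishedJob {
    std::string jobId;
    std::string output;  // what the job produced (simplified)
};

class NodeJobStore {
public:
    // A job finished: report it to the scheduler, but keep the event,
    // because the scheduler may crash before persisting it.
    void OnJobFinished(FinishedJob job) {
        auto id = job.jobId;
        retained_[id] = std::move(job);
        // ... also send a "job finished" heartbeat to the scheduler ...
    }

    // A freshly recovered scheduler asks: remind me of every "job ended"
    // event I might have missed since my last snapshot.
    std::vector<FinishedJob> GetRetainedJobs() const {
        std::vector<FinishedJob> result;
        for (const auto& [id, job] : retained_) result.push_back(job);
        return result;
    }

    // The scheduler's snapshot now contains this event, so it will never
    // be forgotten again; the node may safely drop it.
    void ReleaseJob(const std::string& jobId) {
        retained_.erase(jobId);
    }

private:
    std::unordered_map<std::string, FinishedJob> retained_;
};

int main() {
    NodeJobStore store;
    store.OnJobFinished({"job-1", "part-0"});
    // After the next snapshot, the scheduler tells the node to forget it.
    store.ReleaseJob("job-1");
}
```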
We use this solution now. Let me give you some numbers that justify the approach.
We save on the order of hundreds of thousands of CPU cores' worth of work per week on large clusters just because we take snapshots, and that really adds up. But this is about utilization. There is one more point that directly concerns how the user sees all of this.
Back to the person running a machine learning task, which typically looks like a single job running for several hours. Let's look at such operations and pick out those that happened to overlap a scheduler switch. I claim there are several tens of thousands of such operations per month, and on average each saves seven hours at a switch, because we are able to pick their jobs back up.
If we did not do this, we would on average spend seven hours more on each such operation, simply because the operation ran for ten hours and the scheduler switch happened at the seventh hour of its life.
Of course, people who sit all day waiting for their operation to finish are very glad that we save them these hours.
What can be done next? There are places where this scheme could be improved. One could try to restore not only the jobs that were running as of the last snapshot, but also the jobs launched after it.
This is difficult, because it requires reconstructing how the scheduler's state changed after the last snapshot. The controller is a rather complex structure. I described it in simplified terms, as if there were only two kinds of influence and it held nothing but a list of jobs; in reality it is, of course, much more complicated. And reconstructing after the fact, knowing only that my previous incarnation launched such-and-such jobs, exactly what sequence of mutations it performed, is very difficult. We do not know how yet; we are thinking about it.
Another way looks more promising to us: the scheduler's entire state could live in persistent storage. Instead of taking snapshots periodically, let the scheduler's state reside in persistent storage at all times. If this scheduler dies, another, essentially stateless, scheduler takes its place and, simply looking at the state its predecessor worked with, continues to modify it.
What can serve as that storage? Cypress cannot: unfortunately, it is not designed for this. We have a somewhat different technology, that same horizontally scalable key-value storage, which we also have not talked about; I hope my colleagues will eventually give a talk on it. Let me show a final picture of how this will look in the bright future.
And that is all. Thank you all for your attention.
windows-10 – Failed to start Task Scheduler. Additional data: Error value: 2147943726
I am using Windows 10 Task Scheduler to run tasks that require my personal user account (I must use my own user and not the system user due to permission issues; I am part of an organization). Everything worked fine on Windows 7 computers, but after we upgraded to Windows 10, I cannot run tasks without using the System user (which, as mentioned, does not work for me because of permissions). I am getting the following error:
Additional Data: Error Value: 2147943726
Everything I found on the internet was advice to use the system user, and nothing else 🙁
Please save my day.
Here is a picture of the settings I want to change.
misha312
3 June 2017 at 11:35 pm
9 replies
Best Answer
I got the same problem today: (HRESULT) 0x8007052e (2147943726), "unknown user name or bad password".
My solution was to reassign the user via the Change User or Group button, so the task picks up the most recent Active Directory user information.
Then I could run the task again…
As a best practice, you can use an "application" (service) account instead of a regular user account, which changes more often.
If you use your own account, its password may change every few days… and you will need to "fix" the task again…
An "application" account's password changes less often than a regular user's…
This can be done on the General tab: click Change User or Group, assign the application account, then OK.
JWBG
28 Feb 2018 at 22:20
I realized that the error was related to the password expiration policy. The scheduler on that PC stayed in a kind of "frozen" state until a new password was set. The problem was resolved as soon as the system accepted the new password.
I highly recommend making any small change to the task so that the scheduler prompts for the new password and updates the task.
Jamal
11 March 2018 at 01:42
When I selected "Run whether user is logged on or not" on Windows Server 2019, I received an error message:
An error occurred for the Import Dealer Portal data task. Error message: The following error message was reported: 2147943712
There is a Group Policy setting that blocks this; you can disable it with the following steps:
- Start > Run > secpol.msc
- Security Settings > Local Policies > Security Options > Network access: Do not allow storage of passwords and credentials for network authentication
- Set this policy to Disabled

You can now save the scheduled task.
Matt kemp
13 May 2020 at 02:44
We had the same problem when cloning machines from a Windows 2012 server to a VMware ESX server. The clone/deploy script used sysprep to make each machine unique, and in the process the users assigned to the scheduled task got broken. Our solution was to re-create the task via a batch file at machine startup:
REM Delete the task if it already exists:
SCHTASKS /Delete /TN "NameOfScheduledTask" /F
REM Create a task to run every 5 minutes, without a stored password (/NP):
SCHTASKS /Create /TN "NameOfScheduledTask" /SC MINUTE /MO 5 /TR "some command for task" /NP
Florian straub
28 May 2019 at 13:33
I had the same problem with Windows Task Scheduler.
The failure was caused by a recent change to the password of the user account that was configured when the task was created.
Solution:
- Go to task properties
- On the General tab, click Change user or group …
- Enter the username in the Enter the object name to select field
- This will ask for authentication, provide your credentials
That's it!
DJo
28 Feb 2019 at 11:07
I know this is a late answer. I had the same problem today with a scheduled task I had created a long time ago, which stopped running a week ago. It turns out I had changed the password for my user account, and that was the problem. As soon as I went back into the task and saved it, I was prompted to enter the password again.
user1620090
29 Jan 2019 at 15:20
After reading this post, this is what worked for me. Go to the task properties. On the General tab, at the bottom of the window, you will see "Configure for:"; change it to the system you are using (in my case Server 2012 R2), click "OK", and enter your password.
Jesse Jalbert
11 March 2019 at 16:03
You can also simply edit the properties of each task (you do not need to change anything), click OK, and you will be prompted for the new password. Unfortunately, I have not seen a way to do them all at once.
dave
18 Jan 2018 at 13:49
I had to select the Do not store password option. The task will only have access to local computer resources.