How does True Comp Duplicator work in After Effects? What are the key features of True Comp Duplicator v3.9.14? How can True Comp Duplicator improve workflow efficiency in After Effects projects?
Understanding the Basics of Composition Duplication in After Effects
Duplicating compositions in Adobe After Effects is a fundamental skill for any motion graphics artist or video editor. However, the process isn’t always as straightforward as it might seem. To truly understand how composition duplication works, we need to delve into the relationship between the timeline and the Project Panel.
In After Effects, everything in the timeline, except for shape and text layers, originates from the Project Panel. This means you cannot have any elements in your timeline unless they exist as assets in the Project Panel. When you duplicate an item in the timeline, you’re not creating a new asset in the Project Panel; you’re simply adding another instance of the same asset to your timeline.
This principle applies to all types of assets, including compositions. Duplicating a composition in the timeline doesn’t create a unique copy of that composition. Instead, it creates another reference to the same composition, similar to how duplicating footage in the timeline gives you another instance of the same footage.
Creating Unique Composition Copies
To create a truly unique copy of a composition, you must duplicate it in the Project Panel. This action creates a new, independent composition that can be modified without affecting the original or other instances of the composition in your project.
For replacing footage or nested compositions (pre-comps) in the timeline, After Effects provides a specific workflow. Select the layer you want to replace in the Timeline, hold down the Alt/Option key, select the replacement asset in the Project Panel, and drag it to the timeline. This method ensures that you’re working with the correct assets and maintaining the integrity of your project structure.
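For scripting-minded users, the same idea can be expressed in a few lines of ExtendScript. The sketch below is only an illustration of the manual workflow described above (the undo-group label and the " copy" suffix are arbitrary choices, not part of any particular tool): it duplicates the source comp of a selected precomp layer in the Project Panel and then relinks that layer to the fresh copy, the scripted equivalent of duplicating in the Project Panel and Alt/Option-drag replacing.

app.beginUndoGroup("Make selected precomp unique");

var comp = app.project.activeItem;                 // comp currently open in the timeline
if (comp instanceof CompItem && comp.selectedLayers.length > 0) {
    var layer = comp.selectedLayers[0];
    if (layer.source instanceof CompItem) {        // only precomp layers qualify
        var copy = layer.source.duplicate();       // unique copy appears in the Project Panel
        copy.name = layer.source.name + " copy";   // arbitrary naming choice
        layer.replaceSource(copy, false);          // same effect as the Alt/Option-drag replace
    }
}

app.endUndoGroup();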
Introducing True Comp Duplicator: A Game-Changing After Effects Script
While the manual process of duplicating compositions can be time-consuming and prone to errors, especially in complex projects, the True Comp Duplicator script offers a powerful solution. This script, available through aescripts + aeplugins, streamlines the composition duplication process, making it faster and more efficient.
Key Features of True Comp Duplicator v3.9.14
- Complete duplication of comp hierarchies, including sub-comps
- Intelligent handling of multiple comp instances
- Preservation of folder hierarchies in the Project Panel
- Updated graphical user interface for improved usability
- Expression updating to maintain functionality in duplicated comps
- Option for creating multiple copies simultaneously
- Depth limit control for managing duplication scope
- Ability to duplicate associated footage items
- Include/Exclude filters with regex support for precise control
- Improved naming conventions for duplicated elements
- Built-in help system for user guidance
- Compatibility with After Effects CS6 and later versions
How True Comp Duplicator Enhances Workflow Efficiency
True Comp Duplicator significantly improves workflow efficiency in After Effects projects by automating and streamlining the composition duplication process. But how exactly does it achieve this?
Automated Hierarchy Duplication
One of the most powerful features of True Comp Duplicator is its ability to duplicate entire composition hierarchies, including all sub-compositions. This automation saves countless hours that would otherwise be spent manually recreating complex project structures.
When duplicating a composition that’s used multiple times within a project, True Comp Duplicator intelligently creates only one duplicate. All remaining references are then updated to point to this new duplicate, maintaining project integrity while reducing redundancy.
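Conceptually, this single-duplicate behavior is a recursive walk with a cache. The ExtendScript sketch below is not the script's actual source, just a minimal illustration of the idea: each original comp is duplicated once, remembered by its item id, and every precomp layer in the copies is repointed at the cached duplicate.

function duplicateHierarchy(comp, cache) {
    if (cache[comp.id]) return cache[comp.id];          // already duplicated: reuse the one copy

    var copy = comp.duplicate();
    cache[comp.id] = copy;

    // Walk the duplicate's layers (1-indexed) and relink precomp layers
    // to duplicates of their own sources.
    for (var i = 1; i <= copy.numLayers; i++) {
        var layer = copy.layer(i);
        if (layer.source instanceof CompItem) {
            layer.replaceSource(duplicateHierarchy(layer.source, cache), true);
        }
    }
    return copy;
}

// Usage: duplicate the comp currently open in the timeline and everything below it.
var root = app.project.activeItem;
if (root instanceof CompItem) duplicateHierarchy(root, {});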
Preserving Project Organization
Maintaining a clean and organized project panel is crucial for efficient workflow, especially in large-scale projects. True Comp Duplicator respects existing folder hierarchies, either preserving them or duplicating them based on user preference. This feature ensures that your project remains organized even after extensive duplication operations.
Advanced Features for Precision Control
True Comp Duplicator v3.9.14 introduces several advanced features that provide users with unprecedented control over the duplication process.
Depth Limit and Multiple Copies
The depth limit feature allows users to control how deep into the composition hierarchy the duplication process should go. This is particularly useful for complex projects where you may only need to duplicate certain levels of nested compositions.
Additionally, the ability to create multiple copies simultaneously can be a significant time-saver when working on projects that require numerous variations of a composition.
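To make those two options concrete, here is a hypothetical variation of the same recursive idea (not True Comp Duplicator's own code) with a depth parameter that stops relinking below a chosen level, plus a loop that produces several independent copies; the depth of 2 and the count of 3 are arbitrary.

function duplicateToDepth(comp, depth) {
    var copy = comp.duplicate();
    if (depth <= 0) return copy;                        // past the limit: keep sharing sub-comps
    for (var i = 1; i <= copy.numLayers; i++) {
        var layer = copy.layer(i);
        if (layer.source instanceof CompItem) {
            layer.replaceSource(duplicateToDepth(layer.source, depth - 1), true);
        }
    }
    return copy;
}

var master = app.project.activeItem;
if (master instanceof CompItem) {
    for (var n = 1; n <= 3; n++) {                      // three variations of the master comp
        duplicateToDepth(master, 2).name = master.name + " v" + n;
    }
}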
Include/Exclude Filters
The include/exclude filter feature, now with regex support, provides precise control over which elements are duplicated. This allows for highly targeted duplication operations, ensuring that only the necessary components are replicated.
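As a rough picture of how such a filter might behave (the patterns and helper below are hypothetical, not the script's actual interface), an include regex keeps only matching comp names and an exclude regex then knocks out unwanted ones:

function passesFilter(name, includePattern, excludePattern) {
    if (includePattern && !includePattern.test(name)) return false;
    if (excludePattern && excludePattern.test(name)) return false;
    return true;
}

// Example: duplicate only comps whose names start with "Scene_", skipping "_ref" comps.
var include = /^Scene_/;
var exclude = /_ref$/;

// Collect matches first so the new duplicates are not re-processed mid-loop.
var matches = [];
for (var i = 1; i <= app.project.numItems; i++) {
    var item = app.project.item(i);
    if (item instanceof CompItem && passesFilter(item.name, include, exclude)) {
        matches.push(item);
    }
}
for (var j = 0; j < matches.length; j++) {
    matches[j].duplicate();
}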
Expression Updating and Footage Duplication
Expressions play a crucial role in many After Effects projects, automating animations and creating dynamic relationships between elements. How does True Comp Duplicator handle expressions during the duplication process?
True Comp Duplicator includes an intelligent expression updating mechanism. This feature ensures that expressions in duplicated compositions continue to function correctly, maintaining the intended behavior of your animations and effects.
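One plausible way to picture such a mechanism (a sketch only, under the assumption that expressions reference comps by name, e.g. comp("Master Comp")): walk every property of every layer in the duplicate and rewrite references to the old comp name so they point at the new one.

function retargetExpressions(group, oldName, newName) {
    for (var i = 1; i <= group.numProperties; i++) {
        var prop = group.property(i);
        if (prop.propertyType === PropertyType.PROPERTY) {
            if (prop.canSetExpression && prop.expression !== "") {
                // Swap comp("Old Name") references for comp("New Name").
                prop.expression = prop.expression
                    .split('comp("' + oldName + '")')
                    .join('comp("' + newName + '")');
            }
        } else {
            retargetExpressions(prop, oldName, newName);   // recurse into property groups
        }
    }
}

// Usage on a duplicated comp (the two names are placeholders):
var dup = app.project.activeItem;
if (dup instanceof CompItem) {
    for (var li = 1; li <= dup.numLayers; li++) {
        retargetExpressions(dup.layer(li), "Master Comp", "Master Comp copy");
    }
}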
Furthermore, the script can duplicate footage items along with compositions. This can be particularly useful when you need to create variations of a composition that require unique footage instances.
User Interface and Ease of Use
The effectiveness of any tool is greatly influenced by its usability. How does True Comp Duplicator fare in terms of user interface and ease of use?
Version 3.9.14 of True Comp Duplicator features an updated graphical user interface designed for improved usability. The interface provides clear options for controlling the duplication process, making it accessible even to users who are new to the script.
Additionally, the script includes a built-in help system. This feature provides instant access to guidance and explanations, reducing the learning curve and helping users make the most of the script’s capabilities.
Compatibility and Integration
Software compatibility is a crucial factor when considering any After Effects script or plugin. What versions of After Effects support True Comp Duplicator?
True Comp Duplicator v3.9.14 is compatible with a wide range of After Effects versions, from CS6 to the latest Creative Cloud releases. This broad compatibility ensures that the script can be integrated into workflows regardless of the specific After Effects version being used.
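If you write your own helper scripts alongside it, a standard ExtendScript guard can confirm you are on CS6 (internally version 11.0) or newer before doing anything; this check is generic scripting practice, not something the script requires you to add.

var aeMajorVersion = parseFloat(app.version);      // e.g. "17.0x557" -> 17
if (aeMajorVersion < 11) {
    alert("This script requires After Effects CS6 (11.0) or later.");
} else {
    // Safe to build the UI and run the duplication logic here.
}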
The script seamlessly integrates into the After Effects environment, allowing users to access its functionality without disrupting their established workflows. This integration extends to respecting and working with existing project structures and naming conventions.
Real-World Applications and Benefits
Understanding the features of True Comp Duplicator is one thing, but how does it translate to real-world benefits for After Effects users?
Time Savings
The most immediate benefit of using True Comp Duplicator is the significant time savings it offers. Tasks that would typically take hours to complete manually can be accomplished in minutes using the script. This efficiency allows artists and editors to focus more on creative aspects of their work rather than repetitive technical tasks.
Error Reduction
Manual duplication of complex composition hierarchies is prone to errors. A missed nested composition or an overlooked expression can lead to issues that are time-consuming to troubleshoot. True Comp Duplicator’s automated process drastically reduces the likelihood of such errors, ensuring more reliable and consistent results.
Workflow Flexibility
The script’s advanced features, such as depth control and include/exclude filters, provide users with greater flexibility in their workflows. This allows for more nuanced approaches to project organization and asset management, adapting to the specific needs of different projects or production pipelines.
Improved Project Scalability
For projects that require multiple variations or iterations of compositions, True Comp Duplicator makes scaling up significantly easier. The ability to quickly create and manage multiple copies of complex composition structures allows for more efficient exploration of creative options and faster response to client revisions.
In conclusion, True Comp Duplicator v3.9.14 represents a significant advancement in composition management for After Effects users. By automating and streamlining the duplication process, it addresses many of the challenges associated with manual composition handling. Whether you’re working on small projects or large-scale productions, this script offers valuable tools for improving efficiency, reducing errors, and enhancing overall workflow flexibility. As with any tool, the key to maximizing its benefits lies in understanding its capabilities and integrating it effectively into your specific working methods.
Solved: Re: Duplicate a pre-comp and make it independent – Adobe Support Community
It is really very simple. Everything in the timeline except shape and text layers comes from the Project Panel. You cannot have anything but text layers and shape layers in a timeline unless that asset exists in the Project Panel. Duplicating something in the timeline does not duplicate that resource in the Project Panel, it just creates another copy of the asset in the timeline. If the asset in the timeline is a composition then duplicating the asset in the timeline just gives you another copy of the same asset in exactly the same way duplicating footage in the timeline gives you a copy of that footage. If you need a unique copy of a composition you have to duplicate it in the Project Panel. If you need different footage in your timeline you have to drag it into the timeline from the Project Panel using the replace footage workflow. The same technique is used for replacing a nested comp (pre-comp) in the timeline. You select the layer you want to replace in the Timeline, hold down the Alt/Option key and select the replacement asset in the Project Panel and drag it to the timeline.
I hope this clears things up a bit.
Aescripts True Comp Duplicator V3.9.14 for After Effects (WIN+MAC)
Compatibility: After Effects 2020, CC 2019, CC 2018, CC 2017, CC 2015.3, CC 2015, CC 2014, CC, CS6
Creates a complete duplicate of a comp hierarchy including sub-comps. If a comp is used multiple times, the comp only gets duplicated once and all remaining references point to the first duplicate. If the comps are arranged in a special folder hierarchy in the project panel, that folder hierarchy is preserved or duplicated (depending on user preference) for the duplicated comps.
New in version 3:
- Updated GUI
- Updates expressions
- Multiple copies
- Depth limit
- Duplicates Footage Items
- Include/Exclude Filter (now with Regex option)
- Duplicate multiple comps at once
- Improved naming
- Built-in Help
- CS6+ Compatible
TRUE COMP SF 4.0 HD Shaft Review
Most of the carbon fiber shafts I have used in the past, although great for field lacrosse, are not ones I would quite trust to take the abuse and punishment that comes with box lacrosse. When I tried the TRUE COMP SF 4.0 HD, I was definitely impressed by the durability of the shaft for the box game.
The COMP SF 4.0 HD is the heavy-duty version of TRUE’s best-selling carbon fiber shaft. The 4.0 HD is available in a box length of 32 inches (you could cut it down, like I did, to the standard 30 inches for field, or trim it if you prefer a standard-length shaft for box) as well as a 60-inch d-pole. With much thicker sidewalls and a bit more heft and weight, the HD is meant to take the abuse of cross checks and slashes in box lacrosse. I used the shaft in several box games and have had zero issues with it.
Although the COMP SF 4.0 HD is heavier than the standard 4.0 as well as most other carbon fiber shafts, it is still pretty light compared to other box-specific shafts, and the weight still feels quite good in your hand. I don’t mind a slightly heavier shaft, as it makes the stick feel more balanced, so I liked the feel of the HD with a head on it. The shaft has a slightly concave shape which, along with the sandpaper coating, gives a good grip on the stick.
The TRUE COMP SF 4.0 HD has a Flex 5, which is pretty much the standard on most carbon shafts today. This gives you a nice extra snap from the flex on shots while still staying stiff on checks and passes.
The HD version of the COMP SF 4.0 is only available in black with white text, looking super clean. The black and white look matches great with any color head.
The COMP SF 4.0 HD comes with a good six month warranty.
Overall, I was really impressed with the COMP SF 4.0 HD shaft. This shaft gives a ton of durability while still having the benefits of a carbon fiber shaft and a good amount of flex. I’d recommend it to anyone who wants a good carbon fiber shaft that could take a lot of abuse, especially in the game of box lacrosse.
Script of the Week: True Comp Duplicator by Mark Christiansen
Note: this is the fourth in a series featuring one After Effects script a week, now appearing at the beginning of each week. For an overview on scripts, check out the debut post.
An astonishing amount of the work that gets done in After Effects is theme and repetition work. You create something, and then there is the need to create 1, 2, 10 or 147 more of them, and for each to be similar yet unique. This generalization clearly applies to motion graphics work, in which pattern forms are part of the deal, but it applies equally well to a visual effects scene with, say, a crowd or a bunch of green screens taken with the same setup.
You can duplicate a comp and re-use it, no problem. But if you’re doing your job correctly in After Effects, that one comp may very well not contain all of your work, but is likely to contain sub-comps that contain all of the detail you’ve put into individual elements. These sub-comps often go three or four layers deep, but when you duplicate the master comp, only that one is duplicated; if you also want the sub-comps to be unique – which, more often than not, you do – you need to do that by hand.
And any time you think that thought when working in After Effects, “I guess I need to do this by hand,” try training yourself to think “I must find a script that does this,” and thank me later. Here is a classic example of a workflow problem, plain and simple, that because of how it is implemented in After Effects, can lead not only to painstaking effort but also careless errors (particularly if you loathe repetitive tasks as much as I do, in which case careless errors are a particular Achilles heel).
In Nuke, when you select a set of connected nodes and copy/paste them, all of the components in the new branch are unique (although the file path to any source footage is also copied over, which is easily replaced). True Comp Duplicator recreates this behavior in After Effects, treating the elements of a comp like the nodes on that Nuke tree, and does it one better by allowing you to choose how the new names are formed.
Sounds trivial, right? Indispensable is more like it. And if you are clever about naming your files, the result can even automatically increment the duplicates in a way that makes logical sense. The UI for this script (when installed into the ScriptUI Panels subfolder, see the first post in this series for details) lets you specify where in the name to increment and even allows you to replace one text string from the source comp with another, which is as good as a custom script for any nodal compositing app.
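For a feel of what that find/replace-plus-increment naming amounts to, here is a tiny hypothetical helper (the names and pattern are invented for illustration, not taken from the script): it swaps one substring for another and bumps a trailing, zero-padded counter.

function nextName(sourceName, find, replaceWith) {
    var name = sourceName.replace(find, replaceWith);   // e.g. "GS_take01" -> "Crowd_take01"
    return name.replace(/(\d+)$/, function (match) {    // increment a trailing counter if present
        var width = match.length;
        var incremented = String(parseInt(match, 10) + 1);
        while (incremented.length < width) incremented = "0" + incremented;
        return incremented;
    });
}

// nextName("GS_take01", "GS", "Crowd") returns "Crowd_take02"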
This is the first script featured in this series that uses the “Name Your Own Price” scheme on aescripts.com; it is shareware, as are the majority of After Effects scripts, despite the recent trends to serialize the most valuable among them. This means you’re not prevented from grabbing it if you’re in a facility where a purchase order would be required to actually buy it on a deadline, and you’re free to kick a few bucks to the developer at any time to encourage more of these great workflow enhancements to be devised and shared.
Compose file version 3 reference
Reference and guidelines
These topics describe version 3 of the Compose file format. This is the newest
version.
Compose and Docker compatibility matrix
There are several versions of the Compose file format – 1, 2, 2.x, and 3.x. The
table below is a quick look. For full details on what each version includes and
how to upgrade, see About versions and upgrading.
This table shows which Compose file versions support specific Docker releases.
Compose file format | Docker Engine release |
---|---|
Compose specification | 19.03.0+ |
3.8 | 19.03.0+ |
3.7 | 18.06.0+ |
3.6 | 18.02.0+ |
3.5 | 17.12.0+ |
3.4 | 17.09.0+ |
3.3 | 17.06.0+ |
3.2 | 17.04.0+ |
3.1 | 1.13.1+ |
3.0 | 1.13.0+ |
2.4 | 17.12.0+ |
2.3 | 17.06.0+ |
2.2 | 1.13.0+ |
2.1 | 1.12.0+ |
2.0 | 1.10.0+ |
In addition to Compose file format versions shown in the table, the Compose
itself is on a release schedule, as shown in Compose
releases, but file format versions
do not necessarily increment with each release. For example, Compose file format
3.0 was first introduced in Compose release
1.10.0, and versioned
gradually in subsequent releases.
The latest Compose file format is defined by the Compose Specification and is implemented by Docker Compose 1.27.0+.
Compose file structure and examples
Here is a sample Compose file from the voting app sample used in the
Docker for Beginners lab
topic on Deploying an app to a Swarm:
Example Compose file version 3
version: "3.9"
services:
redis:
image: redis:alpine
ports:
- "6379"
networks:
- frontend
deploy:
replicas: 2
update_config:
parallelism: 2
delay: 10s
restart_policy:
condition: on-failure
db:
image: postgres:9.4
volumes:
- db-data:/var/lib/postgresql/data
networks:
- backend
deploy:
placement:
max_replicas_per_node: 1
constraints:
- "node.role==manager"
vote:
image: dockersamples/examplevotingapp_vote:before
ports:
- "5000:80"
networks:
- frontend
depends_on:
- redis
deploy:
replicas: 2
update_config:
parallelism: 2
restart_policy:
condition: on-failure
result:
image: dockersamples/examplevotingapp_result:before
ports:
- "5001:80"
networks:
- backend
depends_on:
- db
deploy:
replicas: 1
update_config:
parallelism: 2
delay: 10s
restart_policy:
condition: on-failure
worker:
image: dockersamples/examplevotingapp_worker
networks:
- frontend
- backend
deploy:
mode: replicated
replicas: 1
labels: [APP=VOTING]
restart_policy:
condition: on-failure
delay: 10s
max_attempts: 3
window: 120s
placement:
constraints:
- "node.role==manager"
visualizer:
image: dockersamples/visualizer:stable
ports:
- "8080:8080"
stop_grace_period: 1m30s
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
deploy:
placement:
constraints:
- "node.role==manager"
networks:
frontend:
backend:
volumes:
db-data:
The topics on this reference page are organized alphabetically by top-level key to reflect the structure of the Compose file itself. Top-level keys that define a section in the configuration file such as build, deploy, depends_on, networks, and so on, are listed with the options that support them as sub-topics. This maps to the <key>: <option>: <value> indent structure of the Compose file.
Service configuration reference
The Compose file is a YAML file defining services, networks and volumes.
The default path for a Compose file is ./docker-compose.yml.
Tip: You can use either a .yml or .yaml extension for this file. They both work.
A service definition contains configuration that is applied to each
container started for that service, much like passing command-line parameters to
docker run
. Likewise, network and volume definitions are analogous to
docker network create
and docker volume create
.
As with docker run, options specified in the Dockerfile, such as CMD, EXPOSE, VOLUME, ENV, are respected by default – you don’t need to specify them again in docker-compose.yml.
You can use environment variables in configuration values with a Bash-like
${VARIABLE}
syntax – see variable substitution for
full details.
This section contains a list of all configuration options supported by a service
definition in version 3.
build
Configuration options that are applied at build time.
build
can be specified either as a string containing a path to the build
context:
version: "3.9"
services:
webapp:
build: ./dir
Or, as an object with the path specified under context and
optionally Dockerfile and args:
version: "3.9"
services:
webapp:
build:
context: ./dir
dockerfile: Dockerfile-alternate
args:
buildno: 1
If you specify image as well as build, then Compose names the built image with the webapp and optional tag specified in image:
build: ./dir
image: webapp:tag
This results in an image named webapp
and tagged tag
, built from ./dir
.
Note when using docker stack deploy
The build option is ignored when deploying a stack in swarm mode. The docker stack command does not build images before deploying.
context
Either a path to a directory containing a Dockerfile, or a url to a git repository.
When the value supplied is a relative path, it is interpreted as relative to the
location of the Compose file. This directory is also the build context that is
sent to the Docker daemon.
Compose builds and tags it with a generated name, and uses that image
thereafter.
dockerfile
Alternate Dockerfile.
Compose uses an alternate file to build with. A build path must also be
specified.
build:
context: .
dockerfile: Dockerfile-alternate
args
Add build arguments, which are environment variables accessible only during the
build process.
First, specify the arguments in your Dockerfile:
# syntax=docker/dockerfile:1
ARG buildno
ARG gitcommithash
RUN echo "Build number: $buildno"
RUN echo "Based on commit: $gitcommithash"
Then specify the arguments under the build
key. You can pass a mapping
or a list:
build:
context: .
args:
buildno: 1
gitcommithash: cdc3b19
build:
context: .
args:
- buildno=1
- gitcommithash=cdc3b19
Scope of build-args
In your Dockerfile, if you specify ARG before the FROM instruction, ARG is not available in the build instructions under FROM.
If you need an argument to be available in both places, also specify it under the FROM instruction. Refer to the understand how ARGS and FROM interact section in the documentation for usage details.
You can omit the value when specifying a build argument, in which case its value
at build time is the value in the environment where Compose is running.
args:
- buildno
- gitcommithash
Tip when using boolean values
YAML boolean values (“true”, “false”, “yes”, “no”, “on”, “off”) must be enclosed in quotes, so that the parser interprets them as strings.
cache_from
Added in version 3.2 file format
A list of images that the engine uses for cache resolution.
build:
context: .
cache_from:
- alpine:latest
- corp/web_app:3.14
labels
Added in version 3.3 file format
Add metadata to the resulting image using Docker labels.
You can use either an array or a dictionary.
It’s recommended that you use reverse-DNS notation to prevent your labels from
conflicting with those used by other software.
build:
context: .
labels:
com.example.description: "Accounting webapp"
com.example.department: "Finance"
com.example.label-with-empty-value: ""
build:
context: .
labels:
- "com.example.description=Accounting webapp"
- "com.example.department=Finance"
- "com.example.label-with-empty-value"
network
Added in version 3.4 file format
Set the network containers connect to for the RUN
instructions during
build.
build:
context: .
network: host
build:
context: .
network: custom_network_1
Use none
to disable networking during build:
build:
context: .
network: none
shm_size
Added in version 3.5 file format
Set the size of the /dev/shm
partition for this build’s containers. Specify
as an integer value representing the number of bytes or as a string expressing
a byte value.
build:
context: .
shm_size: '2gb'
build:
context: .
shm_size: 10000000
target
Added in version 3.4 file format
Build the specified stage as defined inside the Dockerfile
. See the
multi-stage build docs for
details.
build:
context: .
target: prod
cap_add, cap_drop
Add or drop container capabilities.
See man 7 capabilities
for a full list.
cap_add:
- ALL
cap_drop:
- NET_ADMIN
- SYS_ADMIN
Note when using docker stack deploy
The cap_add and cap_drop options are ignored when deploying a stack in swarm mode.
cgroup_parent
Specify an optional parent cgroup for the container.
cgroup_parent: m-executor-abcd
Note when using docker stack deploy
The cgroup_parent option is ignored when deploying a stack in swarm mode.
command
Override the default command.
command: bundle exec thin -p 3000
The command can also be a list, in a manner similar to
dockerfile:
command: ["bundle", "exec", "thin", "-p", "3000"]
configs
Grant access to configs on a per-service basis using the per-service configs
configuration. Two different syntax variants are supported.
Note: The config must already exist or be defined in the top-level configs configuration of this stack file, or stack deployment fails.
For more information on configs, see configs.
Short syntax
The short syntax variant only specifies the config name. This grants the
container access to the config and mounts it at /<config_name>
within the container. The source name and destination mountpoint are both set
to the config name.
The following example uses the short syntax to grant the redis
service
access to the my_config
and my_other_config
configs. The value of
my_config
is set to the contents of the file ./my_config.txt
, and
my_other_config
is defined as an external resource, which means that it has
already been defined in Docker, either by running the docker config create
command or by another stack deployment. If the external config does not exist,
the stack deployment fails with a config not found
error.
Added in version 3.3 file format.
config
definitions are only supported in version 3.3 and higher of the
compose file format.
version: "3.9"
services:
redis:
image: redis:latest
deploy:
replicas: 1
configs:
- my_config
- my_other_config
configs:
my_config:
file: ./my_config.txt
my_other_config:
external: true
Long syntax
The long syntax provides more granularity in how the config is created within
the service’s task containers.
- source: The identifier of the config as it is defined in this configuration.
- target: The path and name of the file to be mounted in the service’s task containers. Defaults to /<source> if not specified.
- uid and gid: The numeric UID or GID that owns the mounted config file within the service’s task containers. Both default to 0 on Linux if not specified. Not supported on Windows.
- mode: The permissions for the file that is mounted within the service’s task containers, in octal notation. For instance, 0444 represents world-readable. The default is 0444. Configs cannot be writable because they are mounted in a temporary filesystem, so if you set the writable bit, it is ignored. The executable bit can be set. If you aren’t familiar with UNIX file permission modes, you may find this permissions calculator useful.
The following example sets the name of my_config to redis_config within the container, sets the mode to 0440 (group-readable) and sets the user and group to 103. The redis service does not have access to the my_other_config config.
version: "3.9"
services:
redis:
image: redis:latest
deploy:
replicas: 1
configs:
- source: my_config
target: /redis_config
uid: '103'
gid: '103'
mode: 0440
configs:
my_config:
file: ./my_config.txt
my_other_config:
external: true
You can grant a service access to multiple configs and you can mix long and
short syntax. Defining a config does not imply granting a service access to it.
container_name
Specify a custom container name, rather than a generated default name.
container_name: my-web-container
Because Docker container names must be unique, you cannot scale a service beyond
1 container if you have specified a custom name. Attempting to do so results in
an error.
Note when using docker stack deploy
The container_name option is ignored when deploying a stack in swarm mode.
credential_spec
Added in version 3.3 file format.
The credential_spec option was added in v3.3. Using group Managed Service Account (gMSA) configurations with compose files is supported in file format version 3.8 or up.
Configure the credential spec for managed service account. This option is only used for services using Windows containers. The credential_spec must be in the format file://<filename> or registry://<value-name>.
When using file:
, the referenced file must be present in the CredentialSpecs
subdirectory in the Docker data directory, which defaults to C:\ProgramData\Docker\
on Windows. The following example loads the credential spec from a file named
C:\ProgramData\Docker\CredentialSpecs\my-credential-spec.json
.
credential_spec:
file: my-credential-spec.json
When using registry:
, the credential spec is read from the Windows registry on
the daemon’s host. A registry value with the given name must be located in:
HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\Containers\CredentialSpecs
The following example loads the credential spec from a value named my-credential-spec
in the registry:
credential_spec:
registry: my-credential-spec
Example gMSA configuration
When configuring a gMSA credential spec for a service, you only need
to specify a credential spec with config
, as shown in the following example:
version: "3.9"
services:
myservice:
image: myimage:latest
credential_spec:
config: my_credential_spec
configs:
my_credential_spec:
file: ./my-credential-spec.json
depends_on
Express dependency between services. Service dependencies cause the following
behaviors:
- docker-compose up starts services in dependency order. In the following example, db and redis are started before web.
- docker-compose up SERVICE automatically includes SERVICE’s dependencies. In the example below, docker-compose up web also creates and starts db and redis.
- docker-compose stop stops services in dependency order. In the following example, web is stopped before db and redis.
Simple example:
version: "3.9"
services:
web:
build: .
depends_on:
- db
- redis
redis:
image: redis
db:
image: postgres
There are several things to be aware of when using depends_on:
- depends_on does not wait for db and redis to be “ready” before starting web – only until they have been started. If you need to wait for a service to be ready, see Controlling startup order for more on this problem and strategies for solving it.
- The depends_on option is ignored when deploying a stack in swarm mode with a version 3 Compose file.
deploy
Added in version 3 file format.
Specify configuration related to the deployment and running of services. This
only takes effect when deploying to a swarm with
docker stack deploy, and is
ignored by docker-compose up
and docker-compose run
.
version: "3.9"
services:
redis:
image: redis:alpine
deploy:
replicas: 6
placement:
max_replicas_per_node: 1
update_config:
parallelism: 2
delay: 10s
restart_policy:
condition: on-failure
Several sub-options are available:
endpoint_mode
Added in version 3.2 file format.
Specify a service discovery method for external clients connecting to a swarm.
- endpoint_mode: vip – Docker assigns the service a virtual IP (VIP) that acts as the front end for clients to reach the service on a network. Docker routes requests between the client and available worker nodes for the service, without client knowledge of how many nodes are participating in the service or their IP addresses or ports. (This is the default.)
- endpoint_mode: dnsrr – DNS round-robin (DNSRR) service discovery does not use a single virtual IP. Docker sets up DNS entries for the service such that a DNS query for the service name returns a list of IP addresses, and the client connects directly to one of these. DNS round-robin is useful in cases where you want to use your own load balancer, or for Hybrid Windows and Linux applications.
version: "3.9"
services:
wordpress:
image: wordpress
ports:
- "8080:80"
networks:
- overlay
deploy:
mode: replicated
replicas: 2
endpoint_mode: vip
mysql:
image: mysql
volumes:
- db-data:/var/lib/mysql/data
networks:
- overlay
deploy:
mode: replicated
replicas: 2
endpoint_mode: dnsrr
volumes:
db-data:
networks:
overlay:
The options for endpoint_mode
also work as flags on the swarm mode CLI command
docker service create. For a
quick list of all swarm related docker
commands, see
Swarm mode CLI commands.
To learn more about service discovery and networking in swarm mode, see
Configure service discovery
in the swarm mode topics.
labels
Specify labels for the service. These labels are only set on the service,
and not on any containers for the service.
version: "3.9"
services:
web:
image: web
deploy:
labels:
com.example.description: "This label will appear on the web service"
To set labels on containers instead, use the labels
key outside of deploy
:
version: "3.9"
services:
web:
image: web
labels:
com.example.description: "This label will appear on all containers for the web service"
mode
Either global
(exactly one container per swarm node) or replicated
(a
specified number of containers). The default is replicated
. (To learn more,
see Replicated and global services
in the swarm topics.)
version: "3.9"
services:
worker:
image: dockersamples/examplevotingapp_worker
deploy:
mode: global
placement
Specify placement of constraints and preferences. See the docker service create
documentation for a full description of the syntax and available types of
constraints,
preferences,
and specifying the maximum replicas per node
version: "3.9"
services:
db:
image: postgres
deploy:
placement:
constraints:
- "node.role==manager"
- "engine.labels.operatingsystem==ubuntu 18.04"
preferences:
- spread: node.labels.zone
max_replicas_per_node
Added in version 3.8 file format.
If the service is replicated
(which is the default), limit the number of replicas
that can run on a node at any time.
When there are more tasks requested than running nodes, an error
no suitable node (max replicas per node limit exceed)
is raised.
version: "3.9"
services:
worker:
image: dockersamples/examplevotingapp_worker
networks:
- frontend
- backend
deploy:
mode: replicated
replicas: 6
placement:
max_replicas_per_node: 1
replicas
If the service is replicated
(which is the default), specify the number of
containers that should be running at any given time.
version: "3.9"
services:
worker:
image: dockersamples/examplevotingapp_worker
networks:
- frontend
- backend
deploy:
mode: replicated
replicas: 6
resources
Configures resource constraints.
Changed in compose-file version 3
The resources section replaces the older resource constraint options in Compose files prior to version 3 (cpu_shares, cpu_quota, cpuset, mem_limit, memswap_limit, mem_swappiness).
Refer to Upgrading version 2.x to 3.x to learn about differences between version 2 and 3 of the compose-file format.
Each of these is a single value, analogous to its
docker service create counterpart.
In this general example, the redis
service is constrained to use no more than
50M of memory and 0.50
(50% of a single core) of available processing time (CPU),
and has 20M
of memory and 0.25
CPU time reserved (as always available to it).
version: "3.9"
services:
redis:
image: redis:alpine
deploy:
resources:
limits:
cpus: '0.50'
memory: 50M
reservations:
cpus: '0.25'
memory: 20M
The topics below describe available options to set resource constraints on
services or containers in a swarm.
Looking for options to set resources on non swarm mode containers?
The options described here are specific to the
deploy
key and swarm mode. If you want to set resource constraints
on non swarm deployments, use
Compose file format version 2 CPU, memory, and other resource options.
If you have further questions, refer to the discussion on the GitHub
issue docker/compose/4513.
Out Of Memory Exceptions (OOME)
If your services or containers attempt to use more memory than the system has
available, you may experience an Out Of Memory Exception (OOME) and a container,
or the Docker daemon, might be killed by the kernel OOM killer. To prevent this
from happening, ensure that your application runs on hosts with adequate memory
and see Understand the risks of running out of memory.
restart_policy
Configures if and how to restart containers when they exit. Replaces restart.
- condition: One of none, on-failure or any (default: any).
- delay: How long to wait between restart attempts, specified as a duration (default: 5s).
- max_attempts: How many times to attempt to restart a container before giving up (default: never give up). If the restart does not succeed within the configured window, this attempt doesn’t count toward the configured max_attempts value. For example, if max_attempts is set to ‘2’, and the restart fails on the first attempt, more than two restarts may be attempted.
- window: How long to wait before deciding if a restart has succeeded, specified as a duration (default: decide immediately).
version: "3.9"
services:
redis:
image: redis:alpine
deploy:
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
window: 120s
rollback_config
Added in version 3.7 file format.
Configures how the service should be rolled back in case of a failing update.
- parallelism: The number of containers to roll back at a time. If set to 0, all containers roll back simultaneously.
- delay: The time to wait between each container group’s rollback (default 0s).
- failure_action: What to do if a rollback fails. One of continue or pause (default pause).
- monitor: Duration after each task update to monitor for failure (ns|us|ms|s|m|h) (default 5s). Note: Setting to 0 will use the default 5s.
- max_failure_ratio: Failure rate to tolerate during a rollback (default 0).
- order: Order of operations during rollbacks. One of stop-first (old task is stopped before starting new one), or start-first (new task is started first, and the running tasks briefly overlap) (default stop-first).
update_config
Configures how the service should be updated. Useful for configuring rolling
updates.
- parallelism: The number of containers to update at a time.
- delay: The time to wait between updating a group of containers.
- failure_action: What to do if an update fails. One of continue, rollback, or pause (default: pause).
- monitor: Duration after each task update to monitor for failure (ns|us|ms|s|m|h) (default 5s). Note: Setting to 0 will use the default 5s.
- max_failure_ratio: Failure rate to tolerate during an update.
- order: Order of operations during updates. One of stop-first (old task is stopped before starting new one), or start-first (new task is started first, and the running tasks briefly overlap) (default stop-first). Note: Only supported for v3.4 and higher.
Added in version 3.4 file format.
The order option is only supported by v3.4 and higher of the compose file format.
version: "3.9"
services:
vote:
image: dockersamples/examplevotingapp_vote:before
depends_on:
- redis
deploy:
replicas: 2
update_config:
parallelism: 2
delay: 10s
order: stop-first
Not supported for
docker stack deploy
The following sub-options (supported for docker-compose up
and docker-compose run
) are not supported for docker stack deploy
or the deploy
key.
Tip
See the section on how to configure volumes for services, swarms, and docker-stack.yml
files. Volumes are supported
but to work with swarms and services, they must be configured as named volumes
or associated with services that are constrained to nodes with access to the
requisite volumes.
devices
List of device mappings. Uses the same format as the --device
docker
client create option.
devices:
- "/dev/ttyUSB0:/dev/ttyUSB0"
Note when using docker stack deploy
The
devices
option is ignored when
deploying a stack in swarm mode
dns
Custom DNS servers. Can be a single value or a list.
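For example (single value or list; the addresses shown are just placeholder public resolvers):
dns: 8.8.8.8
dns:
  - 8.8.8.8
  - 9.9.9.9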
dns_search
Custom DNS search domains. Can be a single value or a list.
dns_search:
- dc1.example.com
- dc2.example.com
entrypoint
Override the default entrypoint.
entrypoint: /code/entrypoint.sh
The entrypoint can also be a list, in a manner similar to
dockerfile:
entrypoint: ["php", "-d", "memory_limit=-1", "vendor/bin/phpunit"]
Note
Setting entrypoint both overrides any default entrypoint set on the service's
image with the ENTRYPOINT Dockerfile instruction, and clears out any default
command on the image – meaning that if there's a CMD instruction in the
Dockerfile, it is ignored.
env_file
Add environment variables from a file. Can be a single value or a list.
If you have specified a Compose file with docker-compose -f FILE
, paths in
env_file
are relative to the directory that file is in.
Environment variables declared in the environment section
override these values – this holds true even if those values are
empty or undefined.
env_file:
- ./common.env
- ./apps/web.env
- /opt/runtime_opts.env
Compose expects each line in an env file to be in VAR=VAL
format. Lines
beginning with #
are treated as comments and are ignored. Blank lines are
also ignored.
# Set Rails/Rack environment
RACK_ENV=development
Note
If your service specifies a build option, variables defined in
environment files are not automatically visible during the build. Use
the args sub-option of build to define build-time environment
variables.
The value of VAL
is used as is and not modified at all. For example if the
value is surrounded by quotes (as is often the case of shell variables), the
quotes are included in the value passed to Compose.
Keep in mind that the order of files in the list is significant in determining
the value assigned to a variable that shows up more than once. The files in the
list are processed from the top down. For the same variable specified in file
a.env
and assigned a different value in file b.env
, if b.env
is
listed below (after), then the value from b.env
stands. For example, given the
following declaration in docker-compose.yml
:
services:
some-service:
env_file:
- a.env
- b.env
And the following files, where a.env and b.env each assign a value to VAR, with
b.env assigning VAR=hello: because b.env is listed last, $VAR resolves to hello.
environment
Add environment variables. You can use either an array or a dictionary. Any
boolean values (true, false, yes, no) need to be enclosed in quotes to ensure
they are not converted to True or False by the YML parser.
Environment variables with only a key are resolved to their values on the
machine Compose is running on, which can be helpful for secret or host-specific values.
environment:
RACK_ENV: development
SHOW: 'true'
SESSION_SECRET:
environment:
- RACK_ENV=development
- SHOW=true
- SESSION_SECRET
Note
If your service specifies a build option, variables defined in
environment are not automatically visible during the build. Use the
args sub-option of build to define build-time environment
variables.
expose
Expose ports without publishing them to the host machine – they’ll only be
accessible to linked services. Only the internal port can be specified.
expose:
- "3000"
- "8000"
external_links
Link to containers started outside this docker-compose.yml
or even outside of
Compose, especially for containers that provide shared or common services.
external_links
follow semantics similar to the legacy option links
when
specifying both the container name and the link alias (CONTAINER:ALIAS
).
external_links:
- redis_1
- project_db_1:mysql
- project_db_1:postgresql
Note
The externally-created containers must be connected to at least one of the same
networks as the service that is linking to them. Links
are a legacy option. We recommend using networks instead.
Note when using docker stack deploy
The
external_links
option is ignored when
deploying a stack in swarm mode
extra_hosts
Add hostname mappings. Use the same values as the docker client --add-host
parameter.
extra_hosts:
- "somehost:162.242.195.82"
- "otherhost:50.31.209.229"
An entry with the IP address and hostname is created in /etc/hosts
inside containers for this service, e.g.:
162.242.195.82 somehost
50.31.209.229 otherhost
healthcheck
Configure a check that’s run to determine whether or not containers for this
service are “healthy”. See the docs for the
HEALTHCHECK Dockerfile instruction
for details on how healthchecks work.
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost"]
interval: 1m30s
timeout: 10s
retries: 3
start_period: 40s
interval
, timeout
and start_period
are specified as
durations.
Added in version 3.4 file format.
The start_period option was added in file format 3.4.
test
must be either a string or a list. If it’s a list, the first item must be
either NONE
, CMD
or CMD-SHELL
. If it’s a string, it’s equivalent to
specifying CMD-SHELL
followed by that string.
# Hit the local web app
test: ["CMD", "curl", "-f", "http://localhost"]
As above, but wrapped in /bin/sh
. Both forms below are equivalent.
test: ["CMD-SHELL", "curl -f http://localhost || exit 1"]
test: curl -f https://localhost || exit 1
To disable any default healthcheck set by the image, you can use disable: true
.
This is equivalent to specifying test: ["NONE"]
.
healthcheck:
disable: true
image
Specify the image to start the container from. Can either be a repository/tag or
a partial image ID.
image: example-registry.com:4000/postgresql
If the image does not exist, Compose attempts to pull it, unless you have also
specified build, in which case it builds it using the specified
options and tags it with the specified tag.
init
Added in version 3.7 file format.
Run an init inside the container that forwards signals and reaps processes.
Set this option to true
to enable this feature for the service.
version: "3.9"
services:
web:
image: alpine:latest
init: true
The default init binary that is used is Tini, and is installed in
/usr/libexec/docker-init on the daemon host. You can configure the daemon to
use a custom init binary through the init-path configuration option.
isolation
Specify a container’s isolation technology. On Linux, the only supported value
is default
. On Windows, acceptable values are default
, process
and
hyperv
. Refer to the
Docker Engine docs
for details.
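As an illustrative sketch (hypothetical service name; use a value supported on your platform):
services:
  web:
    image: nginx:alpine
    isolation: default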
labels
Add metadata to containers using Docker labels. You can use either an array or a dictionary.
It’s recommended that you use reverse-DNS notation to prevent your labels from conflicting with those used by other software.
labels:
com.example.description: "Accounting webapp"
com.example.department: "Finance"
com.example.label-with-empty-value: ""
labels:
- "com.example.description=Accounting webapp"
- "com.example.department=Finance"
- "com.example.label-with-empty-value"
links
Warning
The --link flag is a legacy feature of Docker. It may eventually be removed.
Unless you absolutely need to continue using it, we recommend that you use
user-defined networks to facilitate communication between two containers
instead of using --link. One feature that user-defined networks do not support
that you can do with --link is sharing environment variables between
containers. However, you can use other mechanisms such as volumes to share
environment variables between containers in a more controlled way.
Link to containers in another service. Either specify both the service name and
a link alias ("SERVICE:ALIAS"
), or just the service name.
web:
links:
- "db"
- "db:database"
- "redis"
Containers for the linked service are reachable at a hostname identical to
the alias, or the service name if no alias was specified.
Links are not required to enable services to communicate – by default,
any service can reach any other service at that service’s name. (See also, the
Links topic in Networking in Compose.)
Links also express dependency between services in the same way as
depends_on, so they determine the order of service startup.
Note
If you define both links and networks, services with
links between them must share at least one network in common to
communicate.
Note when using docker stack deploy
The
links
option is ignored when
deploying a stack in swarm mode
logging
Logging configuration for the service.
logging:
driver: syslog
options:
syslog-address: "tcp://192.168.0.42:123"
The driver
name specifies a logging driver for the service’s
containers, as with the --log-driver
option for docker run
(documented here).
The default value is json-file.
Note
Only the json-file and journald drivers make the logs available directly from
docker-compose up and docker-compose logs. Using any other driver does not
print any logs.
Specify logging options for the logging driver with the options
key, as with the --log-opt
option for docker run
.
Logging options are key-value pairs. An example of syslog
options:
driver: "syslog"
options:
syslog-address: "tcp://192.168.0.42:123"
The default driver, json-file, has options to limit the amount of logs stored. To do this, use a key-value pair for maximum storage size and maximum number of files:
options:
max-size: "200k"
max-file: "10"
The example shown above would store log files until they reach a max-size
of
200kB, and then rotate them. The amount of individual log files stored is
specified by the max-file
value. As logs grow beyond the max limits, older log
files are removed to allow storage of new logs.
Here is an example docker-compose.yml
file that limits logging storage:
version: "3.9"
services:
some-service:
image: some-service
logging:
driver: "json-file"
options:
max-size: "200k"
max-file: "10"
Logging options available depend on which logging driver you use
The above example for controlling log files and sizes uses options
specific to the json-file driver.
These particular options are not available on other logging drivers.
For a full list of supported logging drivers and their options, refer to the
logging drivers documentation.
network_mode
Network mode. Use the same values as the docker client --network
parameter, plus
the special form service:[service name]
.
network_mode: "service:[service name]"
network_mode: "container:[container name/id]"
Note
network_mode: "host" cannot be used together with links.
networks
Networks to join, referencing entries under the
top-level networks
key.
services:
some-service:
networks:
- some-network
- other-network
aliases
Aliases (alternative hostnames) for this service on the network. Other containers on the same network can use either the service name or this alias to connect to one of the service’s containers.
Since aliases
is network-scoped, the same service can have different aliases on different networks.
Note
A network-wide alias can be shared by multiple containers, and even by multiple
services. If it is, then exactly which container the name resolves to is not
guaranteed.
The general format is shown here.
services:
some-service:
networks:
some-network:
aliases:
- alias1
- alias3
other-network:
aliases:
- alias2
In the example below, three services are provided (web
, worker
, and db
),
along with two networks (new
and legacy
). The db
service is reachable at
the hostname db
or database
on the new
network, and at db
or mysql
on
the legacy
network.
version: "3. 9"
services:
web:
image: "nginx:alpine"
networks:
- new
worker:
image: "my-worker-image:latest"
networks:
- legacy
db:
image: mysql
networks:
new:
aliases:
- database
legacy:
aliases:
- mysql
networks:
new:
legacy:
ipv4_address, ipv6_address
Specify a static IP address for containers for this service when joining the network.
The corresponding network configuration in the
top-level networks section must have an
ipam
block with subnet configurations covering each static address.
If IPv6 addressing is desired, the
enable_ipv6
option must be set, and you must use a version 2.x Compose file.
IPv6 options do not currently work in swarm mode.
An example:
version: "3.9"
services:
app:
image: nginx:alpine
networks:
app_net:
ipv4_address: 172.16.238.10
ipv6_address: 2001:3984:3989::10
networks:
app_net:
ipam:
driver: default
config:
- subnet: "172.16.238.0/24"
- subnet: "2001:3984:3989::/64"
pid
Sets the PID mode to the host PID mode. This turns on sharing of the PID
address space between the container and the host operating system. Containers
launched with this flag can access and manipulate other containers in the
bare-metal machine's namespace, and vice versa.
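For example:
pid: "host"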
ports
Expose ports.
Note
Port mapping is incompatible with
network_mode: host
Note
docker-compose run
ignoresports
unless you include--service-ports
.
Short syntax
There are three options:
- Specify both ports (HOST:CONTAINER).
- Specify just the container port (an ephemeral host port is chosen for the host port).
- Specify the host IP address to bind to AND both ports (the default is 0.0.0.0, meaning all interfaces): (IPADDR:HOSTPORT:CONTAINERPORT). If HOSTPORT is empty (for example 127.0.0.1::80), an ephemeral port is chosen to bind to on the host.
Note
When mapping ports in the HOST:CONTAINER format, you may experience erroneous
results when using a container port lower than 60, because YAML parses numbers
in the format xx:yy as a base-60 value. For this reason, we recommend always
explicitly specifying your port mappings as strings.
ports:
- "3000"
- "3000-3005"
- "8000:8000"
- "9090-9091:8080-8081"
- "49100:22"
- "127.0.0.1:8001:8001"
- "127.0.0.1:5000-5010:5000-5010"
- "127.0.0.1::5000"
- "6060:6060/udp"
- "12400-12500:1240"
Long syntax
The long form syntax allows the configuration of additional fields that can’t be
expressed in the short form.
- target: the port inside the container
- published: the publicly exposed port
- protocol: the port protocol (tcp or udp)
- mode: host for publishing a host port on each node, or ingress for a swarm
  mode port to be load balanced.
ports:
- target: 80
published: 8080
protocol: tcp
mode: host
Added in version 3.2 file format.
The long syntax is new in the v3.2 file format.
profiles
profiles: ["frontend", "debug"]
profiles:
- frontend
- debug
profiles
defines a list of named profiles for the service to be enabled under.
When not set, the service is always enabled. For the services that make up
your core application you should omit profiles
so they will always be started.
Valid profile names follow the regex format [a-zA-Z0-9][a-zA-Z0-9_.-]+.
See also Using profiles with Compose to learn more about
profiles.
restart
no
is the default restart policy, and it does not restart a container under
any circumstance. When always
is specified, the container always restarts. The
on-failure
policy restarts a container if the exit code indicates an
on-failure error. unless-stopped
always restarts a container, except when the
container is stopped (manually or otherwise).
restart: "no"
restart: always
restart: on-failure
restart: unless-stopped
Note when using docker stack deploy
The
restart
option is ignored when
deploying a stack in swarm mode.
secrets
Grant access to secrets on a per-service basis using the per-service secrets
configuration. Two different syntax variants are supported.
Note when using docker stack deploy
The secret must already exist or be defined in the top-level secrets
configuration of the compose file, or stack deployment fails.
For more information on secrets, see secrets.
Short syntax
The short syntax variant only specifies the secret name. This grants the
container access to the secret and mounts it at /run/secrets/<secret_name>
within the container. The source name and destination mountpoint are both set
to the secret name.
The following example uses the short syntax to grant the redis
service
access to the my_secret
and my_other_secret
secrets. The value of
my_secret
is set to the contents of the file ./my_secret.txt
, and
my_other_secret
is defined as an external resource, which means that it has
already been defined in Docker, either by running the docker secret create
command or by another stack deployment. If the external secret does not exist,
the stack deployment fails with a secret not found
error.
version: "3.9"
services:
redis:
image: redis:latest
deploy:
replicas: 1
secrets:
- my_secret
- my_other_secret
secrets:
my_secret:
file: ./my_secret.txt
my_other_secret:
external: true
Long syntax
The long syntax provides more granularity in how the secret is created within
the service’s task containers.
- source: The identifier of the secret as it is defined in this configuration.
- target: The name of the file to be mounted in /run/secrets/ in the
  service's task containers. Defaults to source if not specified.
- uid and gid: The numeric UID or GID that owns the file within /run/secrets/
  in the service's task containers. Both default to 0 if not specified.
- mode: The permissions for the file to be mounted in /run/secrets/ in the
  service's task containers, in octal notation. For instance, 0444 represents
  world-readable. The default in Docker 1.13.1 is 0000, but is 0444 in newer
  versions. Secrets cannot be writable because they are mounted in a temporary
  filesystem, so if you set the writable bit, it is ignored. The executable bit
  can be set. If you aren't familiar with UNIX file permission modes, you may
  find this permissions calculator useful.
The following example sets the name of the my_secret secret to redis_secret
within the container, sets the mode to 0440 (group-readable) and sets the user
and group to 103. The redis service does not have access to the
my_other_secret secret.
version: "3.9"
services:
redis:
image: redis:latest
deploy:
replicas: 1
secrets:
- source: my_secret
target: redis_secret
uid: '103'
gid: '103'
mode: 0440
secrets:
my_secret:
file: ./my_secret.txt
my_other_secret:
external: true
You can grant a service access to multiple secrets and you can mix long and
short syntax. Defining a secret does not imply granting a service access to it.
security_opt
Override the default labeling scheme for each container.
security_opt:
- label:user:USER
- label:role:ROLE
Note when using docker stack deploy
The
security_opt
option is ignored when
deploying a stack in swarm mode.
stop_grace_period
Specify how long to wait when attempting to stop a container if it doesn’t
handle SIGTERM (or whatever stop signal has been specified with
stop_signal
), before sending SIGKILL. Specified
as a duration.
By default, stop
waits 10 seconds for the container to exit before sending
SIGKILL.
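For example, to allow up to a minute and a half before SIGKILL is sent:
stop_grace_period: 1m30s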
stop_signal
Sets an alternative signal to stop the container. By default stop
uses
SIGTERM. Setting an alternative signal using stop_signal
causes
stop
to send that signal instead.
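For example:
stop_signal: SIGUSR1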
sysctls
Kernel parameters to set in the container. You can use either an array or a
dictionary.
sysctls:
net.core.somaxconn: 1024
net.ipv4.tcp_syncookies: 0
sysctls:
- net.core.somaxconn=1024
- net.ipv4.tcp_syncookies=0
You can only use sysctls that are namespaced in the kernel. Docker does not
support changing sysctls inside a container that also modify the host system.
For an overview of supported sysctls, refer to
configure namespaced kernel parameters (sysctls) at runtime.
Note when using docker stack deploy
This option requires Docker Engine 19.03 or up when
deploying a stack in swarm mode.
tmpfs
Added in version 3.6 file format.
Mount a temporary file system inside the container. Can be a single value or a list.
Note when using docker stack deploy
This option is ignored when deploying a stack in swarm mode with a
(version 3-3.5) Compose file.
Mount a temporary file system inside the container. Size parameter specifies the size
of the tmpfs mount in bytes. Unlimited by default.
- type: tmpfs
target: /app
tmpfs:
size: 1000
ulimits
Override the default ulimits for a container. You can either specify a single
limit as an integer or soft/hard limits as a mapping.
ulimits:
nproc: 65535
nofile:
soft: 20000
hard: 40000
userns_mode
Disables the user namespace for this service, if Docker daemon is configured with user namespaces.
See dockerd for
more information.
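For example:
userns_mode: "host"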
Note when using docker stack deploy
The
userns_mode
option is ignored when
deploying a stack in swarm mode.
volumes
Mount host paths or named volumes, specified as sub-options to a service.
You can mount a host path as part of a definition for a single service, and
there is no need to define it in the top level volumes
key.
But, if you want to reuse a volume across multiple services, then define a named
volume in the top-level volumes
key. Use
named volumes with services, swarms, and stack
files.
Changed in version 3 file format.
The top-level volumes key defines a named volume and references it from each
service's volumes list. This replaces volumes_from in earlier versions of the
Compose file format.
This example shows a named volume (mydata
) being used by the web
service,
and a bind mount defined for a single service (first path under db
service
volumes
). The db
service also uses a named volume called dbdata
(second
path under db
service volumes
), but defines it using the old string format
for mounting a named volume. Named volumes must be listed under the top-level
volumes
key, as shown.
version: "3.9"
services:
web:
image: nginx:alpine
volumes:
- type: volume
source: mydata
target: /data
volume:
nocopy: true
- type: bind
source: ./static
target: /opt/app/static
db:
image: postgres:latest
volumes:
- "/var/run/postgres/postgres.sock:/var/run/postgres/postgres.sock"
- "dbdata:/var/lib/postgresql/data"
volumes:
mydata:
dbdata:
Note
For general information on volumes, refer to the use volumes
and volume plugins sections in the documentation.
Short syntax
The short syntax uses the generic [SOURCE:]TARGET[:MODE]
format, where
SOURCE
can be either a host path or volume name. TARGET
is the container
path where the volume is mounted. Standard modes are ro
for read-only
and rw
for read-write (default).
You can mount a relative path on the host, which expands relative to
the directory of the Compose configuration file being used. Relative paths
should always begin with .
or ..
.
volumes:
# Just specify a path and let the Engine create a volume
- /var/lib/mysql
# Specify an absolute path mapping
- /opt/data:/var/lib/mysql
# Path on the host, relative to the Compose file
- ./cache:/tmp/cache
# User-relative path
- ~/configs:/etc/configs/:ro
# Named volume
- datavolume:/var/lib/mysql
Long syntax
Added in version 3.2 file format.
The long form syntax allows the configuration of additional fields that can’t be
expressed in the short form.
- type: the mount type volume, bind, tmpfs or npipe
- source: the source of the mount, a path on the host for a bind mount, or the
  name of a volume defined in the top-level volumes key. Not applicable for a
  tmpfs mount.
- target: the path in the container where the volume is mounted
- read_only: flag to set the volume as read-only
- bind: configure additional bind options
  - propagation: the propagation mode used for the bind
- volume: configure additional volume options
  - nocopy: flag to disable copying of data from a container when a volume is
    created
- tmpfs: configure additional tmpfs options
  - size: the size for the tmpfs mount in bytes
version: "3.9"
services:
web:
image: nginx:alpine
ports:
- "80:80"
volumes:
- type: volume
source: mydata
target: /data
volume:
nocopy: true
- type: bind
source: ./static
target: /opt/app/static
networks:
webnet:
volumes:
mydata:
Note
When creating bind mounts, using the long syntax requires the
referenced folder to be created beforehand. Using the short syntax
creates the folder on the fly if it doesn’t exist.
See the bind mounts documentation
for more information.
Volumes for services, swarms, and stack files
Note when using docker stack deploy
When working with services, swarms, and
docker-stack.yml
files, keep in mind
that the tasks (containers) backing a service can be deployed on any node in a
swarm, and this may be a different node each time the service is updated.
In the absence of having named volumes with specified sources, Docker creates an
anonymous volume for each task backing a service. Anonymous volumes do not
persist after the associated containers are removed.
If you want your data to persist, use a named volume and a volume driver that
is multi-host aware, so that the data is accessible from any node. Or, set
constraints on the service so that its tasks are deployed on a node that has the
volume present.
As an example, the docker-stack.yml
file for the
votingapp sample in Docker Labs
defines a service called db
that runs a postgres
database. It is configured
as a named volume to persist the data on the swarm, and is constrained to run
only on manager
nodes. Here is the relevant snippet from that file:
version: "3.9"
services:
db:
image: postgres:9.4
volumes:
- db-data:/var/lib/postgresql/data
networks:
- backend
deploy:
placement:
constraints: [node.role == manager]
domainname, hostname, ipc, mac_address, privileged, read_only, shm_size, stdin_open, tty, user, working_dir
Each of these is a single value, analogous to its
docker run counterpart. Note that mac_address
is a legacy option.
user: postgresql
working_dir: /code
domainname: foo.com
hostname: foo
ipc: host
mac_address: 02:42:ac:11:65:43
privileged: true
read_only: true
shm_size: 64M
stdin_open: true
tty: true
Specifying durations
Some configuration options, such as the interval and timeout sub-options for
healthcheck, accept a duration as a string in a
format that looks like this:
2.5s
10s
1m30s
2h42m
5h44m56s
The supported units are us
, ms
, s
, m
and h
.
Specifying byte values
Some configuration options, such as the shm_size sub-option for build, accept
a byte value as a string in a format that looks like this:
2b
1024kb
2048k
300m
1gb
The supported units are b, k, m and g, and their alternative notation kb, mb
and gb. Decimal values are not supported at this time.
Volume configuration reference
While it is possible to declare volumes on the fly as part of the
service declaration, this section allows you to create named volumes that can be
reused across multiple services (without relying on volumes_from
), and are
easily retrieved and inspected using the docker command line or API.
See the docker volume
subcommand documentation for more information.
See use volumes and volume
plugins for general information on volumes.
Here’s an example of a two-service setup where a database’s data directory is
shared with another service as a volume so that it can be periodically backed
up:
version: "3.9"
services:
db:
image: db
volumes:
- data-volume:/var/lib/db
backup:
image: backup-service
volumes:
- data-volume:/var/lib/backup/data
volumes:
data-volume:
An entry under the top-level volumes
key can be empty, in which case it
uses the default driver configured by the Engine (in most cases, this is the
local
driver). Optionally, you can configure it with the following keys:
driver
Specify which volume driver should be used for this volume. Defaults to whatever
driver the Docker Engine has been configured to use, which in most cases is
local
. If the driver is not available, the Engine returns an error when
docker-compose up
tries to create the volume.
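A minimal sketch (the driver name foobar is a placeholder for a real volume plugin):
volumes:
  db-data:
    driver: foobar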
driver_opts
Specify a list of options as key-value pairs to pass to the driver for this
volume. Those options are driver-dependent – consult the driver’s
documentation for more information. Optional.
volumes:
example:
driver_opts:
type: "nfs"
o: "addr=10.40.0.199,nolock,soft,rw"
device: ":/docker/example"
external
If set to true
, specifies that this volume has been created outside of
Compose. docker-compose up
does not attempt to create it, and raises
an error if it doesn’t exist.
For version 3.3 and below of the format, external
cannot be used in
conjunction with other volume configuration keys (driver
, driver_opts
,
labels
). This limitation no longer exists for
version 3.4 and above.
In the example below, instead of attempting to create a volume called
[projectname]_data, Compose looks for an existing volume simply called data
and mounts it into the db service's containers.
version: "3.9"
services:
db:
image: postgres
volumes:
- data:/var/lib/postgresql/data
volumes:
data:
external: true
Deprecated in version 3.4 file format.
external.name was deprecated in version 3.4 file format; use name instead.
You can also specify the name of the volume separately from the name used to
refer to it within the Compose file:
volumes:
data:
external:
name: actual-name-of-volume
Note when using docker stack deploy
External volumes that do not exist are created if you use docker stack deploy
to launch the app in swarm mode (instead of
docker compose up). In swarm mode, a volume is
automatically created when it is defined by a service. As service tasks are
scheduled on new nodes, swarmkit
creates the volume on the local node. To learn more, see moby/moby#29976.
labels
Add metadata to containers using
Docker labels. You can use either
an array or a dictionary.
It’s recommended that you use reverse-DNS notation to prevent your labels from
conflicting with those used by other software.
labels:
com.example.description: "Database volume"
com.example.department: "IT/Ops"
com.example.label-with-empty-value: ""
labels:
- "com.example.description=Database volume"
- "com.example.department=IT/Ops"
- "com.example.label-with-empty-value"
name
Added in version 3.4 file format.
Set a custom name for this volume. The name field can be used to reference
volumes that contain special characters. The name is used as is
and will not be scoped with the stack name.
version: "3.9"
volumes:
data:
name: my-app-data
It can also be used in conjunction with the external
property:
version: "3.9"
volumes:
data:
external: true
name: my-app-data
Network configuration reference
The top-level networks
key lets you specify networks to be created.
driver
Specify which driver should be used for this network.
The default driver depends on how the Docker Engine you’re using is configured,
but in most instances it is bridge
on a single host and overlay
on a
Swarm.
The Docker Engine returns an error if the driver is not available.
bridge
Docker defaults to using a bridge
network on a single host. For examples of
how to work with bridge networks, see the Docker Labs tutorial on
Bridge networking.
overlay
The overlay
driver creates a named network across multiple nodes in a
swarm.
host or none
Use the host’s networking stack, or no networking. Equivalent to
docker run --net=host
or docker run --net=none
. Only used if you use
docker stack
commands. If you use the docker-compose
command,
use network_mode instead.
If you want to use a particular network for a service's build, use the network
key under build, as shown in the second YAML example below.
The syntax for using built-in networks such as host
and none
is a little
different. Define an external network with the name host
or none
(that
Docker has already created automatically) and an alias that Compose can use
(hostnet
or nonet
in the following examples), then grant the service access to that
network using the alias.
version: "3.9"
services:
web:
networks:
hostnet: {}
networks:
hostnet:
external: true
name: host
services:
web:
...
build:
...
network: host
context: .
...
services:
web:
...
networks:
nonet: {}
networks:
nonet:
external: true
name: none
driver_opts
Specify a list of options as key-value pairs to pass to the driver for this
network. Those options are driver-dependent – consult the driver’s
documentation for more information. Optional.
driver_opts:
foo: "bar"
baz: 1
attachable
Added in version 3.2 file format.
Only used when the driver
is set to overlay
. If set to true
, then
standalone containers can attach to this network, in addition to services. If a
standalone container attaches to an overlay network, it can communicate with
services and standalone containers that are also attached to the overlay
network from other Docker daemons.
networks:
mynet1:
driver: overlay
attachable: true
enable_ipv6
Enable IPv6 networking on this network.
Not supported in Compose File version 3
enable_ipv6
requires you to use a version 2 Compose file, as this directive
is not yet supported in Swarm mode.
ipam
Specify custom IPAM config. This is an object with several properties, each of
which is optional:
- driver: Custom IPAM driver, instead of the default.
- config: A list with zero or more config blocks, each containing any of the
  following keys:
  - subnet: Subnet in CIDR format that represents a network segment
A full example:
ipam:
driver: default
config:
- subnet: 172.28.0.0/16
Note
Additional IPAM configurations, such as
gateway
, are only honored for version 2 at the moment.
internal
By default, Docker also connects a bridge network to it to provide external
connectivity. If you want to create an externally isolated overlay network,
you can set this option to true.
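A minimal sketch (hypothetical network name):
networks:
  isolated_overlay:
    driver: overlay
    internal: true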
labels
Add metadata to containers using
Docker labels. You can use either
an array or a dictionary.
It’s recommended that you use reverse-DNS notation to prevent your labels from
conflicting with those used by other software.
labels:
com.example.description: "Financial transaction network"
com.example.department: "Finance"
com.example.label-with-empty-value: ""
labels:
- "com.example.description=Financial transaction network"
- "com.example.department=Finance"
- "com.example.label-with-empty-value"
external
If set to true
, specifies that this network has been created outside of
Compose. docker-compose up
does not attempt to create it, and raises
an error if it doesn’t exist.
For version 3.3 and below of the format, external
cannot be used in
conjunction with other network configuration keys (driver
, driver_opts
,
ipam
, internal
). This limitation no longer exists for
version 3.4 and above.
In the example below, proxy is the gateway to the outside world. Instead of
attempting to create a network called [projectname]_outside, Compose looks for
an existing network simply called outside and connects the proxy service's
containers to it.
version: "3.9"
services:
proxy:
build: ./proxy
networks:
- outside
- default
app:
build: ./app
networks:
- default
networks:
outside:
external: true
Deprecated in version 3.5 file format.
external.name was deprecated in version 3.5 file format; use name instead.
You can also specify the name of the network separately from the name used to
refer to it within the Compose file:
version: "3.9"
networks:
outside:
external:
name: actual-name-of-network
name
Added in version 3.5 file format.
Set a custom name for this network. The name field can be used to reference
networks which contain special characters. The name is used as is
and will not be scoped with the stack name.
version: "3.9"
networks:
network1:
name: my-app-net
It can also be used in conjunction with the external
property:
version: "3.9"
networks:
network1:
external: true
name: my-app-net
configs configuration reference
The top-level configs
declaration defines or references
configs that can be granted to the services in
this stack. The source of the config is either file
or external
.
- file: The config is created with the contents of the file at the specified
  path.
- external: If set to true, specifies that this config has already been
  created. Docker does not attempt to create it, and if it does not exist, a
  config not found error occurs.
- name: The name of the config object in Docker. This field can be used to
  reference configs that contain special characters. The name is used as is
  and will not be scoped with the stack name. Introduced in version 3.5 file
  format.
- driver and driver_opts: The name of a custom config driver, and
  driver-specific options passed as key/value pairs. Introduced in version 3.8
  file format, and only supported when using docker stack.
- template_driver: The name of the templating driver to use, which controls
  whether and how to evaluate the config payload as a template. If no driver
  is set, no templating is used. The only driver currently supported is
  golang, which uses Go templates. Introduced in version 3.8 file format, and
  only supported when using docker stack. Refer to use a templated config for
  examples of templated configs.
In this example, my_first_config
is created (as
<stack_name>_my_first_config)
when the stack is deployed,
and my_second_config
already exists in Docker.
configs:
my_first_config:
file: ./config_data
my_second_config:
external: true
Another variant for external configs is when the name of the config in Docker
is different from the name that exists within the service. The following
example modifies the previous one to use the external config called
redis_config
.
configs:
my_first_config:
file: ./config_data
my_second_config:
external:
name: redis_config
You still need to grant access to the config to each service in the
stack.
secrets configuration reference
The top-level secrets
declaration defines or references
secrets that can be granted to the services in
this stack. The source of the secret is either file
or external
.
- file: The secret is created with the contents of the file at the specified
  path.
- external: If set to true, specifies that this secret has already been
  created. Docker does not attempt to create it, and if it does not exist, a
  secret not found error occurs.
- name: The name of the secret object in Docker. This field can be used to
  reference secrets that contain special characters. The name is used as is
  and will not be scoped with the stack name. Introduced in version 3.5 file
  format.
- template_driver: The name of the templating driver to use, which controls
  whether and how to evaluate the secret payload as a template. If no driver
  is set, no templating is used. The only driver currently supported is
  golang, which uses Go templates. Introduced in version 3.8 file format, and
  only supported when using docker stack.
In this example, my_first_secret
is created as
<stack_name>_my_first_secret
when the stack is deployed,
and my_second_secret
already exists in Docker.
secrets:
my_first_secret:
file: ./secret_data
my_second_secret:
external: true
Another variant for external secrets is when the name of the secret in Docker
is different from the name that exists within the service. The following
example modifies the previous one to use the external secret called
redis_secret
.
Compose File v3.5 and above
secrets:
my_first_secret:
file: ./secret_data
my_second_secret:
external: true
name: redis_secret
Compose File v3.4 and under
my_second_secret:
external:
name: redis_secret
You still need to grant access to the secrets to each service in the
stack.
Variable substitution
Your configuration options can contain environment variables. Compose uses the
variable values from the shell environment in which docker-compose
is run. For
example, suppose the shell contains POSTGRES_VERSION=9.3
and you supply this
configuration:
db:
image: "postgres:${POSTGRES_VERSION}"
When you run docker-compose up
with this configuration, Compose looks for the
POSTGRES_VERSION
environment variable in the shell and substitutes its value
in. For this example, Compose resolves the image
to postgres:9.3
before
running the configuration.
If an environment variable is not set, Compose substitutes with an empty
string. In the example above, if POSTGRES_VERSION
is not set, the value for
the image
option is postgres:
.
You can set default values for environment variables using a .env file, which
Compose automatically looks for in the project directory (parent folder of
your Compose file).
Values set in the shell environment override those set in the .env
file.
Note when using docker stack deploy
The .env file feature only works when you use the docker-compose up command
and does not work with docker stack deploy.
Both $VARIABLE and ${VARIABLE} syntax are supported. Additionally, when using
the 2.1 file format, it is possible to provide inline default values using
typical shell syntax:
- ${VARIABLE:-default} evaluates to default if VARIABLE is unset or empty in
  the environment.
- ${VARIABLE-default} evaluates to default only if VARIABLE is unset in the
  environment.
Similarly, the following syntax allows you to specify mandatory variables:
- ${VARIABLE:?err} exits with an error message containing err if VARIABLE is
  unset or empty in the environment.
- ${VARIABLE?err} exits with an error message containing err if VARIABLE is
  unset in the environment.
Other extended shell-style features, such as ${VARIABLE/foo/bar}
, are not
supported.
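For instance, extending the earlier postgres example, a default value could be supplied like this (the version shown is only illustrative):
db:
  image: "postgres:${POSTGRES_VERSION:-9.3}"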
You can use a $$
(double-dollar sign) when your configuration needs a literal
dollar sign. This also prevents Compose from interpolating a value, so a $$
allows you to refer to environment variables that you don’t want processed by
Compose.
web:
build: .
command: "$$VAR_NOT_INTERPOLATED_BY_COMPOSE"
If you forget and use a single dollar sign ($
), Compose interprets the value
as an environment variable and warns you:
The VAR_NOT_INTERPOLATED_BY_COMPOSE is not set. Substituting an empty string.
Extension fields
Added in version 3.4 file format.
It is possible to re-use configuration fragments using extension fields. Those
special fields can be of any format as long as they are located at the root of
your Compose file and their names start with the x- character sequence.
Note
Starting with the 3.7 format (for the 3.x series) and 2.4 format
(for the 2.x series), extension fields are also allowed at the root
of service, volume, network, config and secret definitions.
version: "3.9"
x-custom:
items:
- a
- b
options:
max-size: '12m'
name: "custom"
The contents of those fields are ignored by Compose, but they can be
inserted in your resource definitions using YAML anchors.
For example, if you want several of your services to use the same logging
configuration:
logging:
options:
max-size: '12m'
max-file: '5'
driver: json-file
You may write your Compose file as follows:
version: "3.9"
x-logging:
&default-logging
options:
max-size: '12m'
max-file: '5'
driver: json-file
services:
web:
image: myapp/web:latest
logging: *default-logging
db:
image: mysql:latest
logging: *default-logging
It is also possible to partially override values in extension fields using
the YAML merge type. For example:
version: "3.9"
x-volumes:
&default-volume
driver: foobar-storage
services:
web:
image: myapp/web:latest
volumes: ["vol1", "vol2", "vol3"]
volumes:
vol1: *default-volume
vol2:
<< : *default-volume
name: volume02
vol3:
<< : *default-volume
driver: default
name: volume-local
First computer designed in Germany
Children play on game consoles, minicomputers control washing machines, driving assistants make driving easier, and wearable electronics record vital functions. Smartphones, tablets and PCs are found in almost every home. Supercomputers calculate the weather forecast. And a new generation of computers, recently unveiled at the Hannover Messe under the name "cognitive computing", may even already interact directly with people. Life without computers today is unthinkable.
More than 75 years ago, the situation looked different. On May 12, 1941, German inventor Konrad Zuse presented the first true full-featured computer, the Z3. Admittedly, it could only multiply, divide, calculate square roots and store no more than 64 words, but it was the first programmable computer in the world that worked with a binary number system. "But there was no big excitement: no press, no world sensation – there was a war," says Horst Zuse, the eldest son of the computer visionary. A few years ago, he presented his own replica of the Z3, faithful to the original. The original itself was destroyed during bombing raids in December 1943.
After the war, Konrad Zuse tried to make money with his idea and founded his own company in Hesse, but in the 1960s it was taken over by Siemens. Competitors from the United States and within Germany had long since caught up with, and then overtaken, its technical lead. The historical glory, however, remained.
The inventor Konrad Zuse painted under the pseudonym Kuno See
But Zuse, who passed away in 1995, had other talents as well. His pseudonym Kuno See can be found on many oil paintings, drawings and art prints, a large number of which are held in the State Graphic Collection in Munich. Some of the works are exhibited at the Konrad Zuse Museum in Hünfeld, the city in which he lived for a long time, and in the Astronomy and Physics Cabinet in Kassel, and Konrad Zuse's paintings could also be admired at documenta 13 in Kassel. One of his sayings has stuck in memory: "True, I never studied art, but then I never studied computer science either."
www.zuse-museum.de
© www.deutschland.de
How to create a real computer in Minecraft: the possibilities of redstone
Building enthusiasts and professionals in the famous Minecraft are driving this popular game to new heights at an accelerated pace – and quite successfully, it must be said. Recently, gamers have managed to simulate the working components of a computer in the game. Simply put, they have created almost a computer inside a computer.
Recently, a duo of especially advanced Minecraft masters officially announced the creation of functional hard drives in the game that can read and write data. One of these disks (invented by user smellystring) can store up to 1KB of data, and another one, created by The0JJ, can store up to 4KB. In this regard, the opinion has already appeared that SkyNet and the Matrix are no longer such a fantasy. At the very least, the day when the first virtual models of fully functional computers, obeying the laws of the physical world, will appear has definitely come closer.
In fact, Minecraft players have long been working on the creation of in-game computers, or more precisely, so-called algorithmic logic devices. Players have built giant virtual structures in the game based on binary computing logic, simulating the main components of real computers. At the heart of this kind of invention is a component which in Minecraft is called "redstone", and thanks to which various devices can be "charged" with energy. That is, circuits made of redstone are something like an analogue of electrical circuits in the real world (more about Minecraft in two dimensions).
How were the capabilities of redstone used to create storage devices in Minecraft? The "stone" is used to activate pistons that reproduce the true and false values of the binary system, usually represented by 1 and 0. The principle of the device is illustrated by its creators with the following animation:
In fact, due to the large number of such pistons, cyclically redirecting the redstone signal between solid and empty blocks, a user of a kilobyte disk can save data in binary code using a solid block as 1, and an empty one as 0.
However, two more questions cannot but arise: what kind of data is this, and how can it be used in Minecraft? The topic is certainly interesting, although, for obvious reasons, it is poorly studied. Nevertheless, something can already be predicted. For example, since in Minecraft a player's inventory is stored as game data of various sizes, a gamer could use the described method to save a text or even an audio file, provided the virtual disk at their disposal is large enough and they find a way to convert the information into binary. According to one of the creators of the hard drive in Minecraft, the method is suitable for storing any information, up to 1KB for now.
In general, Minecraft lovers, as well as all connoisseurs of modern computer science and logic, have one more reason for thought.
In his post, one gamer once wrote: "One day we will build a real computer in Minecraft to play Minecraft on it. And after that the Universe will collapse." But the fact is that this has already become a reality: a 2D version of Minecraft, where you can play Minecraft on a redstone computer, already exists.
The smartphone is transformed… into a computer | Samsung RU
DeX docking station overview
The dream of using a smartphone as a full-fledged PC has existed almost as long as mobile devices themselves. Could it finally come true? We look for the answer to this question by connecting Samsung's flagship to the Samsung DeX dock.
First impressions
The Samsung DeX is a round device with a diameter of only 10.5 cm. Convenient: the compact docking station is easy to carry and can be used anywhere – from a hotel lobby to a library. The device is compatible with Samsung's flagship smartphones released since 2017, and the station's design is immediately reminiscent of the Galaxy S8 and S8+.
The station is equipped with many connectors for a wide range of peripheral devices: two USB ports to which you can connect a mouse (required) and a keyboard, an HDMI output for connecting a large screen, a network connector, and a port for power from a charger with a USB Type-C connector.
Getting started with DeX is easy: just plug in your charger, TV or monitor and mouse, and then plug your smartphone into the docking station. No additional settings are required – the smartphone will automatically detect the type of connected screen. FHD and 4K displays are only supported in Smartphone mirroring mode.
During operation
Samsung DeX displays the smartphone's desktop on a large screen in a Windows-like interface familiar from PCs: at the bottom there is a bar with application shortcuts, and on the right side there are status icons (for example, indicators of battery charge and network signal strength). On the desktop, you can create your own icons, depending on your personal ideas about convenience and efficiency.
DeX allows you to easily use all the necessary applications that are already loaded on your smartphone (from a web browser, office suite and Google Maps to instant messengers and email clients). The device allows you to take full advantage of the functionality of a smartphone on a PC – the interface offers such advantages as multitasking, multi-window mode and customizable window sizes. For example, you can watch movies and check for updates at the same time, or chat with friends on social networks. At the same time, Samsung took care of security: you no longer need to enter logins and passwords on insecure devices in public places, since all accounts remain signed in through your smartphone.
Connecting a keyboard turns DeX into a complete workspace: the docking station supports hotkeys such as Ctrl+C/V and Alt+Tab. Other convenient "computer" functions include support for right-clicking, grabbing and moving, zooming and scrolling.
Technical features
The device provides an Internet connection speed of up to 100 Mbps and a 60 Hz refresh rate, transmits data as quickly as possible and maintains a stable connection. At the same time, DeX easily withstands high loads – a powerful built-in fan maintains a stable temperature and protects the smartphone and the docking station itself from overheating.
Conclusions
The Samsung DeX docking station really turns your smartphone into a real computer, while retaining the functionality of a communication tool. The monitor displays a taskbar for accessing phone calls, text messages, and settings. Now you do not need to break away from work or play in order to just talk on the phone!
Epic Games Launcher Troubleshooter – Epic Games Support
If you have a problem with the Epic Games Launcher, use the following fix for the most common errors.
Check Epic Games Server Status
Visit the Epic Games Server Status page to ensure all systems are working correctly. If the Epic Games launcher is not working due to an interruption or system crash, your problem may be resolved once the system resumes normal operation.
Check for updates
Check for updates for the launcher. To do this, select "Settings" (the gear in the lower left corner). If you see a button that says "RESTART AND UPDATE", select it to update the launcher.
Clear the web cache of the launcher
Clearing the web cache often fixes display problems that can prevent you from using the launcher. Follow these steps to clear your web cache:
Exit the Epic Games Launcher by right-clicking the icon in the system tray in the lower right corner and selecting Exit from the menu that appears.
Press Windows + R, type "%localappdata%" and press Enter to open an Explorer window.
Open the Epic Games Launcher folder.
Open the Saved folder.
Select the Web Cache folder and delete it.
Restart your computer and launch the Epic Games Launcher.
Update your graphics card drivers
To resolve the launcher crash issue, make sure you are using the latest graphics card drivers. How to update your graphics drivers is described in this article.
Open the launcher as administrator
Running the program as administrator elevates its privileges, helping to avoid problems loading games. Follow these steps to run the program as administrator:
- Right-click the Epic Games Launcher shortcut.
- Select Run as administrator .
Reinstall Epic Games Launcher
All of your installed games will be removed.
Windows:
Run System File Checker, then reinstall the Epic Games Launcher.
- Exit the Epic Games Launcher by right-clicking the icon in the system tray in the lower right corner and selecting Exit from the menu that appears.
- Press Start.
- Type cmd, right-click Command Prompt and select Run as Administrator.
- In the window that opens, enter sfc /scannow and press the Enter key. This may take some time.
- Restart your computer.
- Press Start.
- Type Add or Remove Programs and press Enter.
- Select Epic Games Launcher from the list of programs.
- Press Uninstall.
- Go to www.epicgames.com and click Download Epic Games in the upper right corner to download the latest launcher installer.
On Mac:
- Close the Epic Games launcher.
- Open Activity Monitor and make sure you have no running processes associated with the Epic Games Launcher.
- Open the Applications folder.
- Click on the Epic Games Launcher and drag it to the Trash.
- Verify that none of the following directories still contain Epic Games Launcher folders or files:
  - ~/Library/Application Support
  - ~/Library/Caches
  - ~/Library/Preferences
  - ~/Library/Logs
  - ~/Library/Cookies
- Go to www.epicgames.com and click Download Epic Games in the upper right corner to download the latest launcher installer.
Launcher freezes on macOS 10.15.1 or earlier
If your launcher hangs on macOS 10.15.1 or earlier, follow the steps above to reinstall the Epic Games Launcher on your Mac.
Check System Requirements
Make sure your computer meets the system requirements for running the Epic Games Launcher; they can be found here.
Blinking Epic Games Launcher Icon in the System Tray
If you are unable to launch the Epic Games Launcher and see a blinking icon in the system tray, try the following steps to resolve the issue:
- Right-click the Epic Games Launcher shortcut.
- Click Properties.
- Select Normal window from the Run drop-down menu.
- Select the Compatibility tab.
- Uncheck the boxes and click Apply, then OK.
- Open the Start Menu, then enter Graphics settings and press Enter.
- From the drop-down list under Graphics performance settings, select Desktop app.
- Press Browse.
- Navigate to the Epic Games Launcher installation directory. By default, this is C:\Program Files (x86)\Epic Games\Launcher\Portal\Binaries\Win64.
- Click on EpicGamesLauncher.exe and select Add.
- Press Options.
- Select Power saving.
- Press Save.
- Restart the Epic Games Launcher.
If the steps above do not solve your problem, make sure you have all the latest Windows updates installed. For detailed instructions on how to do this, see this article.
Gaming PC Parts & Guided Setup | …
Central Processing Unit (CPU)
The Central Processing Unit (CPU), or simply the processor, is essentially the brain of your PC. This is where the magic happens – the computer program, when it starts up, sends a list of instructions to the processor (which are actually more like tasks). The processor performs operations in accordance with these “instructions” and sends signals to other components so that they know when to perform a particular task.
There are two main performance metrics that allow you to select the right CPU for your needs: the number of cores and the clock speed.
The number of cores tells us how many separate processors there are in one CPU module – in other words, how many CPU tasks can be performed simultaneously.
The clock speed tells us how quickly the CPU completes each task.
Several advanced processors support Hyper-Threading Technology, which allows each core to execute multiple threads and provides increased performance for multi-threaded applications.
Expert advice. Most modern processors are multi-core, and many modern games take advantage of this, so choose a processor with at least four cores. Extra cores come in handy when you add tasks such as recording and streaming your gameplay.
Mainboard
The motherboard is the PC's main printed circuit board and coordinates the operation of all components. The processor is installed directly on the motherboard, and the two must be compatible; the Intel® Desktop Compatibility Tool can help you check. All other components – graphics cards, hard drives, RAM, optical drives, wireless cards – plug into the motherboard.
One way to narrow down your motherboard choices is by size, or form factor. The most common form factors are Extended ATX, ATX, Micro-ATX, and Mini-ITX.
- Extended ATX – the largest boards (12 x 13 inches / 30.5 x 33 cm, or 12 x 10.1 inches / 30.5 x 25.7 cm). They can have eight RAM slots (for up to 128 GB of RAM).
- ATX – slightly smaller (12 x 9.6 inches / 30.5 x 24.4 cm). They usually have no more than four RAM slots.
- Micro-ATX (9.6 x 9.6 inches / 24.4 x 24.4 cm) boards can also have up to four RAM slots.
- Mini-ITX – the smallest form factor of the four (6.7 x 6.7 inches / 17 x 17 cm). They often have two RAM slots.
Expert Advice. Every component must plug into the motherboard, so choose a full-size board that meets your hardware requirements both today and in the future.
Random access memory (RAM)
Random access memory (RAM) is used for short-term data storage. It is faster and easier to access than your PC’s long-term memory (SSD or hard drive), but it stores data temporarily.
This is where the PC stores the data it is actively using (the very instruction lists that the CPU must read and execute). Determining how much RAM you need can be tricky: memory you never use sits idle and simply wastes money, while too little RAM hurts performance.
The goal is to hit the sweet spot: in general, the average gaming PC needs 8-16 GB of RAM.
When buying RAM, check which type and speed of RAM your motherboard and processor support. If your system does not support the RAM's rated speed, the memory will run at a lower frequency.
For more comprehensive information on purchasing RAM for your system, refer to our RAM guide.
Expert Tip: If you decide to use high-speed RAM, look for memory with Intel® Extreme Memory Profile (Intel® XMP). Without overclocking, high-speed RAM will run at a standard speed lower than advertised; Intel® XMP makes overclocking easier with predefined, tested profiles.
Graphics processing unit (GPU)
There are two types of GPUs: integrated and discrete.
Integrated GPUs are built directly into the CPU. Integrated graphics have improved significantly over the past few years, although they are still generally inferior to discrete graphics.
Discrete graphics cards are large, powerful components that connect to the motherboard via PCIe* and have their own resources, including video memory and (usually) active cooling. A discrete graphics card is essential for gamers playing modern, graphics-intensive games. Serious gamers need cards that deliver consistent frame rates of at least 60 frames per second (FPS) at their target resolution; at lower frame rates the image may look choppy. Gamers who want to play in virtual reality should look for cards that sustain at least 90 frames per second.
Expert advice. The GPU isn't the only component that affects frame rate, so it's important to balance your build to avoid performance bottlenecks.
Expert advice. Powerful graphics cards are quite expensive. If you want to save some money, check out the previous generation of graphics adapters. They can perform similarly at a lower cost.
Storage: solid state drives (including Intel® Optane™ memory) and hard drives
There are two main types of storage: solid state drives (SSDs, including Intel® Optane™ memory) and hard disk drives (HDDs). Each has pros and cons, but the good news is that you don't have to choose just one.
Hard drives store data on rotating platters coated with magnetic material; the data is read and written by a mechanical head.
Hard drives come in two form factors:
- 2.5-inch drives, more common in laptops, which usually spin at 5,400 RPM.
- 3.5-inch drives, more common in desktop PCs, which spin faster, often above 7,200 RPM.
SSDs use NAND-based memory – similar to, but faster and more reliable than, the flash memory used in USB drives. Instead of a mechanical head, they use integrated processors to access stored data, making them faster and less prone to mechanical failure than hard drives. This speed and convenience comes at a price: the cost per gigabyte is higher than for hard drives.
Modern SSDs are available with two protocols:
- Serial Advanced Technology Attachment (SATA), which is older and has higher latency and lower peak bandwidth.
- Non-Volatile Memory Express* (NVMe*), which uses the PCI Express* interface for higher performance.
Intel® Optane™ memory technology helps bridge the speed gap between solid state drives and hard drives. Intel® Optane™ memory uses 3D XPoint memory to accelerate slower storage devices (primarily hard drives) by remembering frequently used data and access patterns. It remembers which games you play most and uses that data to speed up launching those games and loading levels.
Expert advice. You don't have to choose one type of storage over the other. Many people use a small SSD as a boot drive (for the operating system, games, and other programs) and fill the remaining bays with inexpensive hard drives for maximum storage capacity.
Power supply unit (PSU)
The choice of a power supply unit (PSU) is a crucial step in any assembly. Don’t skimp – the PSU needs to be of high quality and powerful enough to support all existing and future components, and a good warranty won’t hurt.
Power supplies come in non-modular, partially modular, and fully modular designs.
- Non-modular power supplies have all cables permanently attached. This is the cheapest option, but you need to find somewhere to stow the cables you won't use. Excess unused cables are hard to manage and can obstruct airflow, which can hurt your computer's performance.
- Partially modular power supplies are the best option for most builders. They come with only the core cables attached and cost less than fully modular units.
- Fully modular power supplies are even easier to work with than partially modular ones, but the added convenience usually comes at a higher cost.
System Cooling – CPU Cooling & Chassis Airflow
There are two main ways to cool a PC: air cooling and liquid cooling.
Air cooling uses fans to move hot air away from the components and out of the system, preventing overheating. Its main advantages are relatively low cost and ease of installation (small fans are easy to fit into a case alongside the components). Its biggest drawback is that it relies on efficient airflow to carry heat away from the components inside the chassis, so any obstruction in the airflow path can cause problems.
Liquid cooling uses a liquid coolant (such as distilled water) that absorbs heat from the components and carries it to a less restricted area, where the radiator is located. Liquid cooling depends less on airflow inside the chassis and is therefore more efficient at cooling individual components. The downside is that liquid cooling systems are usually bulkier and harder to install than standard fans, and they are also more expensive.
In addition to the overall system cooling, you also need to purchase a separate processor cooler. CPU coolers are available in air and liquid form factors and install directly onto the processor. When shopping for a CPU cooler, it is important to make sure it is compatible with your CPU and fits your build.
Expert advice. In an air-cooled system, cooling efficiency depends not so much on the number of fans as on their quality and placement.
Peripherals
Monitors, keyboards, mice, headsets and other peripherals are selected based on personal preference. You do not need to purchase these with the components, but you will need a monitor, keyboard, and mouse to set up your system after assembly.
Expert advice. Keep your peripherals in balance with the rest of the system – if you have the best components in the world but are still using a 1080p 60 Hz monitor, you may not be able to take full advantage of your hardware.
Operating System (OS)
Last but not least, once everything is assembled in the case, you need to prepare to install the operating system. The operating system is the essential software that manages the interactions between the computer's hardware and software.
To prepare in advance, decide which OS you want to install and write its installer to a USB stick. The installer for Windows* 10 can be downloaded here. If you are installing a paid OS such as Windows, you will need a product key.
Perhaps our world is virtual. But does it matter?
- Philip Ball
- BBC Earth
Photo caption: Keanu Reeves may be living in the Matrix even off the set (Getty Images)
Some scientists believe that our Universe is a giant computer simulation. Should we be worried about this?
Are we real? What about me personally?
Previously, only philosophers asked such questions. Scientists tried to understand what our world is and explain its laws.
But recent thinking about the structure of the Universe poses existential questions for science as well.
Some physicists, cosmologists and artificial intelligence specialists suspect that we are all living inside a giant computer simulation, mistaking the virtual world for reality.
This idea runs against our senses: the world seems too realistic to be a simulation. The weight of the cup in your hand, the aroma of the coffee poured into it, the sounds around us – how could such a wealth of experience be faked?
But think about the progress made in computing and information technology over the past few decades.
Today's video games are inhabited by characters that interact realistically with the player, and virtual reality simulators sometimes come close to being indistinguishable from the world outside the window.
And this is quite enough to make a person paranoid.
The science-fiction film The Matrix sets out this idea very clearly. People there are trapped in a virtual world, which they unconditionally perceive as real.
The Matrix, however, is not the first film to explore the phenomenon of an artificial universe. Suffice it to recall David Cronenberg’s Videodrome (1982) or Terry Gilliam’s Brazil (1985).
All these dystopias raise two questions: how do we know that we live in a virtual world, and is it really so important?
Photo caption: Elon Musk, head of Tesla and SpaceX (Getty Images)
The version that we live inside a simulation has influential supporters.
As the American entrepreneur Elon Musk put it in June 2016, the probability of this is "a billion to one".
And Ray Kurzweil, a director of engineering at Google, suggests that "our entire universe is a science experiment of a junior high-school student from another universe."
Some physicists are also ready to consider this possibility. In April 2016, scientists took part in a discussion of this topic at the American Museum of Natural History in New York.
None of these people claims that we are actually floating naked in vats of sticky liquid, studded with wires, like the characters in The Matrix.
But there are at least two possible scenarios according to which the universe around us can be artificial.
Cosmologist Alan Guth of the Massachusetts Institute of Technology suggests that the universe may be real, but it is also a laboratory experiment. According to his hypothesis, our world was created by a kind of superintelligence – just like biologists grow colonies of microorganisms.
Basically, there is nothing that would rule out the possibility of creating a universe as a result of an artificial Big Bang, says Guth.
The universe in which such an experiment was carried out would remain intact. The new world would form in a separate space-time bubble that would quickly separate from the parent universe and lose contact with it.
This scenario does not affect our life in any way. Even if the universe originated in a “test tube” of superintelligence, it is physically as real as if it had formed naturally.
But there is a second scenario that is of particular interest because it undermines the very foundations of our understanding of reality.
Photo caption: It is possible that our Universe was created artificially – but by whom? (TAKE 27 LTD / Science Photo Library)
Musk and other proponents of this hypothesis argue that we are wholly simulated beings – just streams of information in some kind of giant computer, like characters in a video game.
Even our brain is a simulation that responds to artificial stimuli.
In this scenario, there is no matrix from which to get out: our whole life is a matrix, outside of which existence is simply impossible.
But why should we believe in such an intricate version of our own existence?
The answer is very simple: humanity is already capable of simulating reality, and as technology develops further it should ultimately be able to create a perfect simulation whose intelligent inhabitants would perceive it as a completely real world.
We create computer simulations not only for games, but also for research purposes. Scientists simulate different situations of interaction at various levels – from subatomic particles to human communities, galaxies and even universes.
For example, computer simulation of complex animal behavior helps us understand how flocks and swarms are formed. Through simulations, we study the principles of the formation of planets, stars and galaxies.
We can simulate human communities using relatively simple agents making choices based on certain rules.
Photo caption: Supercomputers are becoming ever more powerful (SPL)
Such programs simulate cooperation between people, urban development, traffic flow, national economies, and many other processes.
As the computing power of computers grows, simulations become more complex. Elements of thinking, which are still primitive, are already being built into individual programs that imitate human behavior.
Researchers believe that in the not so distant future, virtual agents will be able to make decisions based not on elementary logic from the “if … then …” category, but on simplified models of human consciousness.
Who can guarantee that soon we will not witness the creation of virtual beings, endowed with consciousness? Advances in understanding the principles of the brain, as well as the vast computational resources that the development of quantum computing promises, are steadily bringing this moment closer.
If we ever reach that stage of technology, we will be running a huge number of simulations at once, far outnumbering our single "real" world.
Is it really impossible, in this case, that some intelligent civilization somewhere in the Universe has already reached this stage?
And if so, it would be logical to assume that we just live inside such a simulation, and not in a world in which virtual realities are created – after all, the probability of this is statistically much higher.
Photo caption: A scientific simulation of the origin of the Universe (Science Photo Library)
Philosopher Nick Bostrom of the University of Oxford has broken this scenario down into three possibilities:
(1) civilizations self-destruct before reaching the level of development at which such simulations can be created;
(2) civilizations that reach this level choose, for whatever reason, not to create such simulations;
(3) we are inside such a simulation.
The question is which of these options appears to be the most likely.
American astrophysicist George Smoot, Nobel laureate in physics, argues that there is no compelling reason to believe in the first two options.
Undoubtedly, mankind persistently creates problems for itself – suffice it to mention global warming, growing stocks of nuclear weapons and the threat of mass extinction of species. But these problems will not necessarily lead to the destruction of our civilization.
Photo caption: Are we all part of a computer simulation? (Andrzej Wojcicki / Science Photo Library)
Moreover, there is no reason why it would be fundamentally impossible to create a very realistic simulation whose characters believe they live in the real world and act of their own free will.
And given how widespread terrestrial planets are in the Universe (one of which, recently discovered, is relatively close to the Earth), it would be the height of arrogance to assume that humanity is the most developed civilization, Smoot notes.
How about option number two? In theory, humanity could refrain from conducting such simulations for ethical reasons – for example, considering it inhumane to artificially create beings who are convinced that their world is real.
But even that seems unlikely, Smoot says. After all, one of the main reasons we run simulations ourselves is because we want to learn more about our own reality. It can help us make the world a better place and possibly save lives.
So there will always be sufficient ethical justification for such experiments.
It looks like we are left with only one option: we are probably inside a simulation.
But all this is nothing more than speculation. Can we find convincing evidence for them?
Many researchers believe it all depends on the quality of the simulation. The most logical approach would be to look for errors in the program – like those that betrayed the artificial nature of the "real world" in the film The Matrix. For example, we might find contradictions in physical laws.
Or, as the late Marvin Minsky, who pioneered artificial intelligence, suggested, there may be inherent rounding errors in approximate calculations.
Photo caption: We can already simulate entire groups of galaxies (Science Photo Library)
For example, when an event has several possible outcomes, the probabilities of those outcomes should sum to one. If they do not, we could say that something is missing.
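As a small aside, the kind of rounding artifact Minsky had in mind is easy to reproduce on any ordinary computer. The sketch below, in Python, is purely illustrative; the numbers have nothing to do with any real physical measurement.

```python
# Illustration of the rounding-error idea: probabilities that should sum
# exactly to 1 often do not once they pass through finite-precision
# floating-point arithmetic.
outcome_probabilities = [0.1] * 10   # ten equally likely outcomes

total = sum(outcome_probabilities)
print(total)         # 0.9999999999999999 on typical IEEE-754 hardware
print(total == 1.0)  # False - a rounding artifact, not missing physics
```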
However, according to some scientists, there are enough reasons to think that we are inside a simulation. For example, our universe looks like it was artificially constructed.
The values of the fundamental physical constants are suspiciously ideal for the origin of life in the Universe – it may seem that they were set deliberately.
Even small changes in these values would lead to a loss of stability of atoms or to the impossibility of forming stars.
Cosmology still cannot convincingly explain this phenomenon. But one possible explanation has to do with the term “multiverse”.
What if there are many universes that have arisen as a result of events similar to the Big Bang, but obeying different physical laws?
By chance, some of these universes are ideal for the origin of life, and if we were not lucky enough to be in one of them, then we would not have asked questions about the universe, because we simply would not exist.
However, the idea of the existence of parallel universes is highly speculative. So there remains at least a theoretical probability that our Universe is in fact a simulation, the parameters of which are specially set by the creators to obtain the results they are interested in – the emergence of stars, galaxies and living things.
Although this possibility cannot be ruled out, such theorizing leads us in a circle.
In the end, we might just as well assume that the parameters of the "real" Universe in which our creators live were also artificially set by someone. In that case, accepting the postulate that we are inside a simulation does not explain the riddle of the values of the physical constants.
Some experts point to very strange discoveries made by modern physics as evidence that something is wrong with the Universe.
Photo caption: Is our Universe nothing more than a set of mathematical formulas? (Mark Garlick / Science Photo Library)
Quantum mechanics – the branch of physics that deals with the extremely small – has supplied especially many such discoveries. It turns out, for instance, that both matter and energy have a granular structure.
Moreover, the “resolution” at which we can observe the Universe has its minimum limit: if you try to observe smaller objects, they simply will not look “clear” enough.
According to Smoot, these strange features of quantum physics could be signs that we are living inside a simulation – just like when you try to view an image on a screen from a very close distance, it disintegrates into individual pixels.
But this is a very crude analogy. Scientists are gradually coming to the conclusion that the “granularity” of the universe at the quantum level may be a consequence of more fundamental laws that determine the limits of cognizable reality.
Another argument in favor of the virtuality of our world says that the Universe, as it seems to a number of scientists, is described by mathematical equations.
And some physicists go even further and claim that our reality is a set of mathematical formulas.
Cosmologist Max Tegmark of the Massachusetts Institute of Technology points out that this is exactly the result one would expect if the laws of physics were based on a computational algorithm.
However, this argument threatens to lead us into a vicious circle of reasoning.
To begin with, if a superintelligence decides to simulate its own “real” world, it is logical to assume that the physical principles underlying such a simulation will reflect those that operate in its own universe – this is exactly what we do.
In this case, the true explanation of the mathematical nature of our world would not be that it is a simulation, but that the “real” world of our creators is arranged in exactly the same way.
In addition, the simulation does not have to be based on mathematical rules. You can make it function in a random, chaotic way.
Photo caption: The Universe may be based on mathematics, some scientists believe (Science Photo Library)
It is not known whether this would lead to the emergence of life in a virtual universe, but the point is that you cannot draw conclusions about how "real" the Universe is from its supposedly mathematical nature.
However, according to physicist James Gates of the University of Maryland, there is more compelling reason to believe that computer simulation is responsible for the laws of physics.
Gates studies matter at the level of quarks – subatomic particles that make up protons and neutrons in atomic nuclei. According to him, quarks obey rules that somewhat resemble computer codes that correct errors in data processing.
Is this possible?
Maybe so. But it is also possible that this interpretation of physical laws is just the latest example of how humanity has always interpreted the world around it in terms of its newest technological achievements.
In the era of classical Newtonian mechanics, the universe was represented as a clockwork. And later, at the dawn of the computer era, DNA was considered as a kind of repository of a digital code with the function of storing and reading information.
Perhaps we are just extrapolating our current technological hobbies to the laws of physics every time.
It seems very difficult, if not impossible, to find convincing evidence that we are inside a simulation.
Unless the program code contains a great many errors, it will be hard to devise a test whose results could not be explained in some other, more rational way.
Even if our world is a simulation, Smoot says, we may never find unambiguous confirmation of this – simply because such a task is beyond our mind.
After all, one of the goals of the simulation is to create characters who would function within the established rules, and not deliberately violate them.
However, there is a more serious reason why we may not need to worry too much about the fact that we are just lines of code.
Some physicists believe that this may be exactly what the "real" world is like anyway.
The terminological apparatus used to describe quantum physics is increasingly beginning to resemble a dictionary of computer science and computing.
Some physicists suspect that, at a fundamental level, nature may not be pure mathematics, but pure information: bits like computer ones and zeros.
Leading theoretical physicist John Wheeler named this conjecture “It from Bit”.
According to this hypothesis, everything that happens at the level of interactions of fundamental particles and above is a kind of computational process.
"The Universe can be thought of as a giant quantum computer," says Seth Lloyd of the Massachusetts Institute of Technology. "If we look at the internal mechanism of the Universe – that is, the structure of matter at the smallest possible scale – we will see [quantum] bits involved in local digital operations."
Photo caption: The quantum world looks blurred and unclear to us (Richard Kail / Science Photo Library)
Thus, if reality is just information, it doesn’t matter if we are inside a simulation or not: the answer to this question does not make us more or less “real”.
Be that as it may, we simply cannot be anything but information.
Is it of fundamental importance for us whether this information was programmed by nature or some kind of superintelligence? It is unlikely – well, except that in the second case, our creators are theoretically able to intervene in the course of the simulation and even stop it altogether.
But what can we do to avoid this?
Tegmark recommends that we all lead an interesting life whenever possible, so as not to bore our creators.
Of course this is a joke. Surely any of us will have more compelling motives to live life to the fullest than the fear that otherwise we will be “erased”.
But the very formulation of the question points to certain flaws in the logic of reasoning about the reality of the Universe.
The idea that some higher-order experimenters will eventually get tired of messing with us and decide to run some other simulation sounds too anthropomorphic.
Like Kurzweil’s comment about the school experiment, it implies that our creators are just moody teenagers playing with game consoles.
The discussion of Bostrom's three options suffers from a similar solipsism. It is little more than an attempt to describe the Universe in terms of humanity's achievements in the XXI century: "We develop computer games. I bet superintelligent beings would do the same, only their games would be much cooler!"
Of course, any attempt to imagine how superintelligent beings might operate will inevitably lead to an extrapolation of our own experience. But this does not change the unscientific nature of this approach.
Photo caption: The Universe can also be represented as a quantum computer – but what would that give us? (Science Photo Library)
It is probably no coincidence that many proponents of the idea of an all-encompassing simulation admit that in their youth they read science fiction avidly.
That choice of reading may have predetermined their adult interest in questions of extraterrestrial intelligence, and it also encourages them now to cast their reflections in forms familiar from the genre.
They seem to be viewing space through the window of the starship Enterprise [a reference to the American television series Star Trek – translator's note].
Harvard physicist Lisa Randall cannot understand the enthusiasm with which some of her colleagues are running around with the idea of reality as a total simulation. For her, this does not change anything in the approach to the perception and study of the world.
According to Randall, it all depends on what exactly we choose to mean by "reality".
It is unlikely that Elon Musk thinks all day long about the fact that the people around him, his family and friends are just constructs consisting of data streams and projected into his mind.
In part, that is because it is simply impossible to think about the world around him that way all the time.
But what we all know deep down is much more important: the only definition of reality worth our attention is our immediate sensations and experiences, and not a hypothetical world hidden “behind the scenes”.
However, the interest in what may actually be behind the world, accessible to us in sensations, is nothing new. Philosophers have asked similar questions for centuries.
Photo caption: From our point of view, the quantum world is illogical (Mike Agliolo / Science Photo Library)
Plato, for one, suggested that what we take for reality may be only shadows projected onto the wall of a cave.
According to Immanuel Kant, although a certain “thing-in-itself” underlying the images we perceive may exist, we are not given to know it.
The famous phrase of Rene Descartes “I think, therefore I am” means that the ability to think is the only clear criterion of existence.
The concept of "the world as a simulation" presents this old philosophical problem in a modern high-tech wrapper, and there is nothing much wrong with that.
Like many other paradoxes of philosophy, it forces us to take a critical look at some ingrained beliefs.
But until we can show convincingly that separating "reality" from our experience of it leads to observable differences in our behavior or in the phenomena we observe, our understanding of reality will not change in any significant way.
In the early 18th century, the Anglo-Irish philosopher George Berkeley argued that the world is an illusion. His critic, the writer Samuel Johnson, responded by kicking a stone and exclaiming: "I refute it thus!"
Johnson's kick did not actually refute Berkeley. But as responses to such claims go, it was perhaps the most apt one possible.
Computer history: from calculator to qubits
When did the computer appear?
It is very difficult to pin down exactly when the computer was invented. Its predecessors – mechanical calculating devices such as the abacus – were invented long before the common era. The term "computer" itself, however, is much younger, appearing only in the XX century.
Along with punch-card machines such as the IBM 601 (1935), the early inventions of the German engineer Konrad Zuse played an important role in the history of computing. Today, several machines invented around the same time are commonly regarded as the first computers.
1936: Konrad Zuse and Z1
Model of the Z1 computer at the German Technical Museum in Berlin
In 1936, Konrad Zuse began developing the first programmable calculator, completing it in 1938. The Z1 was the first binary computer and worked with punched tape. Unfortunately, its mechanical parts were very unreliable. A replica of the Z1 is on display at the Technology Museum in Berlin.
1941: Konrad Zuse and the Z3
The Z3 was the successor to the Z1 and the first freely programmable computer that could be used for a variety of tasks, not just calculation. Many historians consider the Z3 the world's first functioning general-purpose computer.
1946: First Generation Data Processing Systems
ENIAC
In 1946, the researchers Eckert and Mauchly unveiled the first fully electronic computer, ENIAC (the Electronic Numerical Integrator and Computer). It was used by the US Army to calculate ballistics tables. ENIAC could handle the basic mathematical operations and compute square roots.
1956-1980: second- to fifth-generation data processing systems
Programma 101
During these years, higher-level programming languages were developed, and the principles of virtual memory, the first compatible computers, databases, and multiprocessor systems appeared. The world's first freely programmable desktop computer was created by Olivetti: in 1965, the Programma 101 electronic machine went on sale for $3,200.
1970-1974: Computer Revolution
Xerox Alto
Microprocessors became cheaper, and a great many computers came to market during this period, led above all by Intel and Fairchild. In these years Intel created the first commercially available microprocessor: the 4-bit Intel 4004, introduced on November 15, 1971. In 1973 came the Xerox Alto – the first computer with a graphical user interface, a mouse, and an integrated Ethernet card.
1976-1979: Microcomputers
Microcomputers became popular thanks to new operating systems and floppy disk drives. Microsoft established itself in the market, and the first computer games and standard software packages appeared. In 1978, the first 32-bit computer from DEC entered the market.
IBM 5100
IBM developed the IBM 5100, the first "portable" computer, weighing 25 kilograms. It had 16 kilobytes of RAM and a 16-line by 64-character display, and cost over $9,000. It was this high price that kept it from establishing itself in the market.
1980-1984: the first “real” PC
Atari 800XL
The 1980s saw the arrival of "home computers" such as the Commodore VC20, the Atari XL series, and the Amiga. IBM had a major impact on future generations of PCs with the introduction of the IBM PC in 1981. The hardware class IBM defined is still with us today: modern x86 PCs descend from developments of the original IBM design.
In the late 1970s there were many devices and manufacturers on the market, but IBM became the dominant supplier of computer hardware. With the IBM PC in 1981, the company released the first "real" PC and set the direction for the development of computer technology up to the present day. Around the same time, applications we still know today, such as Word and NetWare, came to market.
In 1984, Apple introduced the first Macintosh, with a focus on user-friendliness. That same year, serial production of PCs began in the USSR; the first domestically mass-produced computer was called the AGAT.
1985/1986: further development of computer technology
MicroVAX II
In 1985, the Atari 520ST was released – an extremely powerful computer for its time. In the same years, the first MicroVAX II minicomputer appeared. In 1986, IBM announced a new operating system, OS/2.
1990: Introduction of Windows
Windows 3.0, released on May 22, 1990, was a major breakthrough for Microsoft. About three million copies of the operating system were sold in the first six months alone. The Internet began to be seen as a global means of communication.
1991-1995: Windows and Linux
As technology progressed, once very expensive computers became more affordable. Word, Excel, and PowerPoint were finally bundled together into the Office suite. In 1991, the Finnish developer Linus Torvalds began work on Linux.
Ethernet became the standard for data transmission in many companies. With computers able to connect to one another, the client-server model grew more and more popular, making networked work possible.
1996-2000: The Internet Gains More Significance
The HTML markup language, the HTTP transfer protocol, and the URL (Uniform Resource Locator), developed by the computer scientist Tim Berners-Lee, gave each site an address and allowed content to be transferred from web server to browser. From 1995 onward, many web editors became available, allowing large numbers of people to create their own sites.
XXI century: further development
PowerMac G5
In 2003, Apple released the PowerMac G5, which Apple billed as the first 64-bit personal computer. In 2005, Intel introduced its first dual-core processors.
In the years that followed, development focused mainly on multi-core processors, computation on graphics chips, and tablet computers. Since 2005, environmental considerations have also played a growing role in the development of computer technology.
The latest technology: a quantum computer
Today, scientists are working on quantum computers, machines based on qubits. We explained how exactly quantum computers work in our magazine and in this article.
Photo: wikipedia.org, pxhere.com