How does NextAuth.js handle environment variables? What are the key configuration options? How can you customize session management? And what are the best practices for securing a NextAuth.js implementation?
Essential Environment Variables for NextAuth.js
NextAuth.js relies on several crucial environment variables to function properly, especially in production environments. Understanding these variables is key to a secure and efficient implementation.
NEXTAUTH_URL: The Foundation of Your Authentication Setup
The NEXTAUTH_URL variable is fundamental when deploying your application to production. It should be set to the canonical URL of your site:
- NEXTAUTH_URL=https://example.com
For applications using a custom base path, you’ll need to specify the full route to the API endpoint:
- NEXTAUTH_URL=https://example.com/custom-route/api/auth
When using a custom base path, remember to pass the basePath page prop to the <SessionProvider> component for proper functionality.
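As a rough sketch of that wiring (assuming the Pages Router and a custom base path of /custom-route; the file location and prop values are illustrative):
// pages/_app.js
import { SessionProvider } from "next-auth/react"

export default function App({ Component, pageProps: { session, ...pageProps } }) {
  return (
    // basePath must match the full path to the NextAuth.js API route
    <SessionProvider session={session} basePath="/custom-route/api/auth">
      <Component {...pageProps} />
    </SessionProvider>
  )
}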
NEXTAUTH_SECRET: Securing Your Authentication
The NEXTAUTH_SECRET variable is crucial for encrypting the NextAuth.js JWT and hashing email verification tokens. It serves as the default value for the secret option in NextAuth and Middleware.
To generate a secure secret, you can use the following OpenSSL command:
$ openssl rand -base64 32
Failing to provide a secret or NEXTAUTH_SECRET will result in an error in production environments.
NEXTAUTH_URL_INTERNAL: Optimizing Server-Side Calls
If provided, NEXTAUTH_URL_INTERNAL allows server-side calls to use an alternative URL instead of NEXTAUTH_URL. This is particularly useful in environments where the server doesn’t have access to the canonical URL of your site.
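Putting the three variables together, a production .env file might look something like the following sketch (all values are placeholders; the internal URL only makes sense if your server reaches itself over a private address):
NEXTAUTH_URL=https://example.com
NEXTAUTH_URL_INTERNAL=http://10.240.8.16
NEXTAUTH_SECRET=generated-with-openssl-rand-base64-32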
Configuring Providers in NextAuth.js
The providers option in NextAuth.js is an essential part of the configuration process. It allows you to specify an array of authentication providers for user sign-in.
Implementing Multiple Authentication Providers
NextAuth.js supports a wide range of built-in providers, including popular services like Google, Facebook, Twitter, and GitHub. You can also implement custom providers to suit your specific needs.
Here’s an example of how you might configure multiple providers:
import NextAuth from "next-auth"
import GoogleProvider from "next-auth/providers/google"
import FacebookProvider from "next-auth/providers/facebook"

export default NextAuth({
  providers: [
    GoogleProvider({
      clientId: process.env.GOOGLE_ID,
      clientSecret: process.env.GOOGLE_SECRET,
    }),
    FacebookProvider({
      clientId: process.env.FACEBOOK_ID,
      clientSecret: process.env.FACEBOOK_SECRET,
    }),
  ],
})
Can you mix and match different types of providers? Absolutely! NextAuth.js allows you to combine various authentication methods, giving your users flexibility in how they sign in to your application.
Advanced Session Configuration in NextAuth.js
NextAuth.js offers robust session management capabilities, allowing you to fine-tune how user sessions are handled in your application.
Customizing Session Strategy and Duration
The session object in your NextAuth.js configuration allows you to specify various options:
session: {
  strategy: "database",
  maxAge: 30 * 24 * 60 * 60, // 30 days
  updateAge: 24 * 60 * 60, // 24 hours
  // Requires: import { randomUUID, randomBytes } from "crypto"
  generateSessionToken: () => {
    return randomUUID?.() ?? randomBytes(32).toString("hex")
  }
}
How does the strategy option affect session management? When set to “database”, NextAuth.js stores session information in your database, allowing for more control and potential scalability. The “jwt” strategy, on the other hand, stores session data in an encrypted JWT within a cookie.
Optimizing Session Updates
The updateAge option helps you balance security and performance by controlling how frequently session data is written to the database. Setting this value appropriately can significantly reduce database write operations while maintaining session validity.
Leveraging JSON Web Tokens in NextAuth.js
JSON Web Tokens (JWTs) are a powerful tool in NextAuth.js, offering a secure and efficient way to handle user authentication.
Configuring JWT Options
NextAuth.js allows you to customize various aspects of JWT handling:
jwt: {
maxAge: 60 * 60 * 24 * 30,
async encode() {},
async decode() {},
}
What’s the significance of the maxAge option in JWT configuration? This setting determines the lifespan of the JWT issued by NextAuth.js, after which it will no longer be considered valid.
Decoding and Verifying JWTs
NextAuth.js provides a built-in getToken() helper method for verifying and decrypting JWTs:
import { getToken } from "next-auth/jwt"

const secret = process.env.NEXTAUTH_SECRET

export default async function handler(req, res) {
  const token = await getToken({ req, secret })
  console.log("JSON Web Token", token)
  res.end()
}
This method allows you to easily access and validate the JWT payload in your server-side code.
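The same helper can also be used in Next.js Middleware to guard routes. A hedged sketch (the matcher path and redirect target are assumptions for illustration):
// middleware.js
import { NextResponse } from "next/server"
import { getToken } from "next-auth/jwt"

export async function middleware(req) {
  // Reads the session cookie (or an Authorization: Bearer header) and decrypts the JWT
  const token = await getToken({ req, secret: process.env.NEXTAUTH_SECRET })
  if (!token) {
    // No valid session: send the visitor to the built-in sign-in page
    return NextResponse.redirect(new URL("/api/auth/signin", req.url))
  }
  return NextResponse.next()
}

export const config = { matcher: ["/dashboard/:path*"] }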
Securing Your NextAuth.js Implementation
Security is paramount when implementing authentication in your application. NextAuth.js provides several features to enhance the security of your auth system.
Implementing CSRF Protection
Cross-Site Request Forgery (CSRF) protection is built into NextAuth.js by default. It uses the double submit cookie pattern to prevent CSRF attacks.
How does NextAuth.js implement CSRF protection? It issues a CSRF token in a cookie and expects the same token to be submitted with state-changing requests such as sign-in and sign-out; the two values are compared to verify the request's authenticity.
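If you build a custom form that posts directly to the NextAuth.js endpoints, you must include that token yourself. A hedged sketch using the getCsrfToken() helper (the email sign-in form and page location are illustrative):
// pages/auth/email-signin.js
import { getCsrfToken } from "next-auth/react"

export default function EmailSignIn({ csrfToken }) {
  return (
    <form method="post" action="/api/auth/signin/email">
      {/* Hidden field required by the double submit cookie check */}
      <input name="csrfToken" type="hidden" defaultValue={csrfToken} />
      <input type="email" name="email" placeholder="you@example.com" />
      <button type="submit">Sign in with Email</button>
    </form>
  )
}

export async function getServerSideProps(context) {
  return { props: { csrfToken: await getCsrfToken(context) } }
}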
Securing Cookies and JWT
NextAuth.js automatically sets secure flags on cookies when your site is served over HTTPS. For JWTs, it uses encryption by default to protect sensitive information.
cookies: {
  sessionToken: {
    name: `__Secure-next-auth.session-token`,
    options: {
      httpOnly: true,
      sameSite: 'lax',
      path: '/',
      secure: true
    }
  },
}
These settings ensure that cookies are only transmitted over secure connections and are not accessible via client-side scripts.
Optimizing NextAuth.js for Performance
While security is crucial, it’s equally important to ensure that your authentication system doesn’t become a bottleneck for your application’s performance.
Efficient Database Queries
When using a database adapter, NextAuth.js optimizes queries to minimize database load. However, you can further improve performance by indexing frequently queried fields in your database schema.
Which fields should you consider indexing? Typically, email addresses, user IDs, and session tokens are good candidates for indexing, as they are frequently used in lookup operations.
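The exact statements depend on your adapter and database, so treat the following as a purely illustrative sketch for a MongoDB-backed setup (collection and field names are assumptions based on a typical adapter schema; verify them against your own):
// scripts/create-indexes.mjs
import { MongoClient } from "mongodb"

const client = new MongoClient(process.env.MONGODB_URI)
await client.connect()
const db = client.db()

// Unique lookups performed during sign-in and session resolution
await db.collection("users").createIndex({ email: 1 }, { unique: true })
await db.collection("sessions").createIndex({ sessionToken: 1 }, { unique: true })
await db.collection("accounts").createIndex({ provider: 1, providerAccountId: 1 }, { unique: true })

await client.close()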
Caching Strategies
Implementing caching can significantly reduce the load on your database and improve response times. Consider using a distributed cache like Redis for session data:
// Note: the adapter package shown here is illustrative; check the official
// adapters documentation for a maintained Redis option (for example, the
// Upstash Redis adapter).
import { createClient } from 'redis'
import { RedisAdapter } from "@auth/redis-adapter"

const client = createClient({
  url: process.env.REDIS_URL
})

export default NextAuth({
  adapter: RedisAdapter(client),
  // ... other options
})
This approach can be particularly beneficial for applications with high traffic or those requiring low-latency authentication checks.
Extending NextAuth.js Functionality
NextAuth.js is designed to be flexible and extensible, allowing you to customize its behavior to meet your specific requirements.
Custom Callbacks
NextAuth.js provides several callback functions that you can use to extend its functionality:
callbacks: {
  async signIn({ user, account, profile, email, credentials }) {
    return true
  },
  async redirect({ url, baseUrl }) {
    return baseUrl
  },
  async session({ session, user, token }) {
    return session
  },
  async jwt({ token, user, account, profile, isNewUser }) {
    return token
  }
}
How can you leverage these callbacks to enhance your auth flow? The signIn callback, for instance, allows you to implement additional checks or actions when a user signs in, such as verifying user status or logging sign-in attempts.
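As a hedged example, a signIn callback that only admits users from a single email domain might look like this (the domain rule itself is an assumption for illustration, not part of NextAuth.js):
callbacks: {
  async signIn({ user, account, profile }) {
    const email = user?.email ?? ""
    // Illustrative rule: only allow addresses from example.com
    if (email.endsWith("@example.com")) {
      return true // continue the sign-in flow
    }
    return false // block sign-in; the user lands on the error page
  },
}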
Custom Pages
NextAuth.js allows you to create custom pages for sign-in, sign-out, and error handling:
pages: {
signIn: '/auth/signin',
signOut: '/auth/signout',
error: '/auth/error',
verifyRequest: '/auth/verify-request',
newUser: '/auth/new-user'
}
This feature enables you to maintain a consistent look and feel across your authentication pages, aligning them with your application’s design.
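A minimal sketch of a custom sign-in page backing the signIn: '/auth/signin' entry above (assumes the Pages Router; getProviders() and signIn() come from next-auth/react):
// pages/auth/signin.js
import { getProviders, signIn } from "next-auth/react"

export default function SignIn({ providers }) {
  return (
    <>
      {Object.values(providers).map((provider) => (
        <button key={provider.id} onClick={() => signIn(provider.id)}>
          Sign in with {provider.name}
        </button>
      ))}
    </>
  )
}

export async function getServerSideProps() {
  return { props: { providers: await getProviders() } }
}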
Integrating NextAuth.js with External Services
NextAuth.js can be seamlessly integrated with various external services to enhance its capabilities and provide additional features to your users.
OAuth Providers
NextAuth.js supports a wide range of OAuth providers out of the box. Integrating these providers can offer users more sign-in options and potentially simplify the onboarding process.
import GitHubProvider from "next-auth/providers/github"
export default NextAuth({
providers: [
GitHubProvider({
clientId: process.env.GITHUB_ID,
clientSecret: process.env.GITHUB_SECRET,
}),
],
})
What are the benefits of using OAuth providers? OAuth integration can enhance security by delegating authentication to trusted third-party services and can simplify user management by leveraging existing user accounts.
Email Providers
For applications requiring email verification or passwordless authentication, NextAuth.js can be integrated with email service providers:
import EmailProvider from "next-auth/providers/email"
export default NextAuth({
providers: [
EmailProvider({
server: process.env.EMAIL_SERVER,
from: process.env.EMAIL_FROM,
}),
],
})
This setup allows you to implement features like magic links or one-time passwords sent via email.
NextAuth.js provides a robust and flexible authentication solution for Next.js applications. By understanding and leveraging its various configuration options, security features, and extension points, you can create a secure, performant, and user-friendly authentication system tailored to your specific needs. Remember to always prioritize security, especially when dealing with sensitive user data, and stay updated with the latest best practices in web authentication.
Options | NextAuth.js
Environment Variables
NEXTAUTH_URL
When deploying to production, set the NEXTAUTH_URL
environment variable to the canonical URL of your site.
NEXTAUTH_URL=https://example.com
If your Next.js application uses a custom base path, specify the route to the API endpoint in full. More information about the usage of a custom base path here.
e.g. NEXTAUTH_URL=https://example.com/custom-route/api/auth
When you're using a custom base path, you will need to pass the basePath page prop to the <SessionProvider>. More information here.
When you deploy to Vercel, this variable is detected automatically from the System Environment Variables, so you don't have to define it. Make sure Automatically expose System Environment Variables is checked in your Project Settings.
NEXTAUTH_SECRET
Used to encrypt the NextAuth.js JWT, and to hash email verification tokens. This is the default value for the secret option in NextAuth and Middleware.
NEXTAUTH_URL_INTERNAL
If provided, server-side calls will use this instead of NEXTAUTH_URL. Useful in environments when the server doesn't have access to the canonical URL of your site. Defaults to NEXTAUTH_URL.
NEXTAUTH_URL_INTERNAL=http://10.240.8.16
Options
Options are passed to NextAuth.js when initializing it in an API route.
providers
- Default value: []
- Required: Yes
Description
An array of authentication providers for signing in (e.g. Google, Facebook, Twitter, GitHub, Email, etc) in any order. This can be one of the built-in providers or an object with a custom provider.
See the providers documentation for a list of supported providers and how to use them.
secret
- Default value: string (SHA hash of the “options” object) in development, no default in production.
- Required: Yes, in production!
Description
A random string is used to hash tokens, sign/encrypt cookies and generate cryptographic keys.
If you set NEXTAUTH_SECRET as an environment variable, you don't have to define this option.
If no value is specified in development (and there is no NEXTAUTH_SECRET variable either), it uses a hash of all configuration options, including OAuth Client IDs / secrets, for entropy.
danger
Not providing any secret or NEXTAUTH_SECRET will throw an error in production.
You can quickly create a good value on the command line via this openssl command:
$ openssl rand -base64 32
If you rely on the default secret generation in development, you might notice JWT decryption errors, since the secret changes whenever you change your configuration. Defining an explicit secret will make this problem go away. We will likely make this option mandatory, even in development, in the future.
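For example, a minimal sketch that sets the secret explicitly in the options object (assuming NEXTAUTH_SECRET is defined in the environment):
import NextAuth from "next-auth"

export default NextAuth({
  secret: process.env.NEXTAUTH_SECRET,
  // ...providers and other options
})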
session
- Default value: object
- Required: No
Description
The session object and all properties on it are optional.
Default values for this option are shown below:
session: {
  // Choose how you want to save the user session.
  // The default is `"jwt"`, an encrypted JWT (JWE) stored in the session cookie.
  // If you use an `adapter` however, we default it to `"database"` instead.
  // You can still force a JWT session by explicitly defining `"jwt"`.
  // When using `"database"`, the session cookie will only contain a `sessionToken` value,
  // which is used to look up the session in the database.
  strategy: "database",

  // Seconds - How long until an idle session expires and is no longer valid.
  maxAge: 30 * 24 * 60 * 60, // 30 days

  // Seconds - Throttle how frequently to write to database to extend a session.
  // Use it to limit write operations. Set to 0 to always update the database.
  // Note: This option is ignored if using JSON Web Tokens
  updateAge: 24 * 60 * 60, // 24 hours

  // The session token is usually either a random UUID or string, however if you
  // need a more customized session token string, you can define your own generate function.
  generateSessionToken: () => {
    return randomUUID?.() ?? randomBytes(32).toString("hex")
  }
}
jwt
- Default value: object
- Required: No
Description
JSON Web Tokens can be used for session tokens if enabled with the session: { strategy: "jwt" } option. JSON Web Tokens are enabled by default if you have not specified an adapter. JSON Web Tokens are encrypted (JWE) by default. We recommend you keep this behaviour. See the Override JWT encode and decode methods advanced option.
JSON Web Token Options
jwt: {
// The maximum age of the NextAuth.js issued JWT in seconds.
// Defaults to `session.maxAge`.
maxAge: 60 * 60 * 24 * 30,
// You can define your own encode/decode functions for signing and encryption
async encode() {},
async decode() {},
}
An example JSON Web Token contains a payload like this:
{
name: 'Iain Collins',
email: '[email protected]',
picture: 'https://example.com/image.jpg',
iat: 1594601838,
exp: 1597193838
}
JWT Helper
You can use the built-in getToken() helper method to verify and decrypt the token, like this:
import { getToken } from "next-auth/jwt"

const secret = process.env.NEXTAUTH_SECRET

export default async function handler(req, res) {
  // if using the `NEXTAUTH_SECRET` env variable, we detect it, and you won't actually need the `secret` option
  // const token = await getToken({ req })
  const token = await getToken({ req, secret })
  console.log("JSON Web Token", token)
  res.end()
}
For convenience, this helper function is also able to read and decode tokens passed in an Authorization: 'Bearer token' HTTP header.
Required
The getToken() helper requires the following options:
- req – (object) Request object
- secret – (string) JWT Secret. Use NEXTAUTH_SECRET instead.
You must also pass any options configured on the jwt option to the helper, e.g. including a custom session maxAge and custom signing and/or encryption keys or options.
Optional
It also supports the following options:
- secureCookie – (boolean) Use secure prefixed cookie name. By default, the helper function will attempt to determine if it should use the secure prefixed cookie (e.g. true in production and false in development, unless NEXTAUTH_URL contains an HTTPS URL).
- cookieName – (string) Session token cookie name. The secureCookie option is ignored if cookieName is explicitly specified.
- raw – (boolean) Get raw token (not decoded). If set to true, returns the raw token without decrypting or verifying it.
The JWT is stored in the Session Token cookie, the same cookie used for tokens with database sessions.
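A hedged sketch that combines a few of these options in an API route (the cookie name shown is the secure-prefixed default; adjust it to your deployment):
import { getToken } from "next-auth/jwt"

export default async function handler(req, res) {
  // Decoded and verified payload (default behaviour)
  const token = await getToken({ req, secret: process.env.NEXTAUTH_SECRET })

  // Raw, still-encrypted JWT string read from an explicitly named cookie
  const rawToken = await getToken({
    req,
    secret: process.env.NEXTAUTH_SECRET,
    cookieName: "__Secure-next-auth.session-token",
    raw: true,
  })

  res.status(200).json({ hasSession: Boolean(token), rawLength: rawToken ? rawToken.length : 0 })
}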
pages
- Default value: {}
- Required: No
Description
Specify URLs to be used if you want to create custom sign in, sign out and error pages.
Pages specified will override the corresponding built-in page.
For example:
pages: {
signIn: '/auth/signin',
signOut: '/auth/signout',
error: '/auth/error', // Error code passed in query string as ?error=
verifyRequest: '/auth/verify-request', // (used for check email message)
newUser: '/auth/new-user' // New users will be directed here on first sign in (leave the property out if not of interest)
}
When using this configuration, ensure that these pages actually exist. For example error: '/auth/error' refers to a page file at pages/auth/error.js.
See the documentation for the pages option for more information.
callbacks
- Default value: object
- Required: No
Description
Callbacks are asynchronous functions you can use to control what happens when an action is performed.
Callbacks are extremely powerful, especially in scenarios involving JSON Web Tokens as they allow you to implement access controls without a database and to integrate with external databases or APIs.
You can specify a handler for any of the callbacks below.
callbacks: {
async signIn({ user, account, profile, email, credentials }) {
return true
},
async redirect({ url, baseUrl }) {
return baseUrl
},
async session({ session, token, user }) {
return session
},
async jwt({ token, user, account, profile, isNewUser }) {
return token
}
}
See the callbacks documentation for more information on how to use the callback functions.
events
- Default value: object
- Required: No
Description
Events are asynchronous functions that do not return a response; they are useful for audit logging.
You can specify a handler for any of these events below – e.g. for debugging or to create an audit log.
The content of the message object varies depending on the flow (e.g. OAuth or Email authentication flow, JWT or database sessions, etc). See the events documentation for more information on the form of each message object and how to use the events functions.
events: {
async signIn(message) { /* on successful sign in */ },
async signOut(message) { /* on signout */ },
async createUser(message) { /* user created */ },
async updateUser(message) { /* user updated - e.g. their email was verified */ },
async linkAccount(message) { /* account (e.g. Twitter) linked to a user */ },
async session(message) { /* session is active */ },
}
adapter
- Default value: none
- Required: No
Description
By default, NextAuth.js no longer includes an adapter. If you would like to persist user / account data, please install one of the many available adapters. More information can be found in the adapter documentation.
debug
- Default value: false
- Required: No
Description
Set debug to true to enable debug messages for authentication and database operations.
logger
- Default value: console
- Required: No
Description
Override any of the logger levels (undefined levels will use the built-in logger), and intercept logs in NextAuth.js. You can use this to send NextAuth.js logs to a third-party logging service.
The code parameter for error and warn is explained in the Warnings and Errors pages respectively.
Example:
/pages/api/auth/[...nextauth].js

import log from "logging-service"

export default NextAuth({
  ...
  logger: {
    error(code, metadata) {
      log.error(code, metadata)
    },
    warn(code) {
      log.warn(code)
    },
    debug(code, metadata) {
      log.debug(code, metadata)
    }
  }
  ...
})
If the debug level is defined by the user, it will be called regardless of the debug: false option.
theme
- Default value: object
- Required: No
Description
Changes the color scheme theme of pages as well as allows some minor customization. Set theme.colorScheme to "light" if you want to force pages to always be light. Set it to "dark" if you want to force pages to always be dark. Set it to "auto" (or leave this option out) if you want the pages to follow the preferred system theme. (Uses the prefers-color-scheme media query.)
In addition, you can define a logo URL in theme.logo which will be rendered above the main card in the default signin/signout/error/verify-request pages, as well as a theme.brandColor which will affect the accent color of these pages.
The sign-in button's background color will match the brandColor and defaults to "#346df1". The text color is #fff by default, but if your brand color gives a weak contrast, correct it with the buttonText color option.
theme: {
colorScheme: "auto", // "auto" | "dark" | "light"
brandColor: "", // Hex color code
logo: "", // Absolute URL to image
buttonText: "" // Hex color code
}
Advanced Options
Advanced options are passed the same way as basic options, but may have complex implications or side effects. You should try to avoid using advanced options unless you are very comfortable using them.
useSecureCookies
- Default value: true for HTTPS sites / false for HTTP sites
- Required: No
Description
When set to true (the default for all site URLs that start with https://), all cookies set by NextAuth.js will only be accessible from HTTPS URLs.
This option defaults to false on URLs that start with http:// (e.g. http://localhost:3000) for developer convenience.
Properties on any custom cookies that are specified override this option.
danger
Setting this option to false in production is a security risk and may allow sessions to be hijacked. It is intended to support development and testing. Using this option is not recommended.
cookies
- Default value: {}
- Required: No
Description
Cookies in NextAuth.js are chunked by default, meaning that once they reach the 4kb limit, we will create a new cookie with the .{number} suffix and reassemble the cookies in the correct order when parsing / reading them. This was introduced to avoid size constraints which can occur when users want to store additional data in their sessionToken, for example.
You can override the default cookie names and options for any of the cookies used by NextAuth.js.
This is an advanced option and using it is not recommended as you may break authentication or introduce security flaws into your application.
You can specify one or more cookies with custom properties, but if you specify custom options for a cookie you must provide all the options for that cookie.
If you use this feature, you will likely want to create conditional behaviour to support setting different cookies policies in development and production builds, as you will be opting out of the built-in dynamic policy.
An example of a use case for this option is to support sharing session tokens across subdomains.
Example
cookies: {
sessionToken: {
name: `__Secure-next-auth.session-token`,
options: {
httpOnly: true,
sameSite: 'lax',
path: '/',
secure: true
}
},
callbackUrl: {
name: `__Secure-next-auth.callback-url`,
options: {
sameSite: 'lax',
path: '/',
secure: true
}
},
csrfToken: {
name: `__Host-next-auth.csrf-token`,
options: {
httpOnly: true,
sameSite: 'lax',
path: '/',
secure: true
}
},
pkceCodeVerifier: {
name: `${cookiePrefix}next-auth.pkce.code_verifier`,
options: {
httpOnly: true,
sameSite: 'lax',
path: '/',
secure: useSecureCookies,
maxAge: 900
}
},
state: {
name: `${cookiePrefix}next-auth.state`,
options: {
httpOnly: true,
sameSite: "lax",
path: "/",
secure: useSecureCookies,
maxAge: 900
},
},
nonce: {
name: `${cookiePrefix}next-auth.nonce`,
options: {
httpOnly: true,
sameSite: "lax",
path: "/",
secure: useSecureCookies,
},
},
}
danger
Using a custom cookie policy may introduce security flaws into your application and is intended as an option for advanced users who understand the implications. Using this option is not recommended.
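To illustrate the subdomain use case mentioned earlier, a hedged sketch that adds a domain attribute to the session cookie so it is sent to every subdomain of example.com (the domain value is an assumption for that scenario):
cookies: {
  sessionToken: {
    name: `__Secure-next-auth.session-token`,
    options: {
      httpOnly: true,
      sameSite: "lax",
      path: "/",
      secure: true,
      // Share the session cookie across app.example.com, shop.example.com, etc.
      domain: ".example.com",
    },
  },
},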
Override JWT encode and decode methods
NextAuth.js uses encrypted JSON Web Tokens (JWE) by default. Unless you have a good reason, we recommend keeping this behaviour, although you can override it using the encode and decode methods. Both methods must be defined at the same time.
IMPORTANT: If you use middleware to protect routes, make sure the same methods are also set in the _middleware.ts options.
jwt: {
async encode(params: {
token: JWT
secret: string
maxAge: number
}): Promise<string> {
// return a custom encoded JWT string
return "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c"
},
async decode(params: {
token: string
secret: string
}): Promise<JWT | null> {
// return a `JWT` object, or `null` if decoding failed
return {}
},
}
Types Overview – Pydantic
Where possible Pydantic uses standard library types to define fields, thus smoothing
the learning curve. For many useful applications, however, no standard library type exists,
so Pydantic implements many commonly used types.
There are also more complex types that can be found in the Pydantic Extra Types.
If no existing type suits your purpose you can also implement your own Pydantic-compatible types with custom properties and validation.
The following sections describe the types supported by Pydantic.
- Standard Library Types — types from the Python standard library.
- Booleans — bool types.
- ByteSize — a type that allows handling byte string representations in your model.
- Callables — Callable types.
- Datetimes — datetime, date, time, and timedelta types.
- Dicts and Mapping Types — dict types and mapping types.
- Enums and Choices — uses Python's standard enum classes to define choices.
- File Types — types for handling files and paths.
- JSON — a type that allows you to store JSON data in your model.
- Lists and Tuples — list and tuple types.
- Number Types — int, float, Decimal, and other number types.
- Secret Types — types for storing sensitive information that you do not want to be visible in logging or tracebacks.
- Sequence, Iterable, & Iterator — iterable types including Sequence, Iterable, and Iterator.
- Sets and frozenset — set and frozenset types.
- Strict Types — types that enable you to prevent coercion from compatible types.
- String Types — str types.
- Type and TypeVar — Type and TypeVar types.
- Types with Fields — types that allow you to define fields.
- Unions — allows a model attribute to accept different types.
- URLs — URI/URL validation types.
- UUIDs — types that allow you to store UUIDs in your model.
- Base64 and other encodings — types that allow serializing values into an encoded form, e.g. base64.
- Custom Data Types — create your own custom data types.
- Field Type Conversions — strict and lax conversion between different field types.
- Extra Types: types that can be found in the optional Pydantic Extra Types package. These include:
  - Color Types — types that enable you to store RGB color values in your model.
  - Payment Card Numbers — types that enable you to store payment cards such as debit or credit cards.
  - Phone Numbers — types that enable you to store phone numbers in your model.
  - Routing Numbers — types that enable you to store ABA routing transit numbers in your model.
Using custom functions in Power Query – Power Query
If you find yourself in a situation where you need to apply the same set of transformations to different queries or values, creating a custom Power Query function that can be reused as many times as needed can be helpful. A custom Power Query function is a mapping from a set of input values to a single output value, and is created from native M functions and operators.
Although you can manually create your own Power Query custom function using code, as shown in Power Query M Functions Overview, the Power Query user interface offers features to speed up, simplify, and improve the process of creating and managing a custom function.
This article explains this interface, which is only available through the Power Query UI, and how to get the most out of it.
Important!
This article describes how to create a custom function with Power Query using the common transformations available in the Power Query user interface. It covers the basic concepts of creating custom functions, as well as links to additional articles in the Power Query documentation for more information about the specific transformations that are mentioned in this article.
Creating a custom function from a table reference
Note
The following example was created using Power BI Desktop and can also be followed using the Power Query experience in Excel for Windows.
You can follow this example by downloading the sample files used in this article from the following download link. For simplicity, this article will use the folder connector. For more information about the folder connector, see Folder. The purpose of this example is to create a custom function that can be applied to all files in this folder before merging all data from all files into one table.
Start by using the folder connector to navigate to the folder where your files are located and select Transform Data or Edit. This takes you into the Power Query experience. Right-click a Binary value in the Content field and select the Add as New Query option. In this example, the first file in the list is selected, which is the file April 2019.csv.
This option effectively creates a new query with a navigation step directly to that file as a binary, and the name of this new query is the path of the selected file. Rename this query Sample file.
Create a new parameter named File Parameter. Use the Sample file query as the Current Value, as shown in the following figure.
note
We recommend that you read the article Parameters to better understand how to create and manage parameters in Power Query.
Custom functions can be created using any parameter type. It is not required for any custom function to have a binary as a parameter.
The binary parameter type is only displayed in the Type drop-down menu of the Parameters dialog box when there is a query that evaluates to a binary.
A custom function can also be created without a parameter. This is common in scenarios where the input can be inferred from the environment in which the function is called, for example, a function that takes the current date and time from the environment and creates a specific text string from those values.
Right-click the File Parameter in the Queries pane and select Reference.
Rename the newly created query from File Parameter (2) to Transform Sample file.
Right-click this new Transform Sample file query and select the Create Function option.
This operation effectively creates a new function that is linked to the Transform Sample file query. Any changes made to the Transform Sample file query are automatically replicated to the custom function. When creating this new function, use Transform file as the function name.
After creating the function, you will notice that a new group will be created with the name of the function. This new group will contain:
- All parameters referenced in the Transform Sample file query.
- The Transform Sample file query, commonly known as the sample query.
- The newly created function, in this case Transform file.
Apply transformations to sample query
After creating a new function, select the query named Transform Sample file. This query is now linked to the Transform file function, so any changes made to this query are reflected in the function. This is what is known as the concept of a sample query linked to a function.
The first transformation needed on this query is one that interprets the binary. You can right-click the binary in the preview pane and select the CSV option to interpret the binary as a CSV file.
All the CSV files in the folder share the same format. They all have a header spanning the top four rows. The column headers are in row five, and the data starts from row six down, as shown in the following figure.
The next set of transformation steps to apply to the Transform Sample file query are:
Remove the top four rows. This action gets rid of the rows that are considered part of the header section of the file.
Note
For more information about how to delete rows or filter a table by row position, see Filter by Row Position.
Promote the headers. The headers for the final table are now in the first row of the table. You can promote them as shown in the following figure.
After promoting the column headers, Power Query by default automatically adds a new Changed type step that automatically detects the data types for each column. The Transform Sample file query will look like the following image.
Note
For more information about promoting and demoting headers, see Promote or demote column headers.
Warning
The Transform file function relies on the steps performed in the Transform Sample file query. However, if you try to manually modify the code of the Transform file function, you are greeted with a warning that reads: The definition of the function 'Transform file' is updated whenever query 'Transform Sample file' is updated. However, updates will stop if you directly modify function 'Transform file'.
Calling the custom function as a new column
After creating the custom function and applying all the transformation steps, you can go back to the original query where you have the list of files from the folder. On the Add Column tab of the ribbon, select Invoke Custom Function in the General group. In the Invoke Custom Function window, enter Output Table as the new column name. Select the function named Transform file from the Function query drop-down list. After you select the function, the parameter for the function is displayed, and you can select which column from the table to use as the argument for that function. Select the Content column as the value or argument to be passed for the File Parameter.
After you select OK, a new column named Output Table is created. This column contains Table values in its cells, as shown in the following figure. For simplicity, remove all columns from this table except Name and Output Table.
Note
For more information about selecting or removing columns from a table, see Selecting or removing columns.
The function was applied to every row of the table, using the values from the Content column as the argument to the function. Now that the data has been transformed into the shape you need, you can expand the Output Table column, as shown in the following figure, without using a prefix for the expanded columns.
You can verify that you have data from all the files in the folder by checking the values in the Name or Date column. In this case, you can check the values from the Date column, since each file only contains data for a single month of a given year. If you see more than one, it means that you have successfully combined data from multiple files into a single table.
Note
What you have read so far is essentially the same process that occurs during file merging , but is done manually.
We also recommend that you review File Merge Overview and CSV Merge to understand how Power Query file merge works and the role of custom functions.
Adding a new parameter to an existing custom function
Imagine there is a new requirement on top of what you have already created. The new requirement requires that, before you merge the files, you filter the data inside them to get only the rows where the Country is Panama.
To fulfill this requirement, create a new parameter named Market with the Text data type. In the Current Value field, enter the value Panama.
With this new parameter, select the Transform Sample file query and filter the Country field using the value from the Market parameter.
Note
For more information about filtering columns by values, see Filtering Values.
Applying this new step to the query automatically updates the Transform file function, which now requires two parameters based on the two parameters that the Transform Sample file query uses.
But the CSV files query now shows a warning sign. With the function updated, it requires two parameters, so the step where you invoke the function produces error values, because only one of the arguments was passed to the Transform file function during the Invoked Custom Function step.
To fix the errors, double-click Invoked Custom Function in the Applied Steps pane to open the Invoke Custom Function window. In the Market parameter, manually enter the value Panama.
You can now check your query to verify that only rows where the Country is Panama show up in the final result set of the CSV files query.
Creating a custom function from reusable pieces of logic
If you have multiple queries or values that require the same set of transformations, you can create a custom function that acts as a reusable piece of logic. Later, this custom function can be invoked against the queries or values of your choice. This custom function can save you time and help you manage your set of transformations in a central location, which you can modify at any time.
For example, imagine a query that has several codes as a text string, and you want to create a function that will decode those values, as shown in the following example table:

| code |
| --- |
| PTY-CM1090-LAX |
| LAX-CM701-PTY |
| PTY-CM4441-MIA |
| MIA-UA1257-LAX |
| LAX-XY2842-MIA |
You start by having a parameter with a value that serves as an example. For this case, it will be the value PTY-CM1090-LAX.
From that parameter, you create a new query where you apply the transformations you need. In this case, you want to split the code PTY-CM1090-LAX into multiple components:
- Origin = PTY
- Destination = LAX
- Airline = CM
- FlightID = 1090
Below is the M code for this set of transformations.
let
    Source = code,
    SplitValues = Text.Split(Source, "-"),
    CreateRow = [Origin = SplitValues{0}, Destination = SplitValues{2}, Airline = Text.Start(SplitValues{1}, 2), FlightID = Text.End(SplitValues{1}, Text.Length(SplitValues{1}) - 2)],
    RowToTable = Table.FromRecords({CreateRow}),
    #"Changed Type" = Table.TransformColumnTypes(RowToTable, {{"Origin", type text}, {"Destination", type text}, {"Airline", type text}, {"FlightID", type text}})
in
    #"Changed Type"
Note
For more information about the Power Query M formula language, see Power Query M formula language.
Finally, you can invoke the custom function against any of your queries or values, as shown in the following figure.
After a few conversions, you’ll see that you’ve reached the desired output and used the logic for that conversion from a user-defined function.
Chapter 5. What you need to know about squeeze
Contents
- 5.1. Possible problems
  - 5.1.1. Migration of disk drivers from IDE to PATA subsystem
  - 5.1.2. Due to the mdadm metadata format change, the latest version of Grub is required
  - 5.1.3. Xen upgrade
  - 5.1.4. The pam_userdb.so library is not compatible with the latest libdb
  - 5.1.5. Potential problems due to changing /bin/sh
  - 5.1.6. Change in kernel policy regarding resource conflicts
- 5.2. LDAP support
- 5.3. The sieve service moved to its IANA-allocated port
- 5.4. Security status of web browsers
- 5.5. KDE desktop
  - 5.5.1. Upgrading from KDE 3
  - 5.5.2. New KDE metapackages
- 5.6. GNOME desktop support and changes
  - 5.6.1. GDM 2.20 and 2.30
  - 5.6.2. Devices and other administrative rights
  - 5.6.3. Interaction between network-manager and ifupdown
- 5.7. Graphics Stack Changes
  - 5.7.1. Legacy Xorg drivers
  - 5.7.2. Kernel mode setting
  - 5.7.3. Input device hotplugging
  - 5.7.4. X server "zapping"
- 5.8. Changing the munin web path
- 5.9. shorewall upgrade instructions
5.1. Possible problems
Sometimes changes made in a new release have side effects,
which cannot be avoided without introducing errors somewhere else. This section
describes problems that are already known to us. Please also read the list of known bugs, the relevant package documentation, bug reports, and other information mentioned in Section 6.1, "More Reading".
5.1.1. Migration of disk drivers from IDE to PATA subsystem
The new Linux kernel version provides different drivers for some PATA (IDE) controllers. The names of some hard disk, CD-ROM and tape devices may change.
It is now recommended that configuration files refer to disk devices by label or UUID (unique identifier) rather than by device name, since this works with both old and new kernel versions. When upgrading to the squeeze version of the Debian kernel packages, the linux-base package will offer to perform this conversion automatically in most file-system-related configuration files, including the various bootloaders included in Debian. If you choose not to update the system configuration automatically, or if you are not using the Debian kernel packages, you must update the device IDs yourself before the next system restart to ensure the system remains bootable.
5.1.2. Due to the mdadm metadata format change, the latest version of Grub is required
The following applies only to users who want the grub-pc bootloader to load the kernel directly from a RAID device created with mdadm 3.x and default values, or when the metadata version is explicitly set with -e. In particular, this applies to arrays created during the installation of Debian squeeze or later. It does not apply to arrays created by older versions of mdadm, or to RAID created with the command line option -e 0.9.
Versions of grub-pc older than 1.98+20100720-1 are unable to boot directly from a RAID with the 1.x metadata formats (the new default is 1.2). To make sure the system boots, use grub-pc 1.98+20100720-1 or a later version provided by Debian squeeze. An unbootable system can be rescued with Super Grub2 Disk or grml.
5.1.3. Xen upgrade
If Xen was installed on lenny, the kernel booted by default by GRUB Legacy provided the Xen hypervisor and dom0 support. This behavior has changed with GRUB 2 in squeeze: by default, a non-Xen kernel is booted. If you need Xen and want it to be booted by default, see the hints at http://wiki.debian.org/Xen#Installationandconfiguration for adjusting the configuration.
Upgrading from lenny will not install Xen 4.0. You must install the package xen-linux-system-2.6-xen-amd64 or xen-linux-system-2.6-xen-686 yourself, depending on what you need; this pulls in the Xen hypervisor and a suitable dom0 kernel and will make upgrades easier in the future.
The 2.6.32 Xen kernel in squeeze uses pvops instead of the Xenlinux patch. This means that in your domU, squeeze will not allow you to use (for example) sda1 as a device name for a hard drive, since this naming scheme is not available with pvops. Instead, you should use (for the same example) xvda1, which is compatible with both old and new Xen kernels.
5.1.4. The pam_userdb.so library is not compatible with the latest libdb
Some Berkeley Database version 7 files created with libdb3 cannot be read by newer versions of libdb (see bug report #521860). As a workaround, such files can be recreated with db4.8_load from the db4.8-util package.
5.1.5. Potential problems due to changing /bin/sh
If you previously added a local diversion of /bin/sh, or changed the /bin/sh symlink to point to something other than /bin/bash, you may encounter problems when upgrading the dash or bash packages. Note that this also applies to changes made by other packages (e.g. mksh) that make themselves the default system shell by diverting /bin/sh.
If you encounter such problems, remove the local diversion and make sure the symlinks for /bin/sh and its manual page point to the files provided by the bash package, and then run the command dpkg-reconfigure --force dash.
dpkg-divert --remove /bin/sh
dpkg-divert --remove /usr/share/man/man1/sh.1.gz
ln -sf bash /bin/sh
ln -sf bash.1.gz /usr/share/man/man1/sh.1.gz
5.1.6. Change in kernel policy regarding resource conflicts
The default value of the acpi_enforce_resources parameter in the Linux kernel has changed; it is now "strict". This can prevent some existing drivers for older sensors from accessing the sensor hardware. One workaround is to add "acpi_enforce_resources=lax" to the kernel command line.
5.2. LDAP support
A feature in the cryptographic libraries used by the LDAP libraries causes programs that use LDAP and try to change their effective privileges to fail when connecting to an LDAP server using TLS or SSL. This can cause problems for suid programs on systems using libnss-ldap, such as sudo, su or schroot, and for suid programs that perform LDAP lookups, such as sudo-ldap.
It is recommended to replace the libnss-ldap package with libnss-ldapd, a newer library that uses a separate daemon (nslcd) for all LDAP lookups. The replacement for libpam-ldap is libpam-ldapd.
Note that libnss-ldapd recommends the NSS caching daemon (nscd), whose suitability for your environment you should evaluate before installing it. As an alternative to nscd, consider unscd.
See #566351 and #545414 for further information.
5.3. The sieve service moved to its IANA-allocated port
IANA has allocated port 4190/tcp for ManageSieve, and the old port used by timsieved and other ManageSieve software in many distributions (2000/tcp) has been assigned to Cisco SCCP (according to the IANA registry).
Starting from version 4.38 of the Debian netbase package, the sieve service will be moved from port 2000 to port 4190 in the /etc/services file.
Any installation that uses the sieve service name instead of a numeric port number will switch to the new port number as soon as its services are restarted or reloaded and, in some cases, immediately after /etc/services is updated.
This will affect Cyrus IMAP. It may also affect other sieve-enabled software, such as DoveCot.
To avoid downtime problems, mail cluster administrators using Debian are strongly urged to check their Cyrus (and possibly also DoveCot) installations and take measures to avoid services unexpectedly moving from port 2000/tcp to port 4190/tcp for either servers or clients.
It is worth noting that:
- If you have never modified the /etc/services file, it will simply be updated automatically. Otherwise, dpkg will show a prompt asking you about the changes.
- Optionally, you can edit /etc/services and change the sieve port back to 2000 (although this is not recommended).
- You can edit /etc/cyrus.conf and any other relevant configuration files for your mail/webmail cluster (for example, sieve web front-ends) ahead of time to force them to a static port number.
- You can configure cyrus master to listen on both ports (2000 and 4190) at the same time and thereby avoid the problem entirely. This also allows a smoother migration from port 2000 to port 4190.
5.4. Security status of web browsers
Debian 6.0 includes several browser engines that are affected by a large number of security vulnerabilities. Because of this, and because upstream support for older versions is partly lacking, it is very difficult to backport security fixes to those versions. Additionally, library interdependencies make it impossible to update to newer upstream versions. Therefore, browsers built upon the qtwebkit and khtml engines are included in squeeze, but not covered by full security support. We will make an effort to track and backport security fixes, but these browsers are better not used against untrusted websites.
For everyday use we recommend browsers built on the Mozilla xulrunner engine (Iceweasel and Iceape), as well as browsers based on the Webkit engine (e.g. Epiphany) or Chromium. In previous releases, several xulrunner version changes were successfully backported to the stable release.
Chromium, while built upon the Webkit codebase, is packaged separately, i.e. if backporting a new version becomes impossible, it can be upgraded to the latest upstream version (which is not possible for the webkit library itself). One version of webkit has long-term upstream support.
5.5. KDE desktop
Squeeze is the first Debian release to ship full support for the next generation of KDE, which is based on Qt 4. Most official KDE applications are at version 4.4.5, with the exception of kdepim, which is at version 4.4.7. You can read more about the changes in the KDE project announcement.
5.5.1. Upgrading from KDE 3
The KDE 3 desktop environment is no longer supported in Debian 6.0. It will be automatically replaced by the new 4.4 series on upgrade. As this is a major change, users should take some precautions to ensure the upgrade process goes as smoothly as possible.
Important: It is not recommended to upgrade while there is an active KDE 3 session on the system.
When logging in to the upgraded system for the first time, existing users will be offered the Debian-KDE guided migration wizard, called kaboom, which helps migrate the user's personal data and, if desired, back up their old KDE settings. For more information, see the Kaboom homepage.
Although the KDE 3 based desktop environment is no longer supported, users can still install and use individual KDE 3 applications, since the core libraries and binaries of KDE 3 (kdelibs) and Qt 3 are still available in Debian 6.0. However, note that these applications might not be well integrated into the new environment. Moreover, neither KDE 3 nor Qt 3 will be supported in any form in the next Debian release, so if you use them, you are strongly advised to port your software to the new platform.
5.5.2. New KDE metapackages
As noted earlier, Debian 6.0 provides a new set of KDE-related metapackages:
- It is strongly recommended to install the kde-standard package for normal desktop use. kde-standard will pull in the KDE Plasma Desktop by default, together with a selected set of the most commonly used applications.
- If you want a minimal desktop, you can install the kde-plasma-desktop package and manually select the applications you need. This is roughly the equivalent of the kde-minimal package shipped in Debian 5.0.
- For small form-factor devices, there is an alternative environment called KDE Plasma Netbook, which can be installed with the kde-plasma-netbook package. Plasma Netbook and Plasma Desktop can be installed on the same system at the same time, and the default can be changed in System Settings (the replacement for the former KControl).
- If you want the complete set of official KDE applications, you can install the kde-full package. It will install KDE Plasma Desktop by default.
5.6. GNOME desktop support and changes
There have been many changes in the GNOME desktop environment in squeeze compared to the version shipped in lenny. You can find more information in the GNOME 2.30 release notes. Specific issues are listed below.
5.6.1. GDM 2.20 and 2.30
Systems upgraded from lenny keep the old version of the GNOME Display Manager (GDM), 2.20. This version will still be maintained throughout the squeeze cycle, but this is the last release to do so. Newly installed systems will get GDM 2.30 instead, provided by the gdm3 package. Because of incompatibilities between the two versions, this upgrade is not automatic, but it is recommended to install gdm3 after upgrading to squeeze. This should be done from the console, or with only one open GNOME session. Note that GDM 2.20 settings will not be migrated. For a standard desktop system, however, simply installing gdm3 should be sufficient.
5.6.2. Devices and other administrative rights
Specific device permissions are granted automatically to the user currently logged in: video and audio devices, network roaming, power management, device mounting. The cdrom, floppy, audio, video, plugdev and powerdev groups are no longer considered useful. For more on this, read the consolekit documentation.
Most graphical programs requiring root rights now rely on PolicyKit for that, instead of gksu. The recommended way to give a user administrative rights is to add them to the sudo group.
5.6.3. Interaction between network-manager and ifupdown
When the network-manager package is upgraded, interfaces configured in /etc/network/interfaces to use DHCP with no other options will be disabled in that file and handled by NetworkManager instead. As a consequence, the ifup and ifdown commands will no longer work for them. These interfaces can instead be managed using the NetworkManager front-ends; see the NetworkManager documentation for details.
Conversely, any interfaces configured in /etc/network/interfaces with more options will be ignored by NetworkManager. This applies in particular to wireless interfaces used during the installation of Debian (see bug #606268).
5.7. Graphics Stack Changes
There have been some changes to the X stack in Debian 6.0. This section lists the most important ones that are visible to users.
5.7.1. Legacy Xorg drivers
The cyrix, imstt, sunbw2 and vga Xorg drivers are no longer provided. Users should switch to a generic driver such as vesa or fbdev instead.
The old via driver is no longer maintained and has been replaced by the openchrome driver, which will be used automatically after the upgrade.
The nv and radeonhd drivers are still present in this release, but are deprecated. Users should consider the nouveau and radeon drivers instead, respectively.
Support for the calcomp, citron, digitaledge, dmc, dynapro, elo2300, fpit, hyperpen, jamstudio, magellan, microtouch, mutouch, palmax, spaceorb, summa, tek4957 and ur98 X input drivers has been discontinued and they are not included in this release. Users of these devices might need to switch to a suitable kernel driver and the evdev X driver. For many serial devices, the inputattach utility allows them to be attached as Linux input devices which can be recognized by the evdev X driver.
5.7.2. Kernel mode setting
The kernel drivers for Intel (starting with the i830), ATI/AMD (from the original Radeon to the Radeon HD 5xxx "Evergreen" series) and NVIDIA graphics chipsets now support kernel mode setting natively.
Support for the old style of userspace mode setting is deprecated in the intel X driver, which requires a recent kernel. Users of other kernels should make sure their configuration contains CONFIG_DRM_I915_KMS=y.
5.7.3. Input device hotplugging
The Xorg X server shipped in Debian 6.0 provides improved support for hotplugging of input devices (mice, keyboards, tablets, ...). The old xserver-xorg-input-kbd and xserver-xorg-input-mouse packages are replaced by xserver-xorg-input-evdev, which requires a kernel with the CONFIG_INPUT_EVDEV option enabled. In addition, some of the keycodes produced by this driver differ from those traditionally associated with the same keys. Users of programs such as xmodmap and xbindkeys will need to adjust their configurations for the new keycodes.
5.7.4. X server "zapping"
Traditionally, the Ctrl+Alt+Backspace key combination terminated the X server process. This combination is no longer active by default, but it can be re-enabled by reconfiguring the (system-wide) keyboard-configuration package, or by using your desktop environment's keyboard settings application.
5.8. Changing the munin web path
In squeeze, the default location for the web content generated by munin was changed from /var/www/munin to /var/cache/munin/www, so /etc/munin/munin.conf needs to be adapted accordingly if it was changed by the administrator. If you are upgrading, read /usr/share/doc/munin/NEWS.