
1.7: On “alias” and “alibi”. The Object Group


It is fitting to conclude this review of algebraic preliminaries by formulating a rule that is to guide us in connecting the group theoretical concepts with physical principles.

One of the concerns of physicists is to observe, identify and classify particles. Pursuing this objective we should be able to tell whether we observe the same object when encountered under different conditions in different states. Thus the identity of an object is implicitly given by the set of states in which we recognize it to be the same. It is plausible to consider the transformations which connect these states with each other, and to assume that they form a group. Accordingly, a precise way of identifying an object is to specify an associated object group.

The concept of object group is extremely general, as it should be, in view of the vast range of situations it is meant to cover. It is useful to consider specific situations in more detail.

First, the same object may be observed by different inertial observers whose findings are connected by the transformations of the inertial group, to be called also the passive kinematic group. Second, the space-time evolution of the object in a fixed frame of reference can be seen as generated by an active kinematic group. Finally, if the object is specified in phase space, we speak of the dynamic group.

The fact that linear transformations in a vector space can be given a passive and an active interpretation is well known. In the mathematical literature these are sometimes designated by the colorful terms “alias” and “alibi,” respectively. The first means that the transformation of the basis leads to new “names” for the same geometrical, or physical, objects. The second is a mapping by which the object is transformed to another “location” with respect to the same frame.
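In matrix language the distinction can be made explicit (a brief sketch; the notation here is ours, not the author's). Let a vector $v$ have components $a_i$ in a basis $\{e_i\}$, and let $S$ be an invertible matrix:

    \text{alias:}\quad e_j' = \sum_i e_i\, S_{ij}, \qquad a' = S^{-1} a \qquad \text{(same vector, new components)}

    \text{alibi:}\quad v \mapsto S v \qquad \text{(same basis, a new vector)}

The alias reading renames the same object; the alibi reading moves the object while the frame stays fixed.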

The important groups of invariance are to be classified as passive groups. Without in any way minimizing their importance, we shall give much attention also to the active groups. This will enable us to handle, within a unified group-theoretical framework, situations commonly described in terms of equations of motion, and also the so-called “preparations of systems” so important in quantum mechanics.

It is the systematic joint use of “alibi” and “alias” that characterizes the following argument.


Using Painless in Kibana scripted fields

Kibana provides powerful ways to search and visualize data stored in Elasticsearch. For the purpose of visualizations, Kibana looks for fields defined in Elasticsearch mappings and presents them as options to the user building a chart. But what happens if you forget to define an important value as a separate field in your schema? Or what if you want to combine two fields and treat them as one? This is where Kibana scripted fields come into play.

Scripted fields have actually been around since the early days of Kibana 4. At the time they were introduced, the only way to define them relied on Lucene Expressions, a scripting language in Elasticsearch which deals exclusively with numeric values. As a result, the power of scripted fields was limited to a subset of use cases. In 5.0, Elasticsearch introduced Painless, a safe and powerful scripting language that allows operating on a variety of data types, and as a result, scripted fields in Kibana 5.0 are that much more powerful.

In the rest of this blog, we'll walk you through how to create scripted fields for common use cases. We'll do so by relying on a dataset from the Kibana Getting Started tutorial and an instance of Elasticsearch and Kibana running in Elastic Cloud, which you can spin up for free.

The following video walks you through how to spin up a personal Elasticsearch and Kibana instance in Elastic Cloud and load a sample dataset into it.


The acos() function computes the principal value of the arc cosine of __x. The returned value is in the range [0, pi] radians. A domain error occurs for arguments not in the range [-1, +1].

The asin() function computes the principal value of the arc sine of __x. The returned value is in the range [-pi/2, pi/2] radians. A domain error occurs for arguments not in the range [-1, +1].
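The domain restriction is easy to check interactively. A quick sketch using Python's math module, which mirrors these C-library semantics (Python raises ValueError where the C functions signal a domain error):

    >>> import math
    >>> math.acos(1.0), math.acos(-1.0)   # endpoints of the valid range
    (0.0, 3.141592653589793)
    >>> math.acos(1.5)                    # outside [-1, +1]: domain error
    Traceback (most recent call last):
      ...
    ValueError: math domain error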

The atan() function computes the principal value of the arc tangent of __x. The returned value is in the range [-pi/2, pi/2] radians.

The atan2() function computes the principal value of the arc tangent of __y / __x, using the signs of both arguments to determine the quadrant of the return value. The returned value is in the range [-pi, +pi] radians.

The cbrt() function returns the cube root of __x.

The ceil() function returns the smallest integral value greater than or equal to __x, expressed as a floating-point number.

The copysign() function returns __x but with the sign of __y. It works even if __x or __y is NaN or zero.
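The zero case is the subtle one, since negative zero carries a sign bit. A quick sketch with Python's math module, which mirrors these C semantics:

    >>> import math
    >>> math.copysign(3.0, -0.0)   # the sign bit of negative zero is honored
    -3.0
    >>> math.copysign(3.0, 0.0)
    3.0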

The cos() function returns the cosine of __x, measured in radians.

The cosh() function returns the hyperbolic cosine of __x.

The exp() function returns the exponential value of __x.

The fabs() function computes the absolute value of a floating-point number __x.

The fdim() function returns max(__x - __y, 0). If __x or __y or both are NaN, NaN is returned.

The floor() function returns the largest integral value less than or equal to __x, expressed as a floating-point number.

The fma() function performs floating-point multiply-add. This is the operation (__x * __y) + __z, but the intermediate result is not rounded to the destination type. This can sometimes improve the precision of a calculation.

The fmax() function returns the greater of the two values __x and __y. If an argument is NaN, the other argument is returned. If both arguments are NaN, NaN is returned.

The fmin() function returns the lesser of the two values __x and __y. If an argument is NaN, the other argument is returned. If both arguments are NaN, NaN is returned.

The function fmod() returns the floating-point remainder of __x / __y.

The frexp() function breaks a floating-point number into a normalized fraction and an integral power of 2. It stores the integer in the int object pointed to by __pexp.

If __x is a normal floating-point number, the frexp() function returns the value v such that v has a magnitude in the interval [1/2, 1) or zero, and __x equals v times 2 raised to the power __pexp. If __x is zero, both parts of the result are zero. If __x is not a finite number, frexp() returns __x as is and stores 0 in the object pointed to by __pexp.

Note: This implementation permits a null pointer as a directive to skip storing the exponent.

The hypot() function returns sqrt(__x*__x + __y*__y). This is the length of the hypotenuse of a right triangle with sides of length __x and __y, or the distance of the point (__x, __y) from the origin. Using this function instead of the direct formula is wise, since the error is much smaller: it does not underflow for small __x and __y, and it does not overflow if the result is in range.
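Both the quadrant handling of atan2() and the overflow safety of hypot() can be seen with concrete values. A sketch using Python's math module, which exposes the same C-library semantics:

    >>> import math
    >>> math.atan(1.0 / -1.0)       # plain atan cannot tell which quadrant
    -0.7853981633974483
    >>> math.atan2(1.0, -1.0)       # the signs of both arguments select quadrant II
    2.356194490192345
    >>> math.hypot(3e200, 4e200)    # the naive sqrt(x*x + y*y) would overflow here
    5e+200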

The isfinite() function returns a nonzero value if __x is finite: not plus or minus infinity, and not NaN.

The function isinf() returns 1 if the argument __x is positive infinity, -1 if __x is negative infinity, and 0 otherwise.

Note: GCC 4.3 can replace this function with inline code that returns 1 for both infinities (GCC bug #35509).

The function isnan() returns 1 if the argument __x represents a "not-a-number" (NaN) object, otherwise 0.

The ldexp() function multiplies a floating-point number by an integral power of 2. It returns the value of __x times 2 raised to the power __exp.
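Since ldexp() inverts frexp(), the pair round-trips exactly. A sketch with Python's math module, which mirrors these C semantics:

    >>> import math
    >>> m, e = math.frexp(12.0)   # 12.0 == 0.75 * 2**4
    >>> (m, e)
    (0.75, 4)
    >>> math.ldexp(m, e)          # multiply the fraction back by 2**e
    12.0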

The log() function returns the natural logarithm of argument __x.

The log10() function returns the logarithm of argument __x to base 10.

The lrint() function rounds __x to the nearest integer, rounding halfway cases toward the nearest even integer (that is, both 1.5 and 2.5 are rounded to 2). This function is similar to the rint() function, but differs in its return type and in that an overflow is possible.

Returns: The rounded long integer value. If __x is not a finite number or an overflow occurred, this implementation returns LONG_MIN (0x80000000).

The lround() function rounds __x to the nearest integer, but rounds halfway cases away from zero (instead of to the nearest even integer). This function is similar to the round() function, but differs in its return type and in that an overflow is possible.

Returns: The rounded long integer value. If __x is not a finite number or an overflow occurred, this implementation returns LONG_MIN (0x80000000).
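The two tie-breaking rules can be illustrated in Python, whose built-in round() rounds halfway cases to even like lrint(); rounding away from zero like lround() can be sketched from floor() and copysign():

    >>> import math
    >>> round(1.5), round(2.5)    # ties to even, as in lrint()
    (2, 2)
    >>> def lround_like(x):
    ...     # Ties away from zero, as in lround(); a sketch, not the C function.
    ...     return int(math.copysign(math.floor(abs(x) + 0.5), x))
    ...
    >>> lround_like(2.5), lround_like(-2.5)
    (3, -3)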

The modf() function breaks the argument __x into integral and fractional parts, each of which has the same sign as the argument. It stores the integral part as a double in the object pointed to by __iptr.

The modf() function returns the signed fractional part of __x.

Note: This implementation skips the store when a null pointer is passed. However, GCC 4.3 can replace this function with inline code that does not permit a NULL address to be used to skip the store.
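A quick check of the sign behavior with Python's math module (same semantics, except that Python returns the integral part instead of storing it through a pointer):

    >>> import math
    >>> math.modf(-3.25)   # both parts carry the argument's sign
    (-0.25, -3.0)
    >>> math.modf(3.25)
    (0.25, 3.0)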


List of Data Models

  • Added WWC (5G Wireline Wireless Convergence), PDU (Protocol Data Unit) and FWE (5G Wireline Wireless Encapsulation) top-level objects
  • Updated Cellular object to be applicable to 5G Residential Gateways
  • Extended support for TR-471 IP-layer metrics, including new IP-layer capacity test
  • Supported LAN device time-based access-control
  • Various Wi-Fi improvements

November 2020: Corrigendum 1

  • Added parameters for 3GPP SA5 Rel 11 and 12 (TS 32.452, TS 32.453)
  • Added parameters for 3GPP SA5 Rel 13 (CR S5-145293)
  • Added parameters for 3GPP SA5 Rel 13 (CR S5-146268)

September 2019: Corrigendum 1

Supported 3GPP releases 9 and 10



Math Functions

We’ll be using the following model in math function examples:
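(A minimal sketch; the class and field names are illustrative assumptions.)

    from django.db import models

    class Vector(models.Model):
        # Two numeric columns used by the function examples that follow.
        x = models.FloatField()
        y = models.FloatField()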

Abs

Returns the absolute value of a numeric field or expression.
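For instance, a usage sketch assuming the hypothetical Vector model above (exact float formatting may vary by database backend):

    >>> from django.db.models.functions import Abs
    >>> Vector.objects.create(x=-0.9, y=3.0)
    >>> vector = Vector.objects.annotate(x_abs=Abs("x")).get()
    >>> print(vector.x_abs)
    0.9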

It can also be registered as a transform. For example:
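A sketch of the registration, which then enables an __abs lookup on FloatField columns (assuming the model above):

    >>> from django.db.models import FloatField
    >>> from django.db.models.functions import Abs
    >>> FloatField.register_lookup(Abs)
    >>> # Vectors whose absolute x value is below 1:
    >>> vectors = Vector.objects.filter(x__abs__lt=1.0)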

ACos

Returns the arccosine of a numeric field or expression. The expression value must be within the range -1 to 1.

It can also be registered as a transform, following the same pattern as Abs above.

ASin

Returns the arcsine of a numeric field or expression. The expression value must be in the range -1 to 1.

It can also be registered as a transform, following the same pattern as Abs above.

ATan

Returns the arctangent of a numeric field or expression.

It can also be registered as a transform, following the same pattern as Abs above.

ATan2

Returns the arctangent of expression1 / expression2.
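A usage sketch assuming the hypothetical Vector model above (the exact digits may vary by database backend):

    >>> from django.db.models.functions import ATan2
    >>> Vector.objects.create(x=1.0, y=-1.0)
    >>> vector = Vector.objects.annotate(atan2=ATan2("x", "y")).get()
    >>> print(vector.atan2)
    2.356194490192345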

Ceil

Returns the smallest integer greater than or equal to a numeric field or expression.

It can also be registered as a transform, following the same pattern as Abs above.

Cos

Returns the cosine of a numeric field or expression.

It can also be registered as a transform, following the same pattern as Abs above.

Cot

Returns the cotangent of a numeric field or expression.

It can also be registered as a transform, following the same pattern as Abs above.

Degrees

Converts a numeric field or expression from radians to degrees.

It can also be registered as a transform, following the same pattern as Abs above.

Exp

Returns the value of e (the natural logarithm base) raised to the power of a numeric field or expression.

It can also be registered as a transform, following the same pattern as Abs above.

Floor

Returns the largest integer value not greater than a numeric field or expression.

It can also be registered as a transform, following the same pattern as Abs above.

Ln

Returns the natural logarithm of a numeric field or expression.

It can also be registered as a transform, following the same pattern as Abs above.

Log

Accepts two numeric fields or expressions and returns the logarithm of the first to base of the second.

Mod

Accepts two numeric fields or expressions and returns the remainder of the first divided by the second (modulo operation).

Pi

Returns the value of the mathematical constant π.

Power

Accepts two numeric fields or expressions and returns the value of the first raised to the power of the second.

Radians

Converts a numeric field or expression from degrees to radians.

It can also be registered as a transform, following the same pattern as Abs above.

Random

Returns a random value in the range 0.0 ≤ x < 1.0.

Round

Rounds a numeric field or expression to the nearest integer. Whether half values are rounded up or down depends on the database.

It can also be registered as a transform, following the same pattern as Abs above.

Sign

Returns the sign (-1, 0, 1) of a numeric field or expression.

It can also be registered as a transform, following the same pattern as Abs above.

Sin

Returns the sine of a numeric field or expression.

It can also be registered as a transform, following the same pattern as Abs above.

Sqrt

Returns the square root of a nonnegative numeric field or expression.

It can also be registered as a transform, following the same pattern as Abs above.

Tan

Returns the tangent of a numeric field or expression.

It can also be registered as a transform, following the same pattern as Abs above.


Extensions

Spock comes with a powerful extension mechanism, which allows you to hook into a spec’s lifecycle to enrich or alter its behavior. In this chapter, we will first learn about Spock’s built-in extensions, and then dive into writing custom extensions.

Spock Configuration File

Some extensions can be configured with options in a Spock configuration file. The description of each extension will mention how it can be configured. All those configurations live in a Groovy file that is usually called SpockConfig.groovy. Spock first searches for a custom location given in a system property called spock.configuration, which is used either as a classpath location or, if not found there, as a file system location; otherwise the default locations are investigated for a configuration file. Next it searches for SpockConfig.groovy in the root of the test execution classpath. If there is no such file either, you can finally have a SpockConfig.groovy in your Spock user home. By default this is the directory .spock within your home directory, but it can be changed using the system property spock.user.home or, if that is not set, the environment variable SPOCK_USER_HOME.

Stack Trace Filtering

You can configure whether Spock should filter stack traces by using the configuration file. The default value is true.

Built-In Extensions

Most of Spock’s built-in extensions are annotation-driven. In other words, they are triggered by annotating a spec class or method with a certain annotation. You can tell such an annotation by its @ExtensionAnnotation meta-annotation.

Ignore

To temporarily prevent a feature method from getting executed, annotate it with spock.lang.Ignore:

For documentation purposes, a reason can be provided:

To ignore a whole specification, annotate its class:

In most execution environments, ignored feature methods and specs will be reported as "skipped".

Care should be taken when ignoring feature methods in a spec class annotated with spock.lang.Stepwise since later feature methods may depend on earlier feature methods having executed.

IgnoreRest

To ignore all but a (typically) small subset of methods, annotate the latter with spock.lang.IgnoreRest:

@IgnoreRest is especially handy in execution environments that don’t provide an (easy) way to run a subset of methods.

Care should be taken when ignoring feature methods in a spec class annotated with spock.lang.Stepwise since later feature methods may depend on earlier feature methods having executed.

IgnoreIf

To ignore a feature method under certain conditions, annotate it with spock.lang.IgnoreIf, followed by a predicate:

To make predicates easier to read and write, the following properties are available inside the closure:

  • sys: A map of all system properties
  • env: A map of all environment variables
  • os: Information about the operating system (see spock.util.environment.OperatingSystem)
  • jvm: Information about the JVM (see spock.util.environment.Jvm)

Using the os property, the previous example can be rewritten as:

Care should be taken when ignoring feature methods in a spec class annotated with spock.lang.Stepwise since later feature methods may depend on earlier feature methods having executed.

Requires

To execute a feature method under certain conditions, annotate it with spock.lang.Requires, followed by a predicate:

Requires works exactly like IgnoreIf, except that the predicate is inverted. In general, it is preferable to state the conditions under which a method gets executed, rather than the conditions under which it gets ignored.

PendingFeature

To indicate that a feature is not fully implemented yet and should not be reported as an error, annotate it with spock.lang.PendingFeature.

The use case is to annotate tests that cannot yet run but should already be committed. The main difference from Ignore is that the tests are executed, but failures are ignored. If the test passes without an error, it will be reported as a failure, since the PendingFeature annotation should then be removed. This way the tests become part of the normal test suite instead of being ignored forever.

Groovy has the groovy.transform.NotYetImplemented annotation, which is similar but behaves differently:

  • it will mark failing tests as passed
  • if at least one iteration of a data-driven test passes, it will be reported as an error

@PendingFeature, by contrast:

  • it will mark failing tests as skipped
  • if at least one iteration of a data-driven test fails, it will be reported as skipped
  • if every iteration of a data-driven test passes, it will be reported as an error

Stepwise

To execute features in the order that they are declared, annotate a spec class with spock.lang.Stepwise:

Stepwise only affects the class carrying the annotation, not sub- or superclasses. Features after the first failure are skipped.

Stepwise does not override the behaviour of annotations such as Ignore, IgnoreRest, and IgnoreIf, so care should be taken when ignoring feature methods in spec classes annotated with Stepwise.

Timeout

To fail a feature method, fixture, or class that exceeds a given execution duration, use spock.lang.Timeout, followed by a duration and optionally a time unit. The default time unit is seconds.

When applied to a feature method, the timeout is per execution of one iteration, excluding time spent in fixture methods:

Applying Timeout to a spec class has the same effect as applying it to each feature that is not already annotated with Timeout, excluding time spent in fixtures:

When applied to a fixture method, the timeout is per execution of the fixture method.

When a timeout is reported to the user, the stack trace shown reflects the execution stack of the test framework when the timeout was exceeded.

Retry

The @Retry extension can be used for flaky integration tests, where remote systems sometimes fail. By default it retries an iteration 3 times, with no delay, if either an Exception or an AssertionError has been thrown; all of this is configurable. In addition, an optional condition closure can be used to determine whether a feature should be retried. It also provides special support for data-driven features, offering to retry either all iterations or just the failing ones.

Retries can also be applied to spec classes, which has the same effect as applying it to each feature method that isn’t already annotated with @Retry.

A @Retry annotation declared on a spec class is applied to all features in all subclasses as well, unless a subclass declares its own annotation. In that case, the retries defined in the subclass are applied to all feature methods declared in the subclass as well as inherited ones.

Given the following example, running FooIntegrationSpec will execute both inherited and foo with one retry. Running BarIntegrationSpec will execute inherited and bar with two retries.

To activate one or more Groovy categories within the scope of a feature method or spec, use spock.util.mop.Use:

This can be useful for stubbing dynamic methods, which are usually provided by the runtime environment (e.g. Grails). It has no effect when applied to a helper method. However, when applied to a spec class, it will also affect its helper methods.

ConfineMetaClassChanges

To confine meta class changes to the scope of a feature method or spec class, use spock.util.mop.ConfineMetaClassChanges:

When applied to a spec class, the meta classes are restored to the state that they were in before setupSpec was executed, after cleanupSpec is executed.

When applied to a feature method, the meta classes are restored to as they were after setup was executed, before cleanup is executed.

RestoreSystemProperties

Saves system properties before the annotated feature method (including any setup and cleanup methods) gets run, and restores them afterwards.

Applying this annotation to a spec class has the same effect as applying it to all its feature methods.

AutoAttach

Automatically attaches a detached mock to the current Specification. Use this if there is no direct framework support available. Spring and Guice dependency injection is automatically handled by the Spring Module and Guice Module, respectively.

AutoCleanup

Automatically clean up a field or property at the end of its lifetime by using spock.lang.AutoCleanup.

By default, an object is cleaned up by invoking its parameterless close() method. If some other method should be called instead, override the annotation’s value attribute:

If multiple fields or properties are annotated with AutoCleanup, their objects are cleaned up sequentially, in reverse field/property declaration order, starting from the most derived class and walking up the inheritance chain.

If a cleanup operation fails with an exception, the exception is reported by default, and cleanup proceeds with the next annotated object. To prevent cleanup exceptions from being reported, override the annotation’s quiet attribute:

Title and Narrative

To attach a natural-language name to a spec, use spock.lang.Title:

Similarly, to attach a natural-language description to a spec, use spock.lang.Narrative:

To link to one or more references to external information related to a specification or feature, use spock.lang.See:

Issue

To indicate that a feature or spec relates to one or more issues in an external tracking system, use spock.lang.Issue:

If you have a common prefix URL for all issues in a project, you can use the Spock Configuration File to set it up for all at once. If it is set, it is prepended to the value of the @Issue annotation when building the URL.

If the issueNamePrefix is set, it is prepended to the value of the @Issue annotation when building the name for the issue.

Subject

To indicate one or more subjects of a spec, use spock.lang.Subject:

Additionally, Subject can be applied to fields and local variables:

Subject currently serves informational purposes only.

Rule

Spock understands @org.junit.Rule annotations on non-@Shared instance fields. The corresponding rules are run at the iteration interception point in the Spock lifecycle. This means that the rules' before-actions are done before the execution of setup methods, and their after-actions are done after the execution of cleanup methods.

ClassRule

Spock understands @org.junit.ClassRule annotations on @Shared fields. The corresponding rules are run at the specification interception point in the Spock lifecycle. This means that the rules' before-actions are done before the execution of setupSpec methods, and their after-actions are done after the execution of cleanupSpec methods.

Include and Exclude

Spock is capable of including and excluding specifications according to their classes, super-classes and interfaces and according to annotations that are applied to the specification. Spock is also capable of including and excluding individual features according to annotations that are applied to the feature method. The configuration for what to include or exclude is done according to the Spock Configuration File section.

Optimize Run Order

Spock can remember which features failed during the last run, how many consecutive runs they have failed, and how long each feature took to run. On subsequent runs, Spock will first run the features that failed in the last run, ordering them by how often they failed in a row; within the previously failed and non-failed groups, Spock runs the fastest tests first. This behaviour can be enabled according to the Spock Configuration File section. The default value is false.

Report Log

Spock can create a report log of the executed tests in JSON format. This report also contains things like @Title, @Narrative, @See, and @Issue values or block descriptors. This report can be enabled according to the Spock Configuration File section. The default is to not generate this report.

For the report to be generated, you have to enable it and set at least the logFileDir and logFileName . enabled can also be set via the system property spock.logEnabled , logFileDir can also be set via the system property spock.logFileDir and logFileName can also be set via the system property spock.logFileName .

If a logFileSuffix is set (or the system property spock.logFileSuffix ), it is appended to the base filename, separated by a dash. If the suffix contains the string #timestamp , this is replaced by the current date and time in UTC automatically. If you instead want to have your local date and time, you can use the setting from the example below.

Third-Party Extensions

You can find a list of third-party extensions in the Spock Wiki.

Writing Custom Extensions

There are two types of extensions that can be created for usage with Spock. These are global extensions and annotation driven local extensions. For both extension types you implement a specific interface which defines some callback methods. In your implementation of those methods you can set up the magic of your extension, for example by adding interceptors to various interception points that are described below.

Which type of extension you create depends on your use case. If you want to do something once during the Spock run - at the start or end - or want to apply something to all executed specifications without the user of the extension having to do anything besides including your extension in the classpath, then you should opt for a global extension. If you instead want to apply your magic only by choice of the user, then you should implement an annotation driven local extension.

Global Extensions

To create a global extension you need to create a class that implements the interface IGlobalExtension and put its fully-qualified class name in a file META-INF/services/org.spockframework.runtime.extension.IGlobalExtension in the class path. As soon as these two conditions are satisfied, the extension is automatically loaded and used when Spock is running. For convenience there is also the class AbstractGlobalExtension , which provides empty implementations for all methods in the interface, so that only the needed ones need to be overridden.

IGlobalExtension has the following three methods:

start()

This is called once at the very start of the Spock execution.

visitSpec(SpecInfo spec)

This is called once for each specification. In this method you can prepare a specification with your extension magic, like attaching interceptors to various interception points as described in the chapter Interceptors.

stop()

This is called once at the very end of the Spock execution.

Annotation Driven Local Extensions

To create an annotation driven local extension you need to create a class that implements the interface IAnnotationDrivenExtension . As type argument to the interface you need to supply an annotation class that has @Retention set to RUNTIME , @Target set to one or more of FIELD , METHOD and TYPE - depending on where you want your annotation to be applicable - and @ExtensionAnnotation applied, with the IAnnotationDrivenExtension class as argument. Of course the annotation class can have some attributes with which the user can further configure the behaviour of the extension for each annotation application. For convenience there is also the class AbstractAnnotationDrivenExtension , which provides empty implementations for all methods in the interface, so that only the needed ones need to be overridden.

Your annotation can be applied to a specification, a feature method, a fixture method or a field. In all other places where the @Target allows it, such as helper methods, the annotation will be ignored and has no effect other than being visible in the source code.

IAnnotationDrivenExtension has the following five methods, where in each you can prepare a specification with your extension magic, like attaching interceptors to various interception points as described in the chapter Interceptors:

visitSpecAnnotation(T annotation, SpecInfo spec)

This is called once for each specification where the annotation is applied, with the annotation instance as first parameter and the specification info object as second parameter.

visitFeatureAnnotation(T annotation, FeatureInfo feature)

This is called once for each feature method where the annotation is applied with the annotation instance as first parameter and the feature info object as second parameter.

visitFixtureAnnotation(T annotation, MethodInfo fixtureMethod)

This is called once for each fixture method where the annotation is applied with the annotation instance as first parameter and the fixture method info object as second parameter.

visitFieldAnnotation(T annotation, FieldInfo field)

This is called once for each field where the annotation is applied with the annotation instance as first parameter and the field info object as second parameter.

visitSpec(SpecInfo spec)

This is called once for each specification within which the annotation is applied to at least one of the supported places defined above. It gets the specification info object as sole parameter. This method is called after all other methods of this interface for each applied annotation are processed.

Configuration Objects

You can add your own sections in the Spock Configuration File for your extension by creating POJOs or POGOs that are annotated with @ConfigurationObject and have a default constructor (either implicit or explicit). The argument to the annotation is the name of the top-level section that is added to the Spock configuration file syntax. The default values for the configuration options are defined in the class by initializing the fields at declaration time or in the constructor. In the Spock configuration file those values can then be edited by the user of your extension.

To use the values of the configuration object in your extension, just define an uninitialized instance field of that type. Spock will then automatically create exactly one instance of the configuration object per Spock run, apply the settings from the configuration file to it (before the start() methods of global extensions are called) and inject that instance into the extension class instances.

A configuration object cannot be used exclusively in an annotation driven local extension; it has to be used in at least one global extension to be properly initialized and populated with the settings from the configuration file. If the configuration object is used in a global extension, you can also use it just fine in an annotation driven local extension. But if the configuration object is only used in an annotation driven local extension, you will get an exception when the configuration object is to be injected into the extension, and you will also get an error when the configuration file is evaluated and contains the section, as the configuration object is not properly registered.

Interceptors

For applying the magic of your extension, there are various interception points where you can attach interceptors from the extension methods described above to hook into the Spock lifecycle. For each interception point there can of course be multiple interceptors added by arbitrary Spock extensions (shipped or third-party). Their order currently depends on the order in which they are added, but no ordering assumptions should be made within one interception point.

An ellipsis in the figure means that the block before it can be repeated an arbitrary number of times.

The "… method" interceptors are of course only run if there are actual methods of this type to be executed (the white boxes), and those can inject parameters to be given to the method that will be run.

The difference between the shared initializer interceptor and the shared initializer method interceptor, and between the initializer interceptor and the initializer method interceptor (there can be at most one of each of those methods), is that the methods exist only if there are @Shared, respectively non-@Shared, fields that get values assigned at declaration time. The compiler puts those initializations into a generated method and calls it at the proper place in the lifecycle. So if there are no such initializations, no method is generated and thus the method interceptor is never called. The non-method interceptors are always called at the proper place in the lifecycle to do work that has to be done at that time.

To create an interceptor to be attached to an interception point, you need to create a class that implements the interface IMethodInterceptor. This interface has the sole method intercept(IMethodInvocation invocation). The invocation parameter can be used to get and modify the current state of execution. Each interceptor must call the method invocation.proceed(), which continues the lifecycle, unless you really want to prevent further execution of the nested elements as shown in the figure above. But this should be a very rare use case.

If you write an interceptor that can be used at different interception points and should do different work at different interception points, there is also the convenience class AbstractMethodInterceptor , which you can extend and which provides various methods for overriding that are called for the various interception points. Most of these methods have a double meaning, like interceptSetupMethod which is called for the setup interceptor and the setup method interceptor . If you attach your interceptor to both of them and need a differentiation, you can check for invocation.method.reflection , which will be set in the method interceptor case and null otherwise. Alternatively you can of course build two different interceptors or add a parameter to your interceptor and create two instances, telling each at addition time whether it is attached to the method interceptor or the other one.

Injecting Method Parameters

If your interceptor should support custom method parameters for wrapped methods, this can be done by modifying invocation.arguments . Two use cases for this would be a mocking framework that can inject method parameters that are annotated with a special annotation or some test helper that injects objects of a specific type that are created and prepared for usage automatically.

invocation.arguments may be an empty array or an array of arbitrary length, depending on which interceptors ran before and may also have manipulated this array for parameter injection. So if, for example, you investigate the method parameters with invocation.method.reflection.parameters and find that you want to inject the fifth parameter, you should first check whether the arguments array is at least five elements long. If not, you should assign it a new array that is at least five elements long and copy the contents of the old array into the new one. Then you can assign your objects to be injected.

When using data driven features (methods with a where: block), the user of your extension has to follow some restrictions if parameters should be injected by your extension:

  • all data variables and all to-be-injected parameters have to be defined as method parameters
  • all method parameters have to be assigned a value in the where: block
  • the order of the method parameters has to be identical to the order of the data variables in the where: block
  • the to-be-injected parameters have to be set to any value in the where: block, for example null

Of course you can also make your extension inject a value only if none is set already, as the where: block assignments happen before the method interceptor is called.


Writing UDFs using Java

To write a UDF using Java, we have to integrate the jar file Pig-0.15.0.jar. In this section, we discuss how to write a sample UDF using Eclipse. Before proceeding further, make sure you have installed Eclipse and Maven on your system.

Follow the steps given below to write a UDF function:

Open Eclipse and create a new project (say myproject).

Convert the newly created project into a Maven project.

Copy the following content in the pom.xml. This file contains the Maven dependencies for Apache Pig and Hadoop-core jar files.

Save the file and refresh it. In the Maven Dependencies section, you can find the downloaded jar files.

Create a new class file with name Sample_Eval and copy the following content in it.

While writing UDFs, it is mandatory to inherit the EvalFunc class and provide an implementation of the exec() function. Within this function, the code required for the UDF is written. In the above example, we have written code to convert the contents of the given column to uppercase.

After compiling the class without errors, right-click on the Sample_Eval.java file. It gives you a menu. Select export as shown in the following screenshot.

On clicking export, you will get the following window. Click on JAR file.

Proceed further by clicking the Next > button. You will get another window where you need to enter the path in the local file system where the jar file is to be stored.

Finally click the Finish button. In the specified folder, a Jar file sample_udf.jar is created. This jar file contains the UDF written in Java.


The SELECT list supports the following syntax:

COLUMNS[n]
Array columns are used for reading data from text files. Use the columns[n] syntax in the SELECT list to return rows from text files in a columnar format. This syntax uses a zero-based index, so the first column is column 0.

DISTINCT
An option that eliminates duplicate rows from the result set, based on matching values in one or more columns.

expression
An expression formed from one or more columns that exist in the tables, files, or directories referenced by the query. An expression can contain functions and aliases that define select list entries. You can also use a scalar aggregate subquery as the expression in the SELECT list.

scalar aggregate subquery
A scalar aggregate subquery is a regular SELECT query in parentheses that returns exactly one column value from one row. The returned value is used in the outer query. The scalar aggregate subquery must include an aggregate function, such as MAX(), AVG(), or COUNT(). If the subquery returns zero rows, the value of the subquery expression is null. If it returns more than one row, Drill returns an error. Scalar subqueries are not valid expressions in the following cases:

AS column_alias
A temporary name for a column in the final result set. The AS keyword is optional.


3.1.3.1.7 sdtContent (Ruby Inline-Level Structured Document Tag Content)

This element specifies the last known contents of a structured document tag around one or more inline-level structures (runs, DrawingML objects, fields, and so on). This element's contents shall be treated as a cache of the contents to be displayed in the structured document tag for the following reasons:

If the structured document tag specifies an XML mapping via the dataBinding element ([ISO/IEC-29500-1] §17.5.2.6, dataBinding), changes to the custom XML data part shall be reflected in the structured document tag as needed

If the contents of the structured document tag are placeholder text via the showingPlcHdr element ([ISO/IEC-29500-1] §17.5.2.39, showingPlcHdr), then this content may be updated with the placeholder text stored in the Glossary Document part

[Example: Consider a structured document tag with the friendly name firstName that shall be located around two runs in a WordprocessingML document. This requirement would be specified as follows in the WordprocessingML:

The sdtContent element contains two adjacent runs (it is an inline-level structured document tag content container). end example]


Read the Docs

Read the Docs simplifies software documentation by automating building, versioning, and hosting of your docs for you.

Free docs hosting for open source

We will host your documentation for free, forever. There are no tricks. We help over 100,000 open source projects share their docs, including a custom domain and theme.

Always up to date

Whenever you push code to your favorite version control service, whether that is GitHub, BitBucket, or GitLab, we will automatically build your docs so your code and documentation are never out of sync.

Downloadable formats

We build and host your docs for the web, but they are also viewable as PDFs, as single page HTML, and for eReaders. No additional configuration is required.

Multiple versions

We can host and build multiple versions of your docs so having a 1.0 version of your docs and a 2.0 version of your docs is as easy as having a separate branch or tag in your version control system.

Search all the docs

