Search Results

Search found 6079 results on 244 pages for 'define'.

Page 55/244 | < Previous Page | 51 52 53 54 55 56 57 58 59 60 61 62  | Next Page >

  • ODI 11g - Dynamic and Flexible Code Generation

    - by David Allan
    ODI supports conditional branching at execution time in its code generation framework. This is a little used, little known, but very powerful capability - it lets one piece of template code behave dynamically based on a runtime variable's value, for example. Generally, knowledge modules are free of any variable dependency. Using variables within a knowledge module for this kind of dynamic capability is a valid use case - definitely in the highly specialized area. The example I will illustrate is much simpler - how to define a filter (based on a mapping here) that may or may not be included depending on whether a certain value is defined for a variable at runtime. I define a variable V_COND; if I set this variable's value to 1, then I will include the filter condition 'EMP.SAL > 1', otherwise I will just use '1=1' as the filter condition. I use ODI's substitution tags, in particular the special tag '<$', which is processed just prior to execution in the runtime code - so this code is included in the ODI scenario code and it is processed after variables are substituted (unlike the '<?' tag). So the lines below are not equal ... <$ if ( "#V_COND".equals("1")  ) { $> EMP.SAL > 1 <$ } else { $> 1 = 1 <$ } $> <? if ( "#V_COND".equals("1")  ) { ?> EMP.SAL > 1 <? } else { ?> 1 = 1 <? } ?> When the <? code is evaluated, the code is executed without variable substitution - so we do not get the desired semantics; we must use the <$ code. You can see that the Jython (Java) code in red is the conditional if statement that drives whether 'EMP.SAL > 1' or '1=1' is included in the generated code. For this illustration you need at least the ODI 11.1.1.6 release - with the vanilla 11.1.1.5 release it didn't work for me (maybe patches?). As I mentioned, normally KMs don't have dependencies on variables - since any users must then have these variables defined etc. - but it does afford a lot of runtime flexibility if such capabilities are required - something to keep in mind, definitely.
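
    For readability, here is the same '<$' conditional laid out across lines - this is just the snippet above reformatted, nothing new except whitespace; the '#V_COND' reference and the two filter expressions are exactly as in the post:

        <$ if ( "#V_COND".equals("1") ) { $>
          EMP.SAL > 1
        <$ } else { $>
          1 = 1
        <$ } $>

    Because the '<$' tags are evaluated after variable substitution, '#V_COND' has already been replaced by its runtime value when the Jython condition runs; the equivalent '<?' version is evaluated without variable substitution and so never sees V_COND's runtime value.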

    Read the article

  • SCORM and the Learning Management System (LMS)

    What actually is SCORM? SCORM, Shareable Content Object Reference Model, is a standard for web-based e-learning that has been developed to define communication between client-side content and a runti... [Author: Stuart Campbell - Computers and Internet - October 05, 2009]

    Read the article

  • Stairway to XML: Level 7 - Updating Data in an XML Instance

    You need to provide the necessary keywords and define the XQuery and value expressions in your XML DML expression in order to use the modify() method to update element and attribute values in either typed or untyped XML instances in an XML column. Robert Sheldon explains how.
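
    As a quick illustration of the kind of statement the article builds up to - the table, column and element names here are made up, not taken from the Stairway series - an XML DML 'replace value of' update against an XML column looks like this:

        UPDATE dbo.Customers
        SET Info.modify('
            replace value of (/Customer/Name/text())[1]
            with "Jane Doe"')
        WHERE CustomerId = 42;

    The [1] predicate is needed because the modify() method only accepts a singleton target.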

    Read the article

  • SQL SERVER – What is SSAS Tabular Data model and Why to use it – Part 2

    - by Pinal Dave
    In my last article, I talked about the basics of the tabular data model and why to use it. Then I demonstrated the step-by-step creation of a basic tabular model project. In this part I’m going to throw some light on how to create measures and analyze them in Excel. If you look at the tabular project closely, you will notice that we have not defined any measure yet. So, in the first step we will define the measure. Open the solution and select the column you want to define as a measure. Then, click on the summation icon on the toolbar. You will see the aggregated results at the bottom of that column. You also have other choices, like average, min, max, count and distinct count. After creating the required measures, we need to analyze our data in Excel. To do this, click on the Excel icon in the upper left corner of the toolbar. This will open your analysis in Excel. Notice the pivot table field list here. I have highlighted the measures that we created in the earlier step. Now, we can use these measures in our analysis. Next, we have to put the required fields in their respective places as column labels, row labels, values and report filter for the analysis. See the snapshot below for details; it shows region-wise sales on a yearly basis. You can even apply filters on the above analysis by placing the slicer field in the report filter. In our example, we will take the English product name as a filter. You can use the filter as depicted in the snapshot below. Optionally, you can also use the slicer to filter data more interactively. To further improve our analysis, we can insert pivot charts. That’s all for this time; in my next post I’m going to show in detail how to create hierarchies, perspectives, KPIs and many more features. Author: Namita Sharma, Senior Corporate Trainer at Koenig Solutions. Reference: Pinal Dave (http://blog.SQLAuthority.com)
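
    For reference, the summation button simply generates an aggregation you could also type yourself as a DAX measure in the measure grid; a minimal sketch (the table and column names are illustrative, not from the article) would be:

        Total Sales := SUM(FactSales[SalesAmount])

    Writing the measure by hand becomes useful once you need something beyond the built-in aggregations, such as CALCULATE with filters.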

    Read the article

  • What is a Non-Functional Requirement?

    - by atconway
    In my breakdown of work I have to define work against 'Functional' and 'Non-Functional' design elements in my applications. I read the description from Wikipedia here: https://en.wikipedia.org/wiki/Non-functional_requirement but, as is typical, the description did not speak to me clearly enough to clear up my understanding. Can someone please explain, in terms of an example when creating an application from scratch, what would be defined as a 'Non-Functional' requirement?

    Read the article

  • The need for user-defined index types

    - by Greg Low
    Since the removal of the 8KB limit on serialization, the ability to define new data types using SQL CLR integration is now almost at a usable level, apart from one key omission: indexes. We have no ability to create our own types of index to support our data types. As a good example of this, consider that when Microsoft introduced the geometry and geography (spatial) data types, they did so as system CLR data types but also needed to introduce a spatial index as a new type of index. Those of us that...(read more)

    Read the article

  • Render an image with separate layers for shadows/reflections in 3D Studio Max?

    - by Bernd Plontsch
    I have a scene with a simple object standing on the ground in the center. Caused by the lights and the ground material, there is some shadow and reflection on the ground surrounding the object. How can I render an image containing 3 separate layers for: the object, the ground, and the reflection/shadow on the ground? Which format should I use for this (it should include all 3 layers, and I should be able to enable/disable them in Photoshop)? How do I define or prepare those layers to be rendered as image layers?

    Read the article

  • AutoKey inserts blank lines

    - by Mike Pretzlaw
    Situation: GNOME Shell with AutoKey (autokey-gtk). It has a predefined shortcut "adr" which should write an address after hitting space, and a shortcut "date" which should write the current date (after hitting tab). Problem: no matter what I define, "adr" or "date", it always inserts as many blank lines as the template uses. Example: "date" should autocomplete after pressing space to "13/08/01", but it inserts one empty line; "adr" should produce my full address, but it inserts 4 empty lines. Question: what could be wrong with my AutoKey? Do you need additional information?

    Read the article

  • Apache2 syntax error, can't access 000-default

    - by enrique2334
    I have been using Apache2 and Webmin with my Raspberry Pi. After a restart and reinstallation, Apache won't start: > sudo /etc/init.d/apache2 restart apache2: Syntax error on line 268 of /etc/apache2/apache2.conf: Could not open configuration file /etc/apache2/sites-enabled/000-default: No such file or directory Action 'configtest' failed. The Apache error log may have more information. failed! The file 000-default is there and cannot be opened; its permissions are root:root. My apache2.conf file looks like this (bottom half): # ErrorLog: The location of the error log file. # If you do not specify an ErrorLog directive within a <VirtualHost> # container, error messages relating to that virtual host will be # logged here. If you *do* define an error logfile for a <VirtualHost> # container, that host's errors will be logged there and not here. # ErrorLog ${APACHE_LOG_DIR}/error.log # # LogLevel: Control the number of messages logged to the error_log. # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. # LogLevel debug # Include module configuration: Include mods-enabled/*.load Include mods-enabled/*.conf # Include list of ports to listen on and which to use for name based vhosts Include ports.conf # # The following directives define some format nicknames for use with # a CustomLog directive (see below). # If you are behind a reverse proxy, you might want to change %h into %{X-Forwarded-For}i # LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %O" common LogFormat "%{Referer}i -> %U" referer LogFormat "%{User-agent}i" agent # Include of directories ignores editors' and dpkg's backup files, # see the comments above for details. # Include generic snippets of statements Include conf.d/ # Include the virtual host configurations: Include sites-enabled/ <VirtualHost *:80> DocumentRoot /var/www <Directory /var/www> allow from all Options +Indexes </Directory> ServerName IMASERVER </VirtualHost> Does anyone know what the cause of this is?
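
    A plausible first check - assuming the usual Debian/Raspbian layout, which the question does not show - is that the sites-enabled entry is a missing or dangling symlink; re-enabling the default site and re-running the config test would confirm it:

        # recreate the symlink in sites-enabled if the site still exists in sites-available
        sudo a2ensite 000-default
        # verify the configuration before restarting
        sudo apachectl configtest
        sudo /etc/init.d/apache2 restart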

    Read the article

  • JavaScript: Code Folding

    - by Petr
    Today I would like to mention code folding in the new JavaScript editor support, which is available in the continual builds from our server. It's a basic feature, but it was mentioned in a comment under a previous post. With the new support you can fold comments and every code block between { and }; the current support allows only methods to be folded. The difference is shown below: in the picture, the left side shows the current folding and the right side the new one. The code folding can be switched off in the Editor Options (Tools main menu -> Options -> Editor category -> General tab). In this dialog you can also define which folds should be collapsed by default when you open a file. These options more closely fit the Java editor's needs, but you can see in the next picture how the options are mapped for JavaScript code. The Method option folds all functions in the code. Other code blocks are folded through the option Tags and Other Code Blocks. The documentation comments (starting with /**) are folded through Javadoc Comments, and when you check Initial Comment, all comments that start with /* are folded by default. The new JavaScript editor also supports custom folds. To add your custom fold, type in two special comments as shown in this example: // <editor-fold> Your code goes here... // </editor-fold> You can define the default description of a collapsed fold by adding a "desc" attribute: // <editor-fold desc="This is my super secret genius code."> Your code goes here... // </editor-fold> You can set a fold to be collapsed by default by adding a "defaultstate" attribute: // <editor-fold defaultstate="collapsed"> Your code goes here... // </editor-fold> There is a code template that helps with writing custom fold comments; the abbreviation for the template is fcom. As I wrote, the new JS support is available in the continual builds. Go here for more info.
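
    Putting the two attributes together - the description text and the function below are mine, purely illustrative - a custom fold in a JavaScript file could look like this:

        // <editor-fold desc="Formatting helpers" defaultstate="collapsed">
        function formatName(first, last) {
            return last + ", " + first;
        }
        // </editor-fold>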

    Read the article

  • Choose Custom New Tab Pages in Chrome

    - by Asian Angel
    For most people the default New Tab page in Chrome works perfectly well for their purposes. But if you would prefer to choose what opens in a new tab yourself, then you will definitely want to have a look at the "Define your own new tab!" extension for Google Chrome. Before: unless you are using a Speed Dial (or similar) extension, each time you click on the "New Tab" button you get the same old page. It would certainly be a lot more satisfying if you could choose custom webpage(s) to open as new tabs. After: once you have the extension installed, the best thing to do is click on the "New Tab" button. That will open up the "Options" page, where you can enter one or two custom website URLs of your choosing. Once you have your custom URLs entered, click on "Save". As soon as you click on the "New Tab" button your new custom webpage(s) will open. If you chose two webpages, the first choice will open focused on the right side instead of the left. Clicking on the "Home" button will also open the webpage(s) that you chose. The webpage(s) that you chose will also open as your starting home pages each time you start your browser. Conclusion: if you have wanted to choose your own custom new tab page, then this is the extension you have been waiting for. Links: Download the Define your own new tab! extension (Google Chrome Extensions)

    Read the article

  • Five hours of Task Flow Overview Recordings Available

    - by Frank Nimphius
    In addition to the ADF Controller task flow documentation in Oracle Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework 11g Release 1 http://download.oracle.com/docs/cd/E21764_01/web.1111/b31974/partpage3.htm#BABHIIAI The ADF Insider website … http://www.oracle.com/technetwork/developer-tools/adf/learnmore/adfinsider-093342.html … hosts five online videos that explain how to build and work with ADF Controller task flows in Oracle ADF. ADF Task Flow - Overview (Part 1) This 90 minute recording introduces the concept of ADF unbounded and bounded task flows, as well as other ADF Controller features. The session starts with an overview of unbounded task flows, bounded task flows and the different activities that exist for developers to build complex application flows. Exception handling and the Train navigation model is also covered in this first part of a two part series. By example of developing a sample application, the recording guides viewers through building unbounded and bounded task flows. This session is continued in a second part. http://download.oracle.com/otn_hosted_doc/jdeveloper/11gdemos/taskflow-overview-p1/taskflow-overview-p1.html ADF Task Flow - Overview (Part 2) This 75 minute session continues where part 1 ended and completes the sample application that guides viewers through different aspects of unbounded and bounded task flow development. In this recording, memory scopes, save for later, task flow opening in dialogs and remote task flow calls are explained and demonstrated. If you are new to ADF Task Flow, then it is recommended to first watch part 1 of this series to be able to follow the explanation guided by the sample application. http://download.oracle.com/otn_hosted_doc/jdeveloper/11gdemos/taskflow-overview-p2/taskflow-overview-p2.html ADF Region Interaction - An Overview This session covers most of the options that exist for communicating between regions. It briefly discusses what it takes to build regions from bounded task flows before going into details using slides and samples. The following interaction is explained: contextual events, queue action in region, input parameters and PPR, drag and drop, shared Data Controls, parent action and region navigation listener. http://download.oracle.com/otn_hosted_doc/jdeveloper/11gdemos/adf-region-interaction/adf-region-interaction.html ADF Region Interaction - Contextual Events Contextual event is used as a communication channel between a parent view and its contained regions, as well as between regions. By example, this session explains how to set up contextual events, how to define producers and event listeners and how to define the payload message. http://download.oracle.com/otn_hosted_doc/jdeveloper/11gdemos/AdfInsiderContextualEvents/AdfInsiderContextualEvents.html

    Read the article

  • Basis of definitions

    - by Yttrill
    Let us suppose we have a set of functions which characterise something: in the OO world, methods characterising a type. In mathematics these are propositions and we have two kinds: axioms and lemmas. Axioms are assumptions; lemmas are easily derived from them. In C++, axioms are pure virtual functions. Here's the problem: there's more than one way to axiomatise a system. Given a set of propositions or methods, a subset of the propositions which is necessary and sufficient to derive all the others is called a basis. So too, for methods or functions, we have a desired set which must be defined, and typically every one has one or more definitions in terms of the others, and we require the programmer to provide instance definitions which are sufficient to allow all the others to be defined, and, if there is an overspecification, then it is consistent. Let me give an example (in Felix; Haskell code would be similar): class Eq[t] { virtual fun ==(x:t,y:t):bool => eq(x,y); virtual fun eq(x:t, y:t)=> x == y; virtual fun != (x:t,y:t):bool => not (x == y); axiom reflex(x:t): x == x; axiom sym(x:t, y:t): (x == y) == (y == x); axiom trans(x:t, y:t, z:t): implies(x == y and y == z, x == z); } Here it is clear: the programmer must define either == or eq or both. If both are defined, the definitions must be equivalent. Failing to define one doesn't cause a compiler error, it causes an infinite loop at run time. Defining both inequivalently doesn't cause an error either; it is just inconsistent. Note the axioms specified constrain the semantics of any definition. Given a definition of ==, either directly or via a definition of eq, then != is defined automatically, although the programmer might replace the default with something more efficient; clearly such an overspecification has to be consistent. Please note, == could also be defined in terms of !=, but we didn't do that. A characterisation of a partial or total order is more complex. It is much more demanding, since there is a combinatorial explosion of possible bases. There is a reason to desire overspecification: performance. There is also another reason: choice and convenience. So here, there are several questions: one is how to check semantics are obeyed, and I am not looking for an answer here (way too hard!). The other question is: how can we specify, and check, that an instance provides at least a basis? And a much harder question: how can we provide several default definitions which depend on the basis chosen?
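
    Since the post notes that Haskell code would be similar, here is a minimal Haskell sketch of the same idea (the class name is made up to avoid clashing with the standard Eq): each operation has a default definition in terms of the other, so either one on its own is a sufficient basis for an instance.

        class MyEq t where
          eq  :: t -> t -> Bool
          neq :: t -> t -> Bool
          -- each default is written in terms of the other,
          -- so an instance must define at least one of them
          eq  x y = not (neq x y)
          neq x y = not (eq x y)

        -- defining eq alone is a sufficient basis; neq falls out of the default
        instance MyEq Int where
          eq x y = x == y

    Just as in the Felix version, an instance that defines neither method compiles but loops forever at run time, and nothing forces two hand-written definitions to agree.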

    Read the article

  • SDL_BlitSurface segmentation fault (surfaces aren't null)

    - by Trollkemada
    My app is crashing on SDL_BlitSurface() and i can't figure out why. I think it has something to do with my static object. If you read the code you'll why I think so. This happens when the limits of the map are reached, i.e. (iwidth || jheight). This is the code: Map.cpp (this render) Tile const * Map::getTyle(int i, int j) const { if (i >= 0 && j >= 0 && i < width && j < height) { return data[i][j]; } else { return &Tile::ERROR_TYLE; // This makes SDL_BlitSurface (called later) crash //return new Tile(TileType::ERROR); // This works with not problem (but is memory leak, of course) } } void Map::render(int x, int y, int width, int height) const { //DEBUG("(Rendering...) x: "<<x<<", y: "<<y<<", width: "<<width<<", height: "<<height); int firstI = x / TileType::PIXEL_PER_TILE; int firstJ = y / TileType::PIXEL_PER_TILE; int lastI = (x+width) / TileType::PIXEL_PER_TILE; int lastJ = (y+height) / TileType::PIXEL_PER_TILE; // The previous integer division rounds down when dealing with positive values, but it rounds up // negative values. This is a fix for that (We need those values always rounded down) if (firstI < 0) { firstI--; } if (firstJ < 0) { firstJ--; } const int firstX = x; const int firstY = y; SDL_Rect srcRect; SDL_Rect dstRect; for (int i=firstI; i <= lastI; i++) { for (int j=firstJ; j <= lastJ; j++) { if (i*TileType::PIXEL_PER_TILE < x) { srcRect.x = x % TileType::PIXEL_PER_TILE; srcRect.w = TileType::PIXEL_PER_TILE - (x % TileType::PIXEL_PER_TILE); dstRect.x = i*TileType::PIXEL_PER_TILE + (x % TileType::PIXEL_PER_TILE) - firstX; } else if (i*TileType::PIXEL_PER_TILE >= x + width) { srcRect.x = 0; srcRect.w = x % TileType::PIXEL_PER_TILE; dstRect.x = i*TileType::PIXEL_PER_TILE - firstX; } else { srcRect.x = 0; srcRect.w = TileType::PIXEL_PER_TILE; dstRect.x = i*TileType::PIXEL_PER_TILE - firstX; } if (j*TileType::PIXEL_PER_TILE < y) { srcRect.y = 0; srcRect.h = TileType::PIXEL_PER_TILE - (y % TileType::PIXEL_PER_TILE); dstRect.y = j*TileType::PIXEL_PER_TILE + (y % TileType::PIXEL_PER_TILE) - firstY; } else if (j*TileType::PIXEL_PER_TILE >= y + height) { srcRect.y = y % TileType::PIXEL_PER_TILE; srcRect.h = y % TileType::PIXEL_PER_TILE; dstRect.y = j*TileType::PIXEL_PER_TILE - firstY; } else { srcRect.y = 0; srcRect.h = TileType::PIXEL_PER_TILE; dstRect.y = j*TileType::PIXEL_PER_TILE - firstY; } SDL::YtoSDL(dstRect.y, srcRect.h); SDL_BlitSurface(getTyle(i,j)->getType()->getSurface(), &srcRect, SDL::getScreen(), &dstRect); // <-- Crash HERE /*DEBUG("i = "<<i<<", j = "<<j); DEBUG("srcRect.x = "<<srcRect.x<<", srcRect.y = "<<srcRect.y<<", srcRect.w = "<<srcRect.w<<", srcRect.h = "<<srcRect.h); DEBUG("dstRect.x = "<<dstRect.x<<", dstRect.y = "<<dstRect.y);*/ } } } Tile.h #ifndef TILE_H #define TILE_H #include "TileType.h" class Tile { private: TileType const * type; public: static const Tile ERROR_TYLE; Tile(TileType const * t); ~Tile(); TileType const * getType() const; }; #endif Tile.cpp #include "Tile.h" const Tile Tile::ERROR_TYLE(TileType::ERROR); Tile::Tile(TileType const * t) : type(t) {} Tile::~Tile() {} TileType const * Tile::getType() const { return type; } TileType.h #ifndef TILETYPE_H #define TILETYPE_H #include "SDL.h" #include "DEBUG.h" class TileType { protected: TileType(); ~TileType(); public: static const int PIXEL_PER_TILE = 30; static const TileType * ERROR; static const TileType * AIR; static const TileType * SOLID; virtual SDL_Surface * getSurface() const = 0; virtual bool isSolid(int x, int y) const = 0; }; #endif ErrorTyle.h #ifndef ERRORTILE_H #define 
ERRORTILE_H #include "TileType.h" class ErrorTile : public TileType { friend class TileType; private: ErrorTile(); mutable SDL_Surface * surface; static const char * FILE_PATH; public: SDL_Surface * getSurface() const; bool isSolid(int x, int y) const ; }; #endif ErrorTyle.cpp (The surface can't be loaded when building the object, because it is a static object and SDL_Init() needs to be called first) #include "ErrorTile.h" const char * ErrorTile::FILE_PATH = ("C:\\error.bmp"); ErrorTile::ErrorTile() : TileType(), surface(NULL) {} SDL_Surface * ErrorTile::getSurface() const { if (surface == NULL) { if (SDL::isOn()) { surface = SDL::loadAndOptimice(ErrorTile::FILE_PATH); if (surface->w != TileType::PIXEL_PER_TILE || surface->h != TileType::PIXEL_PER_TILE) { WARNING("Bad tile surface size"); } } else { ERROR("Trying to load a surface, but SDL is not on"); } } if (surface == NULL) { // This if doesn't get called, so surface != NULL ERROR("WTF? Can't load surface :\\"); } return surface; } bool ErrorTile::isSolid(int x, int y) const { return true; }

    Read the article

  • Render an image with layers for shadows/reflections, object and ground in 3D Studio Max?

    - by Bernd Plontsch
    I have a scene with a simple object standing on the ground in the center. This object has shadows and reflections on the ground. How can I render an image containing 3 separate layers for: the object, the ground, and the reflection/shadow on the ground? Which format do I use for this? (It should include all 3 layers, and I should be able to enable/disable them in Photoshop.) How do I define or prepare those layers to be rendered as image layers?

    Read the article

  • No Android SDK, neither Java found

    - by Alex
    I have Java installed correctly; I did it following the guide at http://www.wikihow.com/Install-Oracle-Java-on-Ubuntu-Linux. I also installed the Android SDK. However, when I try to create a new project in IntelliJ IDEA 12 and specify the Project SDK by choosing New - /home/alex/android-sdk-linux, it tells me: No Java SDK of appropriate version found. In addition to the Android SDK, you need to define a JSDK 1.5, 1.6 or 1.7. What did I miss?

    Read the article

  • Data Auditor by Example

    - by Jinjin.Wang
    OWB has a node Data Auditors under Oracle Module in the Projects Navigator. What is a data auditor and how do you use it? I will give an introduction to data auditors and show their usage by example. A data auditor is an important tool in ensuring that data quality levels meet business requirements. A data auditor validates data against a set of data rules to determine which records comply and which do not. It gathers statistical metrics on how well the data in a system complies with a rule by auditing and marking how many errors are occurring against the audited table. Data auditors are typically scheduled for regular execution as part of a process flow, to monitor the quality of the data in an operational environment such as a data warehouse or ERP system, either immediately after updates like data loads, or at regular intervals. How do you use a data auditor to monitor data quality? Only objects with data rules can be monitored, so the first step is to define data rules according to business requirements and apply them to the objects you want to monitor. The objects can be tables, views, materialized views, and external tables. Secondly, create a data auditor containing the objects. You can optionally configure the data auditor and set physical deployment parameters for it, which will be used while running the data auditor. Then deploy and run the data auditor either manually or as part of a process flow. After execution, the data auditor sets several output values, and records that are identified as not complying with the defined data rules contained in the data auditor are written to error tables. Here is an example. We have two tables DEPARTMENTS and EMPLOYEES (see pic-1 and pic-2; click here for DDL and data) imported into OWB. We want to gather statistical metrics on how well data in these two tables satisfies the following requirements: a. Values of the EMPLOYEES.EMPLOYEE_ID attribute are three-digit numbers. b. Valid values for EMPLOYEES.JOB_ID are IT_PROG, SA_REP, SH_CLERK, PU_CLERK, and ST_CLERK. c. EMPLOYEES.EMPLOYEE_ID is related to DEPARTMENTS.MANAGER_ID. Pic-1 EMPLOYEES Pic-2 DEPARTMENTS 1. To determine legal data within EMPLOYEES, or legal relationships between data in different columns of the two tables, we first define data rules based on the three requirements and apply them to the tables. a. The first requirement is about patterns that an attribute is allowed to conform to. We create a Domain Pattern List data rule EMPLOYEE_PATTERN_RULE here. The pattern is defined in the Oracle Database regular expression syntax as ^([0-9]{3})$. We then apply data rule EMPLOYEE_PATTERN_RULE to table EMPLOYEES.
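
    To make the first rule concrete, a query along the following lines - this is just an illustration of what the Domain Pattern List rule checks, not the code OWB generates - would list the EMPLOYEES rows that violate it:

        SELECT employee_id
        FROM   employees
        WHERE  NOT REGEXP_LIKE(TO_CHAR(employee_id), '^([0-9]{3})$');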

    Read the article

  • What is required for a scope in an injection framework?

    - by johncarl
    Working with libraries like Seam, Guice and Spring, I have become accustomed to dealing with variables within a scope. These libraries give you a handful of scopes and allow you to define your own. This is a very handy pattern for dealing with variable lifecycles and dependency injection. I have been trying to identify where scoping is the proper solution, or where another solution is more appropriate (context variable, singleton, etc). I have found that if the scope lifecycle is not well defined, it is very difficult and often failure-prone to manage injections in this way. I have searched on this topic but have found little discussion of the pattern. Are there some good articles discussing where to use scoping and what the required/suggested prerequisites for scoping are? I am interested in both reference discussions and your view on what is required or suggested for a proper scope implementation. Keep in mind that I am referring to scoping as a general idea; this includes things like globally scoped singletons, request- or session-scoped web variables, conversation scopes, and others. Edit: Some simple background on custom scopes: Google Guice custom scope. Some definitions relevant to the above: “scoping” - a set of requirements that define what objects get injected at what time. A simple example of this is a thread scope, based on a ThreadLocal. This scope would inject a variable based on what thread instantiated the class (a minimal sketch of such a scope is shown below). “context variable” - a repository passed from one object to another holding relevant variables. Much like scoping, this is a more brute-force way of accessing variables based on the calling code. Example: methodOne(Context context){ methodTwo(context); } methodTwo(Context context){ ... //same context as method one, if called from method one } “globally scoped singleton” - following the singleton pattern, there is one object per application instance. This applies to scopes because there is a basic lifecycle to this object: there is only one of these objects instantiated. Here's an example of a JSR-330 singleton-scoped object: @Singleton public class SingletonExample{ ... } usage: public class One { @Inject SingletonExample example1; } public class Two { @Inject SingletonExample example2; } After instantiation: one.example1 == two.example2 //true;
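
    Referring to the thread-scope example mentioned above, here is a minimal sketch of a custom Guice scope backed by a ThreadLocal. The class name and comments are mine, not from the question; it simply caches one instance per scoped key per thread.

        import com.google.inject.Key;
        import com.google.inject.Provider;
        import com.google.inject.Scope;
        import java.util.HashMap;
        import java.util.Map;

        // A custom scope: each thread sees its own instance of every key bound in this scope.
        public class ThreadScope implements Scope {
            private final ThreadLocal<Map<Key<?>, Object>> values =
                    ThreadLocal.withInitial(HashMap::new);

            @Override
            public <T> Provider<T> scope(final Key<T> key, final Provider<T> unscoped) {
                return () -> {
                    Map<Key<?>, Object> perThread = values.get();
                    @SuppressWarnings("unchecked")
                    T instance = (T) perThread.get(key);
                    if (instance == null) {
                        instance = unscoped.get();   // create lazily, once per thread
                        perThread.put(key, instance);
                    }
                    return instance;
                };
            }
        }

    In a module this would be registered with bindScope, e.g. bindScope(ThreadScoped.class, new ThreadScope()) for a custom @ThreadScoped binding annotation (the annotation is also hypothetical here). Note that the well-defined lifecycle the question asks about is exactly what this sketch lacks: nothing ever clears the map when the thread's unit of work ends.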

    Read the article

  • Renaming Photos with digiKam

    <b>Scribbles and Snaps:</b> "Of course, renaming each and every photo by hand is not particularly practical, especially if you take dozens or even hundreds of photos each day. This is when digiKam's Rename feature can come in rather handy. You can use it to define rather advanced renaming rules and apply them to multiple photos in one fell swoop."

    Read the article

  • How to keep balance / Unlock items / achievement rules

    - by Mark Knol
    I'm working on an engine for a game, to learn JavaScript and just because it's fun. I'm a Flash developer; I know how to build websites. Now making games is a different challenge, and JavaScript is a challenge, but I'd love to learn how to structure code and what patterns are common. I don't mind if the game never gets finished; I'm mostly interested in the programming part of it. I don't have a particular end result in mind, so I'll see where it takes me. I currently have a system where you can buy items. The items cost a specified amount of gold, silver, diamonds etc. When you have selected and bought the item, it takes time before you get rewarded. When the time is over, you are rewarded with other properties (gold, energy, diamonds). For example, you can buy an apple for 50 gold; it takes a minute, and you get rewarded with 75 energy. Or if you take a run, it costs 50 energy, takes 5 minutes, and the reward is 25 gold and 25 silver. These definitions are what I call actions. I currently have a system where this already works and I can define as many actions with as many properties as I want. The definitions I have kinda look like this: {id:101, category:544, onInit:{gold:-75}, onComplete:{energy:75}, time:2000, name:"Apple", locked: false} {id:102, category:544, onInit:{gold:-135}, onComplete:{energy:145}, time:2000, name:"Banana", locked: false} {id:106, category:302, onInit:{energy:-50, power: -25}, onComplete:{gold:100, diamonds:2}, time:10000, name:"Run", locked: false} {id:107, category:302, onInit:{energy:-70, silver: -55}, onComplete:{gold:100}, time:10000, name:"Dance", locked: false} {id:108, category:302, onInit:{energy:-230, power: -355}, onComplete:{gold:70, silver:70}, time:10000, name:"Fitness", locked: false} Now, I would love to add a system where I can lock/unlock the actions using achievement rules. Let's say, if you buy 10 apples, you unlock a new action, like bananas, which cost more and reward more. In the future I may also want to restrict achievements and actions to levels. I am kind of stuck on how to structure this. I have 2 questions: Which patterns are used to define achievements? How/where are they defined? Should it be part of the action, or should it be a separate controller? Is it a good idea to register all completed actions to it? I think I want multiple types of achievement rules; I'd love to hear some ideas on how to develop it. How do you create/find a good balance, so the user does not get stuck and cannot cheat by repeating a pattern of actions to get too many rewards? I know there is not a simple answer and I'm lacking a good game concept, but I wonder if anyone has created such a game and how you dealt and played with it.
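
    For context, here is a minimal sketch - mine, not the poster's engine - of how one of these action definitions could be applied to a player's resources: onInit deducts the cost immediately and onComplete grants the reward once the action's time has elapsed.

        // Apply a property delta such as {gold:-75} or {energy:75} to the player.
        function applyDelta(player, delta) {
          for (var key in delta) {
            player[key] = (player[key] || 0) + delta[key];
          }
        }

        function startAction(player, action) {
          applyDelta(player, action.onInit);        // pay the cost up front
          setTimeout(function () {
            applyDelta(player, action.onComplete);  // grant the reward when the time is up
          }, action.time);
        }

        var player = { gold: 200, silver: 0, energy: 10 };
        startAction(player, { id: 101, onInit: { gold: -75 }, onComplete: { energy: 75 }, time: 2000, name: "Apple" });

    A separate controller that records each completed action id (the second option the poster mentions) would then have everything it needs to evaluate achievement rules such as "bought 10 apples".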

    Read the article

  • What defines a language as a scripting language? [closed]

    - by Mathew Foscarini
    Possible Duplicate: What is the main difference between Scripting Languages and Programming Languages? I'd like to know what defines a language as a scripting language compared to other programming languages. Some possible scripting languages might include AutoCAD LISP, Linux Bash, DOS Batch, JavaScript or ActionScript in Flash. Where is the distinction made that makes a language a scripting language? Is there a set of clearly defined rules to classify it as such?

    Read the article

  • How to: Apply themes using Server Object Model in SharePoint 2013 Preview

    - by panjkov
    One of the new functionalities introduced in SharePoint 2013 Preview is the new theming engine. Themes that are managed by this new engine don't use the Office Theme .thmx format and can't be created using PowerPoint like was the case before. The new themes are based on a set of XML files stored in the Theme Gallery "15" subfolder (at the relative path _catalogs/theme/15): .spcolor files, which define color palettes for components of the SharePoint interface, and .spfont files, which contain sets of predefined font schemes. There...(read more)

    Read the article

  • SOA Suite Integration: Part 3: Loading files

    - by Anthony Shorten
    One of the most common scenarios in SOA Integration is the loading of a file into the product from an external source. In Oracle SOA Suite there is a File Adapter that can process many file types into your BPEL process. For this example I will use the File Adapter to load a file of user and emails to update the user object within the Oracle Utilities Application Framework. Remember you can repeat this process with other objects and other file types. Again I am illustrating the ease of integration. The first thing is to create an empty BPEL process that will hold our flow. In Oracle JDeveloper this can be achieved by specifying the Define Service Later template (as other templates have predefined inputs and outputs and in this case we want to specify those). So I will create simpleFileLoad process to house our process. You will start with an empty canvas so you need to first specify the load part of the process using the File Adapter. Select the File Adapter from the Component Palette under BPEL Services and drag and drop it to the left side Partner Links (left is input). You name the Service. In this case I chose LoadFile. Press Next. We will define the interface as part of the wizard so select Define from operation and schema (specified later). Press Next. We are going to choose Read File to denote that we will read the file and specify the default Operation Name as Read. Press Next. The next step is to tell the Adapter the location of the files, how to process them and what to do with them after they have been processed. I am using hardcoded locations in this example but you can have logical locations as well. Press Next. I am now going to tell the adapter how to recognize the files I want to load. In my case I am using CSV files and more importantly I am tell the adapter to run the process for each record in the file it encounters. Press Next. Now, I tell the adapter how often I want to poll for the files. I have taken the defaults. Press Next. At this stage I have no explanation of the format of the input. So I am going to invoke the Native Format Wizard which will guide me through the process of creating the file input format. Clicking the purple cog icon will start the wizard. After an introduction screen (not shown), you specify the format of the input file. The File Adapter supports multiple format types. For this example, I will use Delimited as I am going to load a CSV file. Press Next. The best way for the wizard to work is with a sample. I have a sample file and the wizard will ask how much of the file to use as a template. I will use the defaults. Note: If you are using a language that has other languages other than US-ASCII, it is at this point you specify the character set to use.  Press Next. The sample contains multiple instances of a single record type. The wizard supports complex types as well. We will use the appropriate setting for our file. Press Next. You have to specify the file element and the record element. This will be used by the input wizard to translate the CSV data into an XML structure (this will make sense later). I am using LoadUsers as my file delimiter (root element) and User Record as my record root element. Press Next. As the file is CSV the delimiter is "," so I will also specify that the End Of Line (EOL) indicator indicates the end of a record. Press Next. Up until this point your have not given the columns their names. In my case my sample includes the column names in the first record. 
    This is not always the case, but you can specify the names and formats of columns in this dialog (not shown). Press Next. The wizard now generates the schema for the input file. You can specify a name for the schema; I have used userupdate.xsd. We want to verify the schema, so press Test. You can test the schema by specifying an input sample and pressing the green play button. You will see the delimiters you specified earlier for the file and the records. Press Ok to continue. A confirmation screen will be displayed showing you the location of the schema in your project. Press Finish to return to the File Adapter configuration. You will now see the schema and elements prepopulated from the wizard. Press Next. The File Adapter configuration is now complete. Press Finish. Now you need to receive the input from the LoadFile component, so we need to place a Receive node in the BPEL process by dragging and dropping the Receive component from the Component Palette under BPEL Constructs onto the BPEL process. We link the Receive process with the LoadFile component by dragging the leftmost connect node of the Receive node to the LoadFile component. Once the link is established you need to name the Receive node appropriately and, as in the post of the last part of this series, you need to generate input variables for the BPEL process to hold the input records in. You now need to add the product Web Service. The process is the same as described in the post of the last part of this series. You drop the Web Service BPEL Service onto the right side of the process and fill in the details of the WSDL URL. You also have to add an Invoke node to call the service and generate the input and output variables for the call in the Invoke node. Now, to get the inputs from the file to the service, you have to use a Transform (you can use an Assign action, but a Transform action is more flexible). You drag and drop the Transform component from the Component Palette under Oracle Extensions and place it between the Receive and Invoke nodes. We name the Transform node Mapper File and associate the source of the mapping with the schema from the Receive node; the output will be the input variable from the Invoke node. We now build the transform. We first map the user and email attributes by dragging and dropping the elements from the left to the right. The reason we needed to use the transform is that we will be telling the AS-User service that we want to issue an update action. Remember, when we registered the service we actually used Read as the default. If we do not otherwise inform the service to use the Update action, it will use the Read action instead (which is not desired). To specify the update action you need to click on the transactionType node on the right and select Set Text to set the action. You need to specify the transactionType of UPD (for update). The mapping is now complete. The final BPEL process is ready for deployment. You then deploy the BPEL process to the server and test the service by simply dropping a file, with the same pattern/name as you specified, in the directory you specified in the File Adapter. You will see each record as a separate instance entry in the Fusion Middleware Control console. You can now load files into the product. You can repeat this process for each type of file to process. While this was a simple example, it illustrates how loading data can be achieved using SOA Suite in conjunction with our products.
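
    For reference, with the settings described (comma-delimited, column names in the first row, file element LoadUsers, record element UserRecord), the input file would look something like this hypothetical sample - the user ids and email addresses are made up:

        user,email
        FRED01,fred.jones@example.com
        MARY02,mary.smith@example.com

    The Native Format translation treats LoadUsers as the file-level element and UserRecord as the per-line record element, and because the adapter was told to process the file record by record, the BPEL process is invoked once per data line.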

    Read the article

  • When are Private Clouds a Good Idea?

    This article is taken from the book "The Cloud at Your Service." The authors define the term private cloud and discuss issues to consider before opting for private clouds and concerns about deploying a private cloud.

    Read the article
