Search Results


  • Using Stub Objects

    - by user9154181
    Having told the long and winding tale of where stub objects came from and how we use them to build Solaris, I'd like to focus now on the nuts and bolts of building and using them. The following new features were added to the Solaris link-editor (ld) to support the production and use of stub objects:

    -z stub: This new command line option informs ld that it is to build a stub object rather than a normal object. In this mode, it accepts the same command line arguments as usual, but will quietly ignore any objects and sharable object dependencies.

    STUB_OBJECT Mapfile Directive: In order to build a stub version of an object, its mapfile must specify the STUB_OBJECT directive. When producing a non-stub object, the presence of STUB_OBJECT causes the link-editor to perform extra validation to ensure that the stub and non-stub objects will be compatible.

    ASSERT Mapfile Directive: All data symbols exported from the object must have an ASSERT symbol directive in the mapfile that declares them as data and supplies the size, binding, bss attributes, and symbol aliasing details. When building the stub objects, the information in these ASSERT directives is used to create the data symbols. When building the real object, these ASSERT directives ensure that the real object matches the linking interface presented by the stub.

    Although ASSERT was added to the link-editor in order to support stub objects, it is a general purpose feature that can be used independently of stub objects. For instance, you might choose to use an ASSERT directive if you have a symbol that must have a specific address in order for the object to operate properly and you want to automatically ensure that this will always be the case.

    The material presented here is derived from a document I originally wrote during the development effort, which had the dual goals of providing supplemental materials for the stub object PSARC case, and serving as a set of edits that were eventually applied to the Oracle Solaris Linker and Libraries Manual (LLM). The Solaris 11 LLM contains this information in a more polished form.

    Stub Objects

    A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be used at runtime. However, an application can be built against a stub object, where the stub object provides the real object name to be used at runtime, and then use the real object at runtime. When building a stub object, the link-editor ignores any object or library files specified on the command line, and these files need not exist in order to build a stub. Since the compilation step can be omitted, and because the link-editor has relatively little work to do, stub objects can be built very quickly.

    Stub objects can be used to solve a variety of build problems:

    Speed: Modern machines, using a version of make with the ability to parallelize operations, are capable of compiling and linking many objects simultaneously, and doing so offers significant speedups. However, it is typical that a given object will depend on other objects, and that there will be a core set of objects that nearly everything else depends on. It is necessary to impose an ordering that builds each object before any other object that requires it. This ordering creates bottlenecks that reduce the amount of parallelization that is possible and limits the overall speed at which the code can be built.
    Complexity/Correctness: In a large body of code, there can be a large number of dependencies between the various objects. The makefiles or other build descriptions for these objects can become very complex and difficult to understand or maintain. The dependencies can change as the system evolves. This can cause a given set of makefiles to become slightly incorrect over time, leading to race conditions and mysterious rare build failures.

    Dependency Cycles: It might be desirable to organize code as cooperating shared objects, each of which draws on the resources provided by the other. Such cycles cannot be supported in an environment where objects must be built before the objects that use them, even though the runtime linker is fully capable of loading and using such objects if they could be built.

    Stub shared objects offer an alternative method for building code that sidesteps the above issues. Stub objects can be quickly built for all the shared objects produced by the build. Then, all the real shared objects and executables can be built in parallel, in any order, using the stub objects to stand in for the real objects at link-time. Afterwards, the executables and real shared objects are kept, and the stub shared objects are discarded.

    Stub objects are built from a mapfile, which must satisfy the following requirements. The mapfile must specify the STUB_OBJECT directive. This directive informs the link-editor that the object can be built as a stub object, and as such causes the link-editor to perform validation and sanity checking intended to guarantee that an object and its stub will always provide identical linking interfaces. All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile. The mapfile must use symbol scope reduction ('*') to remove any symbols not explicitly listed from the external interface. All global data exported from the object must have an ASSERT symbol attribute in the mapfile to specify the symbol type, size, and bss attributes. In the case where there are multiple symbols that reference the same data, the ASSERT for one of these symbols must specify the TYPE and SIZE attributes, while the others must use the ALIAS attribute to reference this primary symbol.

    Given such a mapfile, the stub and real versions of the shared object can be built using the same command line for each, adding the '-z stub' option to the link for the stub object, and omitting the option from the link for the real object.

    To demonstrate these ideas, the following code implements a shared object named idx5, which exports data from a 5 element array of integers, with each element initialized to contain its zero-based array index. This data is available as a global array, via an alternative alias data symbol with weak binding, and via a functional interface.

    % cat idx5.c
    int _idx5[5] = { 0, 1, 2, 3, 4 };
    #pragma weak idx5 = _idx5

    int
    idx5_func(int index)
    {
            if ((index < 0) || (index > 4))
                    return (-1);
            return (_idx5[index]);
    }

    A mapfile is required to describe the interface provided by this shared object.

    % cat mapfile
    $mapfile_version 2
    STUB_OBJECT;
    SYMBOL_SCOPE {
            _idx5 {
                    ASSERT { TYPE=data; SIZE=4[5] };
            };
            idx5 {
                    ASSERT { BINDING=weak; ALIAS=_idx5 };
            };
            idx5_func;
        local:
            *;
    };

    The following main program is used to print all the index values available from the idx5 shared object.
    % cat main.c
    #include <stdio.h>

    extern int _idx5[5], idx5[5], idx5_func(int);

    int
    main(int argc, char **argv)
    {
            int i;

            for (i = 0; i < 5; i++)
                    printf("[%d] %d %d %d\n", i, _idx5[i], idx5[i], idx5_func(i));
            return (0);
    }

    The following commands create a stub version of this shared object in a subdirectory named stublib. elfdump is used to verify that the resulting object is a stub. The command used to build the stub differs from that of the real object only in the addition of the -z stub option, and the use of a different output file name. This demonstrates the ease with which stub generation can be added to an existing makefile.

    % cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o stublib/libidx5.so.1 -zstub
    % ln -s libidx5.so.1 stublib/libidx5.so
    % elfdump -d stublib/libidx5.so | grep STUB
        [11]  FLAGS_1           0x4000000           [ STUB ]

    The main program can now be built, using the stub object to stand in for the real shared object, and setting a runpath that will find the real object at runtime. However, as we have not yet built the real object, this program cannot yet be run. Attempts to cause the system to load the stub object are rejected, as the runtime linker knows that stub objects lack the actual code and data found in the real object, and cannot execute.

    % cc main.c -L stublib -R '$ORIGIN/lib' -lidx5 -lc
    % ./a.out
    ld.so.1: a.out: fatal: libidx5.so.1: open failed: No such file or directory
    Killed
    % LD_PRELOAD=stublib/libidx5.so.1 ./a.out
    ld.so.1: a.out: fatal: stublib/libidx5.so.1: stub shared object cannot be used at runtime
    Killed

    We build the real object using the same command as we used to build the stub, omitting the -z stub option, and writing the results to a different file.

    % cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o lib/libidx5.so.1

    Once the real object has been built in the lib subdirectory, the program can be run.

    % ./a.out
    [0] 0 0 0
    [1] 1 1 1
    [2] 2 2 2
    [3] 3 3 3
    [4] 4 4 4

    Mapfile Changes

    The version 2 mapfile syntax was extended in a number of places to accommodate stub objects.

    Conditional Input

    The version 2 mapfile syntax has the ability to conditionalize mapfile input using the $if control directive. As you might imagine, these directives are used frequently with ASSERT directives for data, because a given data symbol will frequently have a different size in 32 or 64-bit code, or on differing hardware such as x86 versus sparc. The link-editor maintains an internal table of names that can be used in the logical expressions evaluated by $if and $elif. At startup, this table is initialized with items that describe the class of object (_ELF32 or _ELF64) and the type of the target machine (_sparc or _x86). We found that there were a small number of cases in the Solaris code base in which we needed to know what kind of object we were producing, so we added the following new predefined items in order to address that need:

    Name        Meaning
    ...         ...
    _ET_DYN     shared object
    _ET_EXEC    executable object
    _ET_REL     relocatable object
    ...         ...

    STUB_OBJECT Directive

    The new STUB_OBJECT directive informs the link-editor that the object described by the mapfile can be built as a stub object.

    STUB_OBJECT;

    A stub shared object is built entirely from the information in the mapfiles supplied on the command line. When the -z stub option is specified to build a stub object, the presence of the STUB_OBJECT directive in a mapfile is required, and the link-editor uses the information in symbol ASSERT attributes to create global symbols that match those of the real object.
    When the real object is built, the presence of STUB_OBJECT causes the link-editor to verify that the mapfiles accurately describe the real object interface, and that a stub object built from them will provide the same linking interface as the real object it represents.

    All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile. The mapfile must use symbol scope reduction ('*') to remove any symbols not explicitly listed from the external interface.

    All global data in the object is required to have an ASSERT attribute that specifies the symbol type and size. If the ASSERT BINDING attribute is not present, the link-editor provides a default assertion that the symbol must be GLOBAL. If the ASSERT SH_ATTR attribute is not present, or does not specify that the section is one of BITS or NOBITS, the link-editor provides a default assertion that the associated section is BITS. All data symbols that describe the same address and size are required to have ASSERT ALIAS attributes specified in the mapfile. If aliased symbols are discovered that do not have an ASSERT ALIAS specified, the link fails and no object is produced.

    These rules ensure that the mapfiles contain a description of the real shared object's linking interface that is sufficient to produce a stub object with a completely compatible linking interface.

    SYMBOL_SCOPE/SYMBOL_VERSION ASSERT Attribute

    The SYMBOL_SCOPE and SYMBOL_VERSION mapfile directives were extended with a symbol attribute named ASSERT. The syntax for the ASSERT attribute is as follows:

    ASSERT {
            ALIAS = symbol_name;
            BINDING = symbol_binding;
            TYPE = symbol_type;
            SH_ATTR = section_attributes;
            SIZE = size_value;
            SIZE = size_value[count];
    };

    The ASSERT attribute is used to specify the expected characteristics of the symbol. The link-editor compares the symbol characteristics that result from the link to those given by ASSERT attributes. If the real and asserted attributes do not agree, a fatal error is issued and the output object is not created.

    In normal use, the link-editor evaluates ASSERT attributes when present, but does not require them or provide default values for them. The presence of the STUB_OBJECT directive in a mapfile alters the interpretation of ASSERT to require them under some circumstances, and to supply default assertions if explicit ones are not present. See the definition of the STUB_OBJECT Directive for the details. When the -z stub command line option is specified to build a stub object, the information provided by ASSERT attributes is used to define the attributes of the global symbols provided by the object.

    ASSERT accepts the following:

    ALIAS
    Name of a previously defined symbol that this symbol is an alias for. An alias symbol has the same type, value, and size as the main symbol. The ALIAS attribute is mutually exclusive to the TYPE, SIZE, and SH_ATTR attributes, and cannot be used with them. When ALIAS is specified, the type, size, and section attributes are obtained from the alias symbol.

    BINDING
    Specifies an ELF symbol binding, which can be any of the STB_ constants defined in <sys/elf.h>, with the STB_ prefix removed (e.g. GLOBAL, WEAK).

    TYPE
    Specifies an ELF symbol type, which can be any of the STT_ constants defined in <sys/elf.h>, with the STT_ prefix removed (e.g. OBJECT, COMMON, FUNC). In addition, for compatibility with other mapfile usage, FUNCTION and DATA can be specified, for STT_FUNC and STT_OBJECT, respectively.
    TYPE is mutually exclusive to ALIAS, and cannot be used in conjunction with it.

    SH_ATTR
    Specifies attributes of the section associated with the symbol. The section_attributes that can be specified are given in the following table:

    Section Attribute    Meaning
    BITS                 Section is not of type SHT_NOBITS
    NOBITS               Section is of type SHT_NOBITS

    SH_ATTR is mutually exclusive to ALIAS, and cannot be used in conjunction with it.

    SIZE
    Specifies the expected symbol size. SIZE is mutually exclusive to ALIAS, and cannot be used in conjunction with it. The syntax for the size_value argument is as described in the discussion of the SIZE attribute below.

    SIZE

    The SIZE symbol attribute existed before support for stub objects was introduced. It is used to set the size attribute of a given symbol. This attribute results in the creation of a symbol definition. Prior to the introduction of the ASSERT SIZE attribute, the value of a SIZE attribute was always numeric. While attempting to apply ASSERT SIZE to the objects in the Solaris ON consolidation, I found that many data symbols have a size based on the natural machine wordsize for the class of object being produced. Variables declared as long, or as a pointer, will be 4 bytes in size in a 32-bit object, and 8 bytes in a 64-bit object. Initially, I employed the conditional $if directive to handle these cases as follows:

    $if _ELF32
    foo { ASSERT { TYPE=data; SIZE=4 } };
    bar { ASSERT { TYPE=data; SIZE=20 } };
    $elif _ELF64
    foo { ASSERT { TYPE=data; SIZE=8 } };
    bar { ASSERT { TYPE=data; SIZE=40 } };
    $else
    $error UNKNOWN ELFCLASS
    $endif

    I found that this situation occurs frequently enough that this approach is cumbersome. To simplify this case, I introduced the idea of the addrsize symbolic name, and of a repeat count, which together make it simple to specify machine word scalar or array symbols. Both the SIZE and ASSERT SIZE attributes support this syntax: The size_value argument can be a numeric value, or it can be the symbolic name addrsize. addrsize represents the size of a machine word capable of holding a memory address. The link-editor substitutes the value 4 for addrsize when building 32-bit objects, and the value 8 when building 64-bit objects. addrsize is useful for representing the size of pointer variables and C variables of type long, as it automatically adjusts for 32 and 64-bit objects without requiring the use of conditional input. The size_value argument can be optionally suffixed with a count value, enclosed in square brackets. If count is present, size_value and count are multiplied together to obtain the final size value. Using this feature, the example above can be written more naturally as:

    foo { ASSERT { TYPE=data; SIZE=addrsize } };
    bar { ASSERT { TYPE=data; SIZE=addrsize[5] } };

    Exported Global Data Is Still A Bad Idea

    As you can see, the additional plumbing added to the Solaris link-editor to support stub objects is minimal. Furthermore, about 90% of that plumbing is dedicated to handling global data. We have long advised against global data exported from shared objects. There are many ways in which global data does not fit well with dynamic linking. Stub objects simply provide one more reason to avoid this practice. It is always better to export all data via a functional interface. You should always hide your data, and make it available to your users via a function that they can call to acquire the address of the data item.
    However, if you do have to support global data for a stub, perhaps because you are working with an already existing object, it is still easily done, as shown above. Oracle does not like us to discuss hypothetical new features that don't exist in shipping product, so I'll end this section with a speculation. It might be possible to do more in this area to ease the difficulty of dealing with objects that have global data that the users of the library don't need. Perhaps someday...

    Conclusions

    It is easy to create stub objects for most objects. If your library only exports function symbols, all you have to do to build a faithful stub object is to add STUB_OBJECT; and then to use the same link command you're currently using, with the addition of the -z stub option. Happy Stubbing!

    Read the article

  • Oracle VM networking under the hood and 3 new templates

    - by Chris Kawalek
    We have a few cool things to tell you about. First up: have you ever wondered what happens behind the scenes in the network when you Live Migrate your Oracle VM server workload? Or how Oracle VM implements the network infrastructure you configure through your point & click action in the GUI? Really… how do they do this? For an in-depth view of the Oracle VM for x86 networking model, look "Under the Hood" at Networking in Oracle VM Server for x86 with our best practices engineer in a blog post on OTN Garage.

    Next, making things simple in Oracle VM is what we strive every day to deliver to our user community. With that, we are pleased to bring you updates on three new Oracle Application templates:

    E-Business Suite 12.1.3 for Oracle Exalogic: Oracle VM templates for Oracle E-Business Suite 12.1.3 (x86 64-bit for Oracle Exalogic Elastic Cloud) contain all the required elements to create an Oracle E-Business Suite R12 demonstration system on an Exalogic server. You can use these templates to quickly build an EBS 12.1.3 demonstration environment, bypassing the operating system and the software install (via the EBS Rapid Install). For further details, please review the announcement.

    JD Edwards EnterpriseOne 9.1 and JD Edwards EnterpriseOne Tools 9.1.2.1 for x86 servers and Oracle Exalogic: The Oracle VM Templates for JD Edwards EnterpriseOne provide a method to rapidly install JD Edwards EnterpriseOne 9.1 and Tools 9.1.2.1. The complete stack includes Oracle Database 11g R2 and Oracle WebLogic Server 10.3.5 running on Oracle Linux 5. The templates can be installed to Oracle VM Server for x86 release 3.x and to the Oracle Exalogic Elastic Cloud.

    PeopleSoft PeopleTools 8.5.2.10 for Oracle Exalogic: This virtual deployment package delivers a "quick start" of the PeopleSoft middle-tier template on Oracle Linux for the Oracle Exalogic Elastic Cloud.

    And last, are you wondering why we talk about "fast" and "rapid" when we refer to using Oracle VM templates to virtualize Oracle applications? Read the Evaluator Group Lab Validation report quantifying speeds of deployment up to 10x faster than with VMware vSphere. Or you can also check out our on demand webcast Quantifying the Value of Application-Driven Virtualization.

    Read the article

  • What is Database Continuous Integration?

    - by David Atkinson
    Although not everyone is practicing continuous integration, many have at least heard of the concept. A recent poll on www.simple-talk.com indicates that 40% of respondents are employing the technique. It is widely accepted that the earlier issues are identified in the development process, the lower the cost of fixing them. The worst case scenario, of course, is for the bug to be found by the customer following the product release. A number of Agile development best practices have evolved to combat this problem early in the development process, including pair programming, code inspections and unit testing.

    Continuous integration is one such Agile concept that tackles the problem at the point of committing a change to source control (alternatively, it can be run on a regular schedule). This triggers a sequence of events that compiles the code and performs a variety of tests. Often the continuous integration process is regarded as a build validation test, and if issues were to be identified at this stage, the testers would simply not 'waste their time' touching the build at all. Such a 'broken build' will trigger an alert, and the development team's number one priority should be to resolve the issue.

    How application code is compiled and tested as part of continuous integration is well understood. However, this isn't so clear for databases. Indeed, before I cover the mechanics of implementation, we need to decide what we mean by database continuous integration. For me, database continuous integration can be implemented as one or more of the following:

    1) Your application code is being compiled and tested. You therefore need a database to be maintained at the corresponding version.
    2) Just as a valid application should compile, so should the database. It should therefore be possible to build a new database from scratch.
    3) Likewise, it should be possible to generate an upgrade script to take your already deployed databases to the latest version.

    I will be covering these in further detail in future blogs. In the meantime, more information can be found in the whitepaper linked off www.red-gate.com/ci If you have any questions, feel free to contact me directly or post a comment to this blog post.
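
    To make the second point concrete, here is a minimal sketch of a CI step that rebuilds a database from scratch by replaying source-controlled schema scripts against a throwaway database. The server name, script folder, and database name are placeholders, and the sketch assumes plain scripts without GO batch separators; it is one possible illustration, not the tooling discussed in the whitepaper.

    using System;
    using System.Data.SqlClient;
    using System.IO;
    using System.Linq;

    // Minimal CI step: drop, recreate and rebuild a scratch database from the
    // schema scripts kept in source control, failing the build on any SQL error.
    class BuildDatabaseFromScratch
    {
        static void Main()
        {
            // Placeholder values -- point these at the server and script folder your CI job uses.
            const string master = @"Server=(local);Database=master;Integrated Security=SSPI;";
            const string scriptFolder = @"C:\ci\database\schema";
            const string dbName = "CI_Scratch";

            using (var conn = new SqlConnection(master))
            {
                conn.Open();
                Execute(conn, "IF DB_ID('" + dbName + "') IS NOT NULL DROP DATABASE [" + dbName + "]");
                Execute(conn, "CREATE DATABASE [" + dbName + "]");
                Execute(conn, "USE [" + dbName + "]");

                // Apply each script in name order; assumes scripts contain no GO batch separators.
                foreach (var file in Directory.GetFiles(scriptFolder, "*.sql").OrderBy(f => f))
                {
                    Execute(conn, File.ReadAllText(file));
                    Console.WriteLine("Applied " + Path.GetFileName(file));
                }
            }
        }

        static void Execute(SqlConnection conn, string sql)
        {
            using (var cmd = new SqlCommand(sql, conn))
            {
                cmd.ExecuteNonQuery();
            }
        }
    }

    Running this on every commit gives an early warning that the schema in source control no longer builds, which is exactly the kind of broken build the CI process is meant to catch.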

    Read the article

  • Non-English Character Display in Oracle SQL Developer

    - by thatjeffsmith
    I get a variation on this question at least once a week, if not more frequently.

    "I'm from Israel, and the language on the databases is Hebrew. When I use the old and deprecated SQL*Plus (windows rich client) I can see the hebrew clearly, when I use the latest SQL Developer, I get gibberish."

    This question appears on the forums about every week or so as well. So what's the deal?

    Well, it starts with a basic misunderstanding of NLS Client parameters. These should accurately reflect the language and locality setup on your LOCAL machine. DO NOT COPY what's set in the database. These parameters work together with the database so that information can be transferred back and forth correctly. Having the wrong NLS parameters locally can be bad.

    [Oracle Docs] Setting the NLS_LANG parameter properly is essential to proper data conversion. The character set that is specified by the NLS_LANG parameter should reflect the setting for the client operating system. Setting NLS_LANG correctly enables proper conversion from the client operating system character encoding to the database character set. When these settings are the same, Oracle Database assumes that the data being sent or received is encoded in the same character set as the database character set, so character set validation or conversion may not be performed. This can lead to corrupt data if conversions are necessary.

    OK, so what are you supposed to do? Set the Font! 9 times out of 10, this preference fixes the problem with display issues. Make sure you set a font that supports the characters you're trying to display. It's as simple as that. This preference defines the font used to display characters in the editors and the data grids. If you have it set to a font that doesn't have Hebrew character support, you're not going to see Hebrew in SQL Developer. A few years ago… wow, like 15 years ago, I learned that the Tahoma font is pretty Unicode-friendly.

    Bad font selection: a font that's not non-English friendly. Good font selection: the exact same text, rendered with the Tahoma font.

    Summary: Having problems seeing non-English text in SQL Developer? Check the font! And do not start messing with NLS parameters without talking to your DBA first.

    Read the article

  • Announcement: Employee Info Starter Kit (v6.0–ASP.NET MVC Edition) is Released

    - by Mohammad Ashraful Alam
    Originally posted on: http://geekswithblogs.net/joycsharp/archive/2013/06/16/announcement-employee-info-starter-kit-v6.0asp.net-mvc-edition-is-released.aspx

    After a long wait, the next version of Employee Info Starter Kit is released! This starter kit is basically a project template that contains code samples targeting a specific technology, such as ASP.NET Web Forms, ASP.NET MVC, etc. Since its first release, this open source project has gained huge popularity in the developer community and has had 250K+ combined downloads. This starter kit is honored to be hosted on the official ASP.NET site, along with the other ASP.NET starter kits, which are all considered examples of the "best" ASP.NET coding standards recommended by Microsoft. EISK has been showcased on Microsoft's Channel 9 weekly show as well.

    The ASP.NET MVC Edition of the new version 6.0 bundles most of the greatest and most successful platforms, frameworks and technologies together, to enable web developers to learn and build manageable and high performance web applications with a rich user experience effectively and quickly.

    User End Specifications
    Creating a new employee record
    Read existing employee records
    Update an existing employee record
    Delete existing employee records
    Role based security model

    Key Technology Areas
    ASP.NET MVC 4
    Entity Framework 4.3.1
    SQL Server Compact Edition 4
    Visual Studio 2012

    QuickStart Guide
    Getting started with EISK 6.0 ASP.NET is pretty easy. Once you have Visual Studio 2012 installed, just follow the steps provided below:
    Download the EISK 6.0 MVC version.
    Extract the file.
    From the extracted folder, open the solution file "Eisk.MVC-VS2012.sln".
    Right click the "Eisk.MVC" project node and select "Set as StartUp Project".
    Hit Ctrl+F5 and explore!

    Architectural Overview
    Overall architecture is based on the Model-View-Controller pattern
    Support for desktop & mobile browsers
    Usage of the Domain Model, Repository and Unit of Work patterns from the Domain Driven Design approach
    Usage of Data Annotations in model (entity) classes to centralize the basic validation mechanism, which facilitates the DRY principle
    Usage of the IValidatableObject interface in model (entity) classes to isolate custom business logic from the application layer
    Usage of OOP inheritance and the Value Object pattern in model (entity) classes to provide reusability in the application architecture
    Usage of the View Model and Editor Model patterns to provide testable view rendering logic
    Several helper classes and extension methods to help developers build applications with less code

    If you want to learn more about it in detail, just check the following links: Getting Started - Hands on Coding Walkthrough - Technology Stack - Design & Architecture. Enjoy!
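
    As an illustration of the Data Annotations and IValidatableObject points above, here is a small hypothetical entity class; the property names and the business rule are invented for this example and are not taken from the actual EISK code base.

    using System;
    using System.Collections.Generic;
    using System.ComponentModel.DataAnnotations;

    // Hypothetical entity: declarative rules via Data Annotations, plus a custom
    // business rule isolated in IValidatableObject.Validate.
    public class Employee : IValidatableObject
    {
        [Required, StringLength(50)]
        public string FirstName { get; set; }

        [Required, StringLength(50)]
        public string LastName { get; set; }

        [DataType(DataType.Date)]
        public DateTime HireDate { get; set; }

        // Custom rule that doesn't map cleanly to a single attribute.
        public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
        {
            if (HireDate > DateTime.Today)
            {
                yield return new ValidationResult(
                    "Hire date cannot be in the future.", new[] { "HireDate" });
            }
        }
    }

    ASP.NET MVC evaluates both the attributes and the Validate method during model binding, so a controller action only needs to check ModelState.IsValid.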

    Read the article

  • Design pattern for an ASP.NET project using Entity Framework

    - by MPelletier
    I'm building a website in ASP.NET (Web Forms) on top of an engine with business rules (which basically resides in a separate DLL), connected to a database mapped with Entity Framework (in a 3rd, separate project). I designed the Engine first, which has an Entity Framework context, and then went on to work on the website, which presents various reports. I believe I made a terrible design mistake in that the website has its own context (which sounded normal at first). I present this mockup of the engine and a report page's code-behind.

    Engine (in separate DLL):

    public class Engine
    {
        DatabaseEntities _engineContext;

        public Engine()
        {
            // Connection string and procedure managed in DB layer
            _engineContext = DatabaseEntities.Connect();
        }

        public void ChangeSomeEntity(SomeEntity someEntity, int newValue)
        {
            // Suppose there's some validation too, non trivial stuff
            someEntity.Value = newValue;
            _engineContext.SaveChanges();
        }
    }

    And report:

    public partial class MyReport : Page
    {
        Engine _engine;
        DatabaseEntities _webpageContext;

        public MyReport()
        {
            _engine = new Engine();
            _webpageContext = DatabaseEntities.Connect();
        }

        public void ChangeSomeEntityButton_Clicked(object sender, EventArgs e)
        {
            SomeEntity someEntity;

            // Wrong way:
            // Get the entity from the webpage context
            someEntity = _webpageContext.SomeEntities.Single(s => s.Id == SomeEntityId);
            // Send the entity from _webpageContext to the engine
            _engine.ChangeSomeEntity(someEntity, SomeEntityNewValue); // <- oops, conflict of context

            // Right(?) way:
            // Get the entity from the engine context
            someEntity = _engine.GetSomeEntity(SomeEntityId); // undefined above
            // Send the entity from the engine's context to the engine
            _engine.ChangeSomeEntity(someEntity, SomeEntityNewValue);
        }
    }

    Because the webpage has its own context, giving the Engine an entity from a different context will cause an error. I happen to know not to do that, to only give the Engine entities from its own context. But this is a very error-prone design. I see the error of my ways now. I just don't know the right path. I'm considering:

    Creating the connection in the Engine and passing it off to the webpage. Always instantiate an Engine, make its context accessible from a property, sharing it. Possible problems: other conflicts? Slow? Concurrency issues if I want to expand to AJAX?
    Creating the connection from the webpage and passing it off to the Engine (I believe that's dependency injection?)
    Only talking through IDs. Creates redundancy, not always practical, sounds archaic. But at the same time, I already recuperate stuff from the page as IDs that I need to fetch anyway.

    What would be the best compromise here for safety, ease-of-use and understanding, stability, and speed?
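
    For what it's worth, a minimal sketch of the second option (creating the context in the page and passing it to the Engine, i.e. constructor injection) might look like this; it reuses the type names from the mockup above, and the one-context-per-request lifetime is an assumption rather than something stated in the question.

    // Sketch: the page creates one DatabaseEntities context per request and hands
    // it to the Engine, so both sides work with the same tracked entities.
    public class Engine
    {
        private readonly DatabaseEntities _context;

        public Engine(DatabaseEntities context)   // context supplied by the caller
        {
            _context = context;
        }

        public SomeEntity GetSomeEntity(int id)
        {
            return _context.SomeEntities.Single(s => s.Id == id);
        }

        public void ChangeSomeEntity(SomeEntity someEntity, int newValue)
        {
            // Validation would go here, as in the original mockup.
            someEntity.Value = newValue;
            _context.SaveChanges();
        }
    }

    public partial class MyReport : Page
    {
        private DatabaseEntities _context;
        private Engine _engine;

        protected void Page_Load(object sender, EventArgs e)
        {
            _context = DatabaseEntities.Connect();  // one context per request
            _engine = new Engine(_context);
        }

        public void ChangeSomeEntityButton_Clicked(object sender, EventArgs e)
        {
            // The entity and the Engine now share one context, so there is no conflict.
            var someEntity = _engine.GetSomeEntity(SomeEntityId);
            _engine.ChangeSomeEntity(someEntity, SomeEntityNewValue);
        }
    }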

    Read the article

  • links for 2011-01-04

    - by Bob Rhubart
    Webcasts (tags: ping.fm)
    Five Key Trends in Enterprise 2.0 for 2011 (Oracle Enterprise 2.0 Blog) Kellsey Ruppel shares insight from Oracle's Andy MacMillan. (tags: oracle otn enterprise2.0)
    Victor Bax: Lost in Service Oriented Architecture? "SOA is a concept, no more, no less. SOA is not a technology, or a piece of software. It is an architecture, a model." - Victor Bax (tags: oracle soa)
    Jan-Leendert: Oracle 11g SOA Suite read multi record data from csv file with the file adapter (master-detail) "The file adapter is a very powerful tool to read files with structured data. Most of the time you will read simple csv files with one record per row. But what if your csv file contains multiple records with different types?" - Jan-Leendert (tags: oracle soa soasuite)
    @myfear: Five ways to know how your data looked in the past. Entity Auditing. "Whatever requirements you have. I can promise you, that it will never be a simple solution. In general it's best to evaluate your purpose for auditing in detail." - Oracle ACE Director Markus Eisele (tags: oracle otn oracleace java)
    @fteter: Buffing Up The Crystal Ball "While I'm already tired of seeing these types of posts (I'm writing on New Year's Day), I'm also feeling guilty about not making my own set of predictions." - Oracle ACE Director Floyd Teter (tags: oracle otn oracleace ec2 cloud fusionmiddleware)
    @bex: ECM New Year's Resolutions "Happy new year! Most people use the first post of the year to go over their own blog statistics of popular posts... but since my blog's fiscal year ends in April, I decided to do new year's resolutions instead." - Oracle ACE Director Bex Huff (tags: oracle otn oracleace ecm enterprise2.0)
    Izaak de Hullu: Embedded Java in a 11g BPEL process "In an earlier blog my colleague Peter Ebell explained how you can create an extension of com.collaxa.cube.engine.ext.BPELXExecLet to do your coding in a regular Java environment so you have code completion and validation..." - Izaak de Hullu (tags: oracle otn bpel java soa)
    @gschmutz: Cannot access EM console after installing SOA Suite 11g PS2 Oracle ACE Director Guido Schmutz encounters a problem and shares the solution. (tags: oracle otn oracleace soa soasuite)

    Read the article

  • Best practice for marking a bug as resolved in Bugzilla ?

    - by Vincent B.
    I am wondering what is the best way to handle the situation of marking a bug as resolved and providing a version of the component/product in which this fix can be found.

    Context
    For a project I am working on, we are using Bugzilla for issue tracking, and we have the following:
    A product "A" with a version number like vA.B.C.D.
    This product "A" has the following components:
    Component "C1" with a version number like vA.B.C.D,
    Component "C2" with a version number like vA.B.C.D,
    Component "C3" with a version number like vA.B.C.D.
    Internally we keep track of which component versions have been used to generate the product A version vA.B.C.D. Example: Product "A" version v1.0.0.0 has been produced from component "C1" v1.0.0.3, component "C2" v1.3.0.0 and component "C3" v2.1.3.5. And Product "A" version v1.0.1.0 has been produced from component "C1" v1.0.0.4, component "C2" v1.3.0.0 and component "C3" v2.1.3.5. Each component is an SVN repository. The person in charge of generating the product "A" has access only to the different components' tags folders in SVN, and not to the trunk of each component repository.

    Problem
    Now the problem is the following: when a bug is found in the product "A", and the bug is related to component "C1", the version of product "A" is chosen (e.g. v1.0.0.0), and this version allows the developer to know which version of component "C1" has the bug (here it will be v1.0.0.3). A bug report is created. Now let's say that the developer responsible for component "C1" corrects the bug; then when the bug seems to be fixed, and after some test and validation, the developer generates a new tag for component "C1", with the version v1.0.0.4. At this time, the developer of component "C1" needs to update the bug report, but what is the best thing to do:
    Mark the bug as resolved/fixed and add a comment saying "This bug has been fixed in the tag v1.0.0.4 of the C1 component"?
    Keep the bug as assigned, and add a comment saying "This bug has been fixed in the tag v1.0.0.4 of the C1 component; update this bug's status to resolved for the next version of the product that will be generated with the newest version (v1.0.0.4 of C1)"?
    Another possible way to deal with this problem?
    Right now the problem is that when a product component CX is fixed, it is not certain in which future version of the product A it will be included, so it is not possible for me to say in which version of the product it will be solved, but it is possible to say in which version of the component CX it has been solved. So when do we need to mark a bug as solved: when the product A version includes the fixed version of CX, or only when the CX component has been fixed?

    Thanks for your personal feedback and ideas about this!

    Read the article

  • ASP.NET Membership Password Hash -- .NET 3.5 to .NET 4 Upgrade Surprise!

    - by David Hoerster
    I'm in the process of evaluating how my team will upgrade our product from .NET 3.5 SP1 to .NET 4. I expected the upgrade to be pretty smooth with very few, if any, upgrade issues. To my delight, the upgrade wizard said that everything upgraded without a problem. I thought I was home free, until I decided to build and run the application. A big problem was staring me in the face -- I couldn't log on.

    Our product is using a custom ASP.NET Membership Provider, but essentially it's a modified SqlMembershipProvider with some additional properties. And my login was failing during the OnAuthenticate event handler of my ASP.NET Login control, right where it was calling my provider's ValidateUser method. After a little digging, it turns out that the password hash that the membership provider was using to compare against the stored password hash in the membership database tables was different. I compared the password hash from the .NET 4 code line, and it was a different generated hash than my .NET 3.5 code line. (Tip -- when upgrading, always keep a valid debug copy of your app handy in case you have to step through a lot of code.) So it was a strange situation, but at least I knew what the problem was. Now the question was, "Why was it happening?"

    Turns out that a breaking change in .NET 4 is that the default hash algorithm changed to SHA256. Hey, that's great -- stronger hashing algorithm. But what do I do with all the hashed passwords in my database that were created using SHA1? Well, you can make two quick changes to your app's web.config and everything will be OK. Basically, you need to override the default HashAlgorithmType property of your membership provider. Here are the two places to do that:

    1. At the beginning of your <system.web> element, add the following <machineKey> element:

    <system.web>
        <machineKey validation="SHA1" />
        ...
    </system.web>

    2. On your <membership> element under <system.web>, add the following hashAlgorithmType attribute:

    <system.web>
        <membership defaultProvider="myMembership" hashAlgorithmType="SHA1">
        ...
    </system.web>

    After that, you should be good to go! Hope this helps.
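
    If you want to double-check which algorithm the membership system ends up using after a change like this, you can read the static Membership.HashAlgorithmType property at runtime; the page below is just an illustrative diagnostic, not part of the original fix.

    using System;
    using System.Web.Security;
    using System.Web.UI;

    // Illustrative diagnostic page: prints the hash algorithm the membership
    // system resolved from web.config (e.g. "SHA1" after the changes above).
    public partial class HashCheck : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            Response.Write("Membership hash algorithm: " + Membership.HashAlgorithmType);
        }
    }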

    Read the article

  • Helper method to Replace/Remove characters that do not match the Regular Expression

    - by Michael Freidgeim
    I have a few fields that use regex for validation. In case a provided field has unaccepted characters, I don't want to reject the whole field, as most validators do, but just remove the invalid characters. I expect to keep only character classes for allowed characters, and I created a helper method to strip unaccepted characters. The allowed pattern should be in regex format and is expected to be wrapped in square brackets. The function inserts a caret ('^') after the opening square bracket, according to http://stackoverflow.com/questions/4460290/replace-chars-if-not-match ([^ ] at the start of a character class negates it - it matches characters not in the class). I anticipate that it may not work for all regexes describing valid character sets, but it works for the relatively simple sets that we are using.

    /// <summary>
    /// Replaces not expected characters.
    /// </summary>
    /// <param name="text">The text.</param>
    /// <param name="allowedPattern">The allowed pattern in Regex format, expected to be wrapped in brackets</param>
    /// <param name="replacement">The replacement.</param>
    /// <returns></returns>
    // http://stackoverflow.com/questions/4460290/replace-chars-if-not-match
    // http://stackoverflow.com/questions/6154426/replace-remove-characters-that-do-not-match-the-regular-expression-net
    // [^ ] at the start of a character class negates it - it matches characters not in the class.
    // Replace/Remove characters that do not match the Regular Expression
    static public string ReplaceNotExpectedCharacters(this string text, string allowedPattern, string replacement)
    {
        allowedPattern = allowedPattern.StripBrackets("[", "]");
        // [^ ] at the start of a character class negates it - it matches characters not in the class.
        var result = Regex.Replace(text, @"[^" + allowedPattern + "]", replacement);
        return result;
    }

    static public string RemoveNonAlphanumericCharacters(this string text)
    {
        var result = text.ReplaceNotExpectedCharacters(NonAlphaNumericCharacters, "");
        return result;
    }

    public const string NonAlphaNumericCharacters = "[a-zA-Z0-9]";

    There are a couple of functions from my StringHelper class (http://geekswithblogs.net/mnf/archive/2006/07/13/84942.aspx) that are used here.

    /// <summary>
    /// StripBrackets checks that the string starts with sStart and ends with sEnd (case sensitive).
    /// If yes, it removes sStart and sEnd.
    /// Otherwise it returns the full string unchanged.
    /// See also MidBetween.
    /// </summary>
    public static string StripBrackets(this string str, string sStart, string sEnd)
    {
        if (CheckBrackets(str, sStart, sEnd))
        {
            str = str.Substring(sStart.Length, (str.Length - sStart.Length) - sEnd.Length);
        }
        return str;
    }

    public static bool CheckBrackets(string str, string sStart, string sEnd)
    {
        bool flag1 = (str != null) && (str.StartsWith(sStart) && str.EndsWith(sEnd));
        return flag1;
    }

    public static string WrapBrackets(string str, string sStartBracket, string sEndBracket)
    {
        StringBuilder builder1 = new StringBuilder(sStartBracket);
        builder1.Append(str);
        builder1.Append(sEndBracket);
        return builder1.ToString();
    }
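
    To show how these helpers are intended to be called, here is a small hypothetical usage snippet; the input strings and expected outputs are made up for illustration, and it assumes the methods above are defined in a static extension class that is in scope.

    using System;

    class StringHelperDemo
    {
        static void Main()
        {
            // Assumes the extension methods above live in a static class that is in scope.
            string raw = "user+name_42!";

            // Keep only the characters allowed by the [a-zA-Z0-9] class.
            string cleaned = raw.RemoveNonAlphanumericCharacters();
            Console.WriteLine(cleaned);   // "username42"

            // Same idea with a custom allowed pattern: letters, digits, '.' and '-'.
            string host = "my_host-01.example.com ".ReplaceNotExpectedCharacters("[a-zA-Z0-9.-]", "");
            Console.WriteLine(host);      // "myhost-01.example.com"
        }
    }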

    Read the article

  • What do you think of the EntLib 5.0 configuration tool?

    Hello again! It's been a while, I know. I've been busy over the last few months with several projects, some of them software related, and one of them human: my son Jesse was born on 26 February 2010. Fun times! Meanwhile, back in Redmond, the p&p team has been busy working on Enterprise Library 5.0; see Grigori's announcement for details on the beta. There's a ton of new stuff in this release, but there's one big new feature that hasn't received a lot of attention that I'm keen to hear your perspectives on. The change is the biggest overhaul to the configuration tool since Enterprise Library was launched. If you haven't yet grabbed the EntLib 5.0 beta, here's a before and after shot of the config tool:

    Enterprise Library 4.1 config tool
    Enterprise Library 5.0 (beta 1) config tool

    The tool has been rebuilt from the ground up in response to some feedback and usability studies from the previous version of the tool. But is this a step in the right direction? I'd love to hear what you think. If you've downloaded EntLib 5.0 and tried out the tool, please share your thoughts on:

    First impressions. Is the tool easy to understand? Easy to find what you're looking for? Easy to read existing configuration? Pretty?

    Ease of use for real life tasks. Rather than make up your own tasks, here are a few sample scenarios you might want to try:
    Configure the data access block with a SQL Server connection called Audit that points to a database called Audit on a server called DB
    Configure the logging block so that any log entries in the Audit category are written to both the Event Log and the Audit database (see above)
    Configure the validation block with a ruleset called Email Address that uses an appropriate regular expression for e-mail addresses
    Configure the policy injection block such that any calls to classes in the MyCompany.Security namespace are logged before and after the call using the Audit category (see above)

    Comparison with the old config tool. What do you like better in the new tool? What did you like better in the old tool? How do you rate your level of expertise using the old tool?

    Keep in mind that I no longer work in the p&p team, so I can't say how any of this feedback will be used (although I'm sure the team is listening!). However, since I've invested so much time in Enterprise Library, both in leading the team and using the product on real projects, I'm very interested to hear what you all think of the tool's new direction.

    Read the article

  • Developer – Cross-Platform: Fact or Fiction?

    - by Pinal Dave
    This is a guest blog post by Jeff McVeigh. Jeff McVeigh is the general manager of Performance Client and Visual Computing within Intel's Developer Products Division. His team is responsible for the development and delivery of leading software products for performance-centric application developers spanning Android*, Windows*, and OS X* operating systems. During his 17-year career at Intel, Jeff has held various technical and management positions in the fields of media, graphics, and validation. He also served as the technical assistant to Intel's CTO. He holds 20 patents and a Ph.D. in electrical and computer engineering from Carnegie Mellon University.

    It's not a homogenous world. We all know it. I have a Windows* desktop, a MacBook Air*, an Android phone, and my kids are 100% Apple. We used to have 2.5 kids, now we have 2.5 devices. And we all agree that diversity is great, unless you're a developer trying to prioritize the limited hours in the day. Then it's a series of trade-offs. Do we become brand loyalists for Google or Apple or Microsoft? Do we specialize on phones and tablets or still consider the 300M+ PC shipments a year when we make our decisions on where to spend our time and resources? We weigh the platform options, monetization opportunities, APIs, and distribution models. Too often, I see developers choose one platform, or write to the lowest common denominator, which limits their reach and market success. But who wants to be "me too"?

    Cross-platform coding is possible in some environments, for some applications, for some level of innovation, but it's not all-inclusive, yet. There are some tricks of the trade to develop cross-platform, including using languages and environments that "run everywhere." HTML5 is today's answer for web-enabled platforms. However, it's not a panacea, especially if your app requires the ultimate performance or native UI look and feel. There are other cross-platform frameworks that address the presentation layer of your application. But for those apps that have a preponderance of native code (e.g., highly-tuned C/C++ loops), there aren't tons of solutions today to help with code reuse across these platforms using consistent tools and libraries. As we move forward with interim solutions, they'll improve and become more robust, based, in no small part, on our input.

    What's your answer to the cross-platform challenge? Are you fully invested in HTML5 now? What are your barriers? What's your vision to navigate the cross-platform landscape? Here is the link where you can head next and learn more about how to answer the questions I have asked: https://software.intel.com/en-us

    Republished with permission from here. Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • CAPTCHA blocking for my scraping script?

    - by Surabhil Sergy
    I am working on a scraping project which involves getting web data and parsing it for further use. I have been working with PHP and cURL to make scraping scripts which crawl web data, and I make use of either PHP DOM or the Simple HTML DOM Parser library for these kinds of projects. On a recent project I encountered some challenges; initially I found the target website blocked my server IP such that the server could not make any successful requests to the site. Understanding these issues as common, I bought a set of private proxies and tried to make request calls using them. Though this could get a successful response, I noticed the script is getting some kind of block after 2-3 continuous requests. On printing and checking the response I could see a pop-up asking for CAPTCHA validation. I could not see any captcha characters to be entered, and it also shows an error "input error: invalid referrer". On examining the source I could see some Google reCAPTCHA scripts within. I'm stuck at this point and I'm not able to execute my script. My script is used for gathering data and it needs to go through a large number of pages periodically over the site. But in the current scenario I am not able to proceed with my script. I could see there are some options to overcome these captcha issues, and scraping these kinds of sites is common too. I have been checking my script performance and responses over the last two months. I could see that during the first month I was able to execute a very large number of requests from a single IP and I was able to get results. Later I got an IP block and used private proxies, which got me some results. Now I am facing the captcha trouble. I would appreciate any help or suggestions in this regard. (Often on this kind of question I used to get a first comment like, 'Have you asked for prior permission from the target?' I haven't, but I know there are many sites doing so to get details out of sites, and target sites may not often give access to them. I respect the legality and scraping etiquette, but I would like to know at what point I am stuck and how I could overcome that!) I could provide any supporting information if needed.

    Read the article

  • JUDCon 2013 Trip Report

    - by reza_rahman
    JUDCon (JBoss Users and Developers Conference) 2013 was held in historic Boston on June 9-11 at the Hynes Convention Center. JUDCon is the largest get together for the JBoss community; it has gone global in recent years but has its roots in Boston. The JBoss folks graciously accepted a Java EE 7 talk from me and actually referenced my talk in their own sessions. I am proud to say this is my third time speaking at JUDCon/the Red Hat Summit over the years (this was the first time on behalf of Oracle). I had great company with many of the rock stars of the JBoss ecosystem speaking, such as Lincoln Baxter, Jay Balunas, Gavin King, Mark Proctor, Andrew Lee Rubinger, Emmanuel Bernard and Pete Muir. Notably missing from JUDCon were Bill Burke, Burr Sutter, Aslak Knutsen and Dan Allen. Topics included Java EE, Forge, Arquillian, AeroGear, OpenShift, WildFly, Errai/GWT, NoSQL, Drools, jBPM, OpenJDK, Apache Camel and JBoss Tools/Eclipse.

    My session titled "JavaEE.Next(): Java EE 7, 8, and Beyond" went very well and it was a full house. This is our main talk covering the changes in JMS 2, the Java API for WebSocket (JSR 356), the Java API for JSON Processing (JSON-P), JAX-RS 2, JPA 2.1, JTA 1.2, JSF 2.2, Java Batch, Bean Validation 1.1, Java EE Concurrency and the rest of the APIs in Java EE 7. I also briefly talked about the possibilities for Java EE 8. The slides for the talk are here: JavaEE.Next(): Java EE 7, 8, and Beyond from reza_rahman.

    Besides presenting my talk, it was great to catch up with the JBoss gang and attend a few interesting sessions. On Sunday night I went to one of my favorite hangouts in Boston - the exalted Middle East Club as Rolling Stone refers to it (another cool spot in an otherwise pretty boring town is "the Church"). As contradictory as it might sound to the uninitiated, the Middle East Club is possibly the best place in Boston to simultaneously get great Middle Eastern (primarily Lebanese) food and great underground metal. For folks with a bit more exposure, this is probably not contradictory at all given bands like Acrassicauda and documentaries like Heavy Metal in Baghdad. Luckily for me they were featuring a few local Thrash metal bands from the greater Boston area. It wasn't too bad considering it was primarily amateur twenty-something guys (although I'm not sure I'm a qualified critic any more since I all but stopped playing at about that age). It's great Boston has the Middle East as an incubator to keep the rock, metal, folk, jazz, blues and indie scene alive. I definitely enjoyed JUDCon/Boston and hope to be part of the conference next year again.

    Read the article

  • JavaOne: Parleys.com, Spring Vs. Java EE and HTML5 tooling

    - by delabassee
    Parleys.com, a 2012 Duke's Choice Award winner, is an e-learning platform that hosts content from different sources (conferences, JUG meetings, etc.). There is a lot of technical content available for both online and offline consumption, including many sessions on Java EE. Parleys has just released, for free, all the Devoxx 2011 sessions (video and slides sync'ed!). From a technical point of view, Parleys.com is interesting as they have switched from Spring to Java EE 6 to avoid being locked into a proprietary framework. During the GlassFish Community BoF, Stephan Janssen (Parleys.com and Devoxx founder) also presented how GlassFish is used to support 2000 concurrent Parleys users over a cluster of 2 GlassFish instances. Talking about Java EE and/or Spring, Harshad Oak has posted an update on the 'Spring Vs. Java EE' panel discussion that took place on Tuesday. As Arun said, standards such as Java EE do not necessarily restrain innovation: "JBoss Forge & Arquillian from RedHat are great examples of innovation in the JavaEE community. Standardization is important but innovation does continue even within that framework." Simplicity and productivity, along with HTML5, are the driving themes of Java EE 7. In terms of simplicity and productivity, the developer experience can also be improved by the tooling. Every NetBeans release comes with a large set of improvements, and the just-released NetBeans 7.3 beta is no exception. The goal of NetBeans 7.3's Project Easel is to improve HTML5 development, something that will be handy for Java EE 7 developers. Project Easel can, for example, communicate directly with Chrome's WebKit engine; this feature was shown during Sunday's Technical Keynote at the end of the Java EE section. In this beta release, Chrome and the embedded JavaFX browser are the only supported browsers, but the NetBeans team plans to add support, over time, for other WebKit-based browsers. See the NetBeans 7.3 beta and the NetBeans 7.3 screencasts. Today (i.e. Wednesday 3rd) is also the final exhibition day, so make sure to visit the Java EE and GlassFish pods on the Java DEMOgrounds (Hilton Grand Ballroom, 9:30 am - 5:00 pm). Finally, here are some Java EE and GlassFish related activities worth attending today if you are at JavaOne:

    Wednesday, October 3rd
    8:30-9:30am - What's New in Servlet 3.1: An Overview - Parc 55, Mission
    8:30-9:30am - Bean Validation 1.1: What's New Under the Hood - Parc 55, Cyril Magnin II/III
    10:00-11:00am - JSR 353: Java API for JSON Processing - Parc 55, Mission
    10:00am-12:00pm - Tutorial: Integrating Your Service into the GlassFish PaaS Platform - Parc 55, Devisidero
    11:30am-12:30pm - What's New in JSF: A Complete Tour of JSF 2.2 - Parc 55, Cyril Magnin I
    11:30am-12:30pm - Best of Both Worlds: Java Persistence with NoSQL and SQL - Parc 55, Mission
    1:00-2:00pm - Sharding Middleware to Achieve Elasticity and High Availability in the Cloud - Parc 55, Market Street
    1:00-2:00pm - Pimp My RESTful Java Applications - Parc 55, Cyril Magnin I
    3:00-4:00pm - Migrating Spring to Java EE - Parc 55, Cyril Magnin II/III
    4:30-5:30pm - JavaEE.Next(): Java EE 7, 8, and Beyond - Parc 55, Cyril Magnin II/III
    4:30-5:30pm - HTML5 WebSocket and Java - Parc 55, Cyril Magnin I
    4:30-5:30pm - Easy Middleware for Your Embedded Device - Nikko Ballroom II/III
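    As a quick taste of one of the sessions listed above, here is a minimal, hedged sketch of the JSR 353 (Java API for JSON Processing) object model API; it assumes a JSON-P implementation such as the javax.json reference implementation is on the classpath, and the values used are made up for illustration.

        import java.io.StringReader;
        import javax.json.Json;
        import javax.json.JsonArray;
        import javax.json.JsonObject;

        // Minimal JSR 353 (JSON-P) sketch: build a JSON object with the builder API,
        // then read it back with the object model reader API. Values are made up.
        public class JsonpDemo {
            public static void main(String[] args) {
                JsonObject session = Json.createObjectBuilder()
                        .add("title", "JSR 353: Java API for JSON Processing")
                        .add("room", "Parc 55, Mission")
                        .add("speakers", Json.createArrayBuilder().add("TBD"))
                        .build();

                // Round-trip through text to show the reader side of the API.
                JsonObject parsed = Json.createReader(
                        new StringReader(session.toString())).readObject();
                JsonArray speakers = parsed.getJsonArray("speakers");
                System.out.println(parsed.getString("title") + " @ "
                        + parsed.getString("room") + " (" + speakers.size() + " speaker(s))");
            }
        }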

    Read the article

  • Web Development Goes Pre-Visual InterDev

    - by Ken Cox [MVP]
    As a longtime and hardcore ASP.NET webforms developer, I’m finding the new client-side development world a bit of a grind.  I love learning new technologies, but I can’t help feeling we’ve regressed and lost our old RAD advantage as we move heavy lifting to the client. For my latest project, I’m using Telerik’s KendoUI in Visual Studio 2012. To say I feel clumsy writing this much JavaScript is an understatement. It seems like the only safe way to ‘write’ this code is by copying a working snippet from someone else and pasting it into my HTML page.  For me, JavaScript has largely been for small UI tasks like client-side validation and a bit of AJAX – and often emitted by a server-side control. I find myself today lost in nests of curly braces that Ctrl+K, Ctrl+D doesn’t seem to understand that well either. IntelliSense, my old syntax saviour, doesn’t seem to have kept up with this cobweb of code either. Code completion? Not seeing it. As I fumbled about this evening, I thought about how web development rocketed forward when Microsoft introduced Visual InterDev. Its Design-Time Controls (DTCs) changed the way we created sites. All the iterations of Visual Studio have enhanced that server-side experience where you let a tool write the bulk of the code and manually finesse it from there. What happened? Why am I typing  properties and values (especially default values!) into VS 2012 to get a client-side grid on a page? Where are the drag and drop objects that traditionally provided 70 percent of the mark-up and configuration?  Did we forget how to write Property Pages where you enter a value and the correct syntax appears magically in the source code? To me, the tooling was looking the other way as the scene shifted from server-side code to nimble client-side script. It’ll have to catch up. Although JavaScript is the lingua franca of web browsers, the language is unwieldy, tough to maintain, and messy to debug. If a .NET JIT compiler can turn our VB, F#, and C# source code into an Intermediate Language that executes on a computer, I don’t see why there can’t be a client-side compiler that turns a .NET language into JavaScript that browsers can consume.

    Read the article

  • Agile Documentation

    - by Nick Harrison
    We all know that one of the premises of the Agile Manifesto is to value working software over comprehensive documentation. This is a wonderful idea and it takes a tremendous burden off of project implementations. I have seen as many projects fail because of the maintenance weight of the project documentation as I have for any other reason. But this goal, as important as it is, may not always be practical. Sometimes the client will simply insist on tedious documentation despite the arguments against it. This may be to calm a nervous client. This may be to satisfy an audit or compliance requirement. This may be a none-too-subtle attempt at sabotaging the project. OK, it is probably not an all-out attempt to sabotage the project, but it will probably feel that way. So what can we do to keep to the spirit of the Agile Manifesto but still meet the needs of the client wanting the documentation? This is a good question that I have been puzzling over lately! I hope to explore some possible answers more fully here. A common theme that my solutions are likely to follow is the same theme that I often follow when simplifying complex business logic: make it table driven! My thought is that the sought-after documentation could be a report or reports out of a metadata repository. Reports are much easier to maintain than hand-written documentation. Here are a few additional advantages that we can explore over time: reports can take advantage of the fact that different people have different needs and different format requirements; reports and the supporting metadata are more easily validated, and the validation can be automated; and if the application itself uses this metadata then there never has to be a question as to whether or not the metadata is up to date - it is up to date or the application would not work. In many cases we should be able to automatically gather most of the metadata that we need using reflection, system tables, etc. (see the sketch below). I think that this will lower the total cost of ownership for the documentation and may provide something useful beyond having a pretty document to look at. What are your thoughts?
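    As a minimal sketch of that last point, the snippet below uses reflection to print a simple data-dictionary style report for a handful of classes; the types listed are placeholders, and a real build would feed in the application's own domain types (or pull equivalent details from database system tables).

        import java.lang.reflect.Field;
        import java.lang.reflect.Modifier;
        import java.util.List;

        // Minimal sketch: a data-dictionary style "documentation report" generated by
        // reflection. The classes listed are placeholders for real domain types.
        public class MetadataReport {
            public static void main(String[] args) {
                List<Class<?>> types = List.of(String.class, Thread.class); // placeholders

                for (Class<?> type : types) {
                    System.out.println("== " + type.getName() + " ==");
                    for (Field field : type.getDeclaredFields()) {
                        System.out.printf("  %-30s %-25s %s%n",
                                field.getName(),
                                field.getType().getSimpleName(),
                                Modifier.toString(field.getModifiers()));
                    }
                    System.out.println();
                }
            }
        }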

    Read the article

  • More on Map Testing

    - by Michael Stephenson
    I have been chatting with Maurice den Heijer recently about his CodePlex project, the BizTalk Map Testing Framework (http://mtf.codeplex.com/). Some of you may remember the article I did for BizTalk 2009 and 2006 about how to test maps, but with Maurice's project he is effectively looking at how to improve productivity and quality by building some useful testing features into the framework to simplify the process of testing maps. As part of our discussion we realized that we both had slightly different approaches to how we validate the output from the map. Put simply, Maurice does some XPath validation of the data in various nodes, whereas my approach for most standard cases is to use serialization, which allows you to validate the output using normal MSTest assertions. I'm not really going to go into the pros and cons of each approach because I think there is a place for both, and I'm sure others have various approaches which work too. What would be great is for the map testing framework to provide support for different ways of testing, covering everything from simple cases to some very specialized scenarios. So, as agreed with Maurice, I have put together the sample discussed in the rest of this article to show how we can use the serialization approach to create and compare the input and output from a map in normal development testing.

    Prerequisites: one of the common patterns I usually implement when developing BizTalk solutions is to use xsd.exe to create .NET classes for most of the schemas used within the solution. In the testing pattern I will take advantage of these .NET classes.

    The Map: in this sample the map we will use is very simple and just concatenates some data from the input message to the output message. Hopefully the picture below illustrates this well.

    The Test: in the test I'm basically taking the following actions:
    1. Use the .NET class generated from the schema to create an input message for the map.
    2. Serialize the input object to a file.
    3. Run the map from .NET using the standard BizTalk test method which was generated for running the map.
    4. Deserialize the output file from the map execution to a .NET class representing the output schema.
    5. Use MSTest assertions to validate things about the output message.
    The picture below shows this. As you can see, the code for this is pretty simple and it's all strongly typed, which means changes to my schema which can affect the tests are easily picked up as compilation errors. I can then choose to have one test which validates most of the output from the map, or to have many specific tests covering individual scenarios within the map.

    Summary: hopefully this post illustrates a powerful yet simple way of effectively testing many BizTalk mapping scenarios. I will probably have more conversations with Maurice about these approaches and perhaps some of the above will be included in the mapping test framework. The sample can be downloaded from here: http://cid-983a58358c675769.office.live.com/self.aspx/Blog%20Samples/More%20Map%20Testing/MapTestSample.zip
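    The original sample is .NET (xsd.exe-generated classes, the generated BizTalk test method and MSTest) and is not reproduced here. Purely as a hedged, language-neutral illustration of the same serialize / transform / deserialize / assert pattern, the sketch below uses Java, with JAXB classes standing in for the schema-generated types and an XSLT stylesheet standing in for the map; the Input/Output classes and the map.xslt path are assumptions for illustration only, and on Java 11 or later the JAXB runtime must be added as a dependency.

        import java.io.File;
        import java.io.StringReader;
        import java.io.StringWriter;
        import javax.xml.bind.JAXB;
        import javax.xml.bind.annotation.XmlRootElement;
        import javax.xml.transform.Transformer;
        import javax.xml.transform.TransformerFactory;
        import javax.xml.transform.stream.StreamResult;
        import javax.xml.transform.stream.StreamSource;

        // Java analogue of the serialize / transform / deserialize / assert pattern.
        // Input, Output and map.xslt are illustrative stand-ins, not the original sample.
        public class MapRoundTripTest {

            @XmlRootElement public static class Input  { public String firstName; public String lastName; }
            @XmlRootElement public static class Output { public String fullName; }

            public static void main(String[] args) throws Exception {
                // 1. Build the input message as a strongly typed object.
                Input in = new Input();
                in.firstName = "Jane";
                in.lastName = "Doe";

                // 2. Serialize it to XML.
                StringWriter inputXml = new StringWriter();
                JAXB.marshal(in, inputXml);

                // 3. Run the "map" (an XSLT stylesheet standing in for it here).
                Transformer map = TransformerFactory.newInstance()
                        .newTransformer(new StreamSource(new File("map.xslt"))); // assumed path
                StringWriter outputXml = new StringWriter();
                map.transform(new StreamSource(new StringReader(inputXml.toString())),
                              new StreamResult(outputXml));

                // 4. Deserialize the output and 5. assert on it in a strongly typed way.
                Output out = JAXB.unmarshal(new StringReader(outputXml.toString()), Output.class);
                if (!"Jane Doe".equals(out.fullName)) {
                    throw new AssertionError("Unexpected map output: " + out.fullName);
                }
                System.out.println("Map produced the expected output: " + out.fullName);
            }
        }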

    Read the article

  • Embracing Imperfection

    - by Johnm
    The pursuit of perfection is a road on which we often find ourselves traveling. It is an unpaved road filled with pot-holes and ruts that often destroy our stride. The shoulders of this road are lined with the bones and rotting carcasses of well-planned projects, solutions and dreams of others who have dared the journey. Often the choice to engage in this travel is a compulsive one. We can't help but pack our bags and make the trip. We justify it by equating it to the delivery of a quality product or service. We use our past travels as validation of our worthiness and value. Our shared experience, as tortured pilgrims of perfection, reveals that each odyssey that bewitched us resulted in a stark reminder of the very weaknesses and fears that we were attempting to mollify. The voice of the critic that berated us for the lack of craftsmanship was our own. And at the end of the journey, our own critical voice was joined by the gnashing of teeth of those who could not reap the fruit of our labor due to its lack of timely delivery. There is another road on which to travel. It is the pursuit of embracing imperfection. The cost of traveling this route is your contribution to its eternal construction. Each segment is designed uniquely. At times it has the appearance of a patchwork quilt, while at other times it is well organized and highly measured. In all cases, its construction has continually advanced and been utilized as each segment was delivered by its architect. Those who choose this spindle of the crossroads crack open the shells of their fears to reveal the vapor that is within. They construct their houses upon these shells. Through their hunger for mastery they wring every drop of nectar from failure and discard its husks to the ditches of this road. Through their efforts the thoroughfare begins to develop a personality of its own, a beautifully human one, rich with the strengths and weaknesses of all of its contributors. Like many of us, I have not been served well by the pursuit of perfection. In fact, I would say that it has been more damaging than helpful. While the perfectionist in me occasionally makes its presence known, I consider myself a "recovering perfectionist". It is evident to me that there is immense beauty found in imperfection. I choose to embrace it. It is grounding. It is constructive. It is honest.

    Read the article

  • So, I thought I wanted to learn frontend/web development and break out of my comfort zone...

    - by ripper234
    I've been a backend developer for a long time, and I really swim in that field. C++/C#/Java, databases, NoSQL, caching - I feel very much at ease around these platforms and concepts. In the past few years I started to taste end-to-end web programming, and recently I decided to take a job offer in a front-end team developing a large, complex product. I wanted to break out of my comfort zone and become more of an "all-around developer". Problem is, I'm getting more and more convinced I don't like it. Things I like about backend programming that are missing in frontend work:
    - More interesting problems - when I compare designing a server that handles massive data to adding another form to a page or changing the validation logic, I find the former a lot more interesting.
    - Refactoring, refactoring, refactoring - I am addicted to Visual Studio with ReSharper, or IntelliJ. I feel very comfortable writing code as it goes without investing too much thought, because I know that with a few clicks I can refactor it into beautiful code. To my knowledge, this doesn't exist at all in JavaScript.
    - IntelliSense and navigation - I hate looking at a bunch of JS code without instantly being able to know what it does. In VS/IntelliJ I can summon the documentation, navigate to the code, climb up inheritance hierarchies ... life is sweet.
    - Auto-completion - just hit Ctrl-Space on an object to see what you can do with it.
    - Easier to test - with almost any backend feature, I can use TDD to capture the requirements, see a bunch of failing tests, then implement, knowing that if the tests pass I did my job well. With frontend, while tests can help a bit, I find that most of the testing is still manual - fire up that browser and verify the site didn't break. I miss that feeling of "a green CI means everything is well with the world."
    Now, I've only seriously practiced frontend development for about two months, so this might seem premature ... but I'm getting a nagging feeling that I should abandon this quest and return to my comfort zone, because, well, it's so comfy and fun. Another point worth mentioning is that while I am learning some frontend tools, a lot of what I'm learning is our company's specific infrastructure, which I'm not sure will be very useful later on in my career. Any suggestions or tips? Do you think I should give frontend programming a proper chance of at least six to twelve months before calling it quits? Could all my pains be growing pains that will magically disappear as I get more experienced? Or is gaining this perspective valuable enough, even if I plan to do more backend work later on, that it's worth grinding my teeth and continuing with my learning?

    Read the article

  • Partners - Steer Clear of the Unknown with Oracle Enterprise Manager12c and Plug-in Extensibility

    - by Get_Specialized!
    Imagine that you just purchased a new car, and as you entered the vehicle to drive it home you found there was no steering wheel. Upon asking the dealer, you were told that it was an option, and that you had a choice, now or later, of a variety of aftermarket steering wheels that fit a wide variety of automobiles. If you expected the car to already have a steering wheel designed to manage your transportation solution, you might wonder why someone would offer an application solution whose management is not offered as an option or does not come as part of the solution... Using management designed to support the underlying technology, and that can provide management and support for your own Oracle technology based solution, can benefit your business in a variety of ways: increased customer satisfaction, reduction of support calls, and margin and revenue growth. Sometimes when something is not included or recommended, customers take their own path, which may not be optimal when using your solution and can later impact the customer's satisfaction or, worse, have a negative impact on their business. As an Oracle Partner, you can reduce your research, certification, and time to market by selecting and offering management designed, developed, and supported for Oracle product technology by Oracle with Oracle Enterprise Manager 12c. For partners with solution-specific management needs, or those seeking to differentiate themselves in the market, Enterprise Manager 12c is extensible and provides partners the opportunity to create their own plug-ins, as well as a validation program for them. Today a number of examples by partners are available, with more on the way from such partners as NetApp for NetApp storage and Blue Medora for VMware vSphere. To review and consider further for applicability to your solution, visit the Oracle PartnerNetwork KnowledgeZone for Enterprise Manager under the Develop tab: http://www.oracle.com/partners/goto/enterprisemanager

    Read the article

  • Movement prediction for non-shooters

    - by ShadowChaser
    I'm working on an isometric (2D) game with moderate-scale multiplayer - 20-30 players. I've had some difficulty getting a good movement prediction implementation in place. Right now, clients are authoritative for their own position. The server performs validation and broad-scale cheat detection, and I fully realize that the system will never be fully robust against cheating. However, the performance and implementation tradeoffs work well for me right now. Given that I'm dealing with sprite graphics, the game has 8 defined directions rather than free movement. Whenever the player changes their direction or speed (walk, run, stop), a "true" 3D velocity is set on the entity and a packet is sent to the server with the new movement state. In addition, every 250ms additional packets are transmitted with the player's current position for state updates on the server as well as for client prediction. After the server validates the packet, it gets automatically distributed to all of the other "nearby" players. Client-side, all entities with non-zero velocity (i.e. moving entities) are tracked and updated by a rudimentary "physics" system - basically nothing more than changing the position by the velocity according to the elapsed time slice (40ms or so). What I'm struggling with is how to implement clean movement prediction, and I have the nagging suspicion that I've made a design mistake somewhere. I've been over the Unreal, Half-Life, and all the other movement prediction/lag compensation articles I could find, but they all seem geared toward shooters: "Don't send each control change, send updates every 120ms, server is authoritative, client predicts, etc." Unfortunately, that style of design won't work well for me - there's no 3D environment, so each individual state change is important.

    1) Most of the samples I saw tightly couple movement prediction right into the entities themselves, for example by storing the previous state along with the current state. I'd like to avoid that and keep entities with their "current state" only. Is there a better way to handle this?

    2) What should happen when the player stops? I can't interpolate to the correct position, since they might need to walk backwards or in another strange direction if their position is too far ahead.

    3) What should happen when entities collide? If the current player collides with something, the answer is simple - just stop the player from moving. But what happens if two entities take up the same space on the server? What if the local prediction causes a remote entity to collide with the player or another entity - do I stop them as well? If the prediction had the misfortune of sticking them in front of a wall that the player has gone around, the prediction will never be able to compensate, and once the error gets too high the entity will snap to the new position.
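    For reference, here is a minimal, hedged sketch of the extrapolate-then-correct handling of a remote entity that the question describes, keeping only the current state: the position is advanced by the velocity each tick, and when a validated server update arrives the entity either snaps (large error) or nudges part of the way toward the corrected position (small drift). The snap threshold and blend fraction are arbitrary tuning values, not taken from the question.

        // Hedged sketch: client-side extrapolation ("dead reckoning") for a remote
        // entity with a snap-or-nudge correction, keeping only the current state.
        public class PredictedEntity {
            private static final double SNAP_THRESHOLD = 0.25; // world units; arbitrary tuning value
            private static final double BLEND_FRACTION = 0.10; // share of the error corrected per update; arbitrary

            private double x, y;   // current predicted position
            private double vx, vy; // current velocity from the last movement-state packet

            // Called every client tick (~40 ms) to advance the position by the velocity.
            public void tick(double dtSeconds) {
                x += vx * dtSeconds;
                y += vy * dtSeconds;
            }

            // Called when a validated position/velocity update arrives (~every 250 ms).
            public void onServerUpdate(double sx, double sy, double svx, double svy) {
                vx = svx;
                vy = svy;

                double errX = sx - x;
                double errY = sy - y;
                double error = Math.hypot(errX, errY);

                if (error > SNAP_THRESHOLD) {
                    // Way off (collision, missed packet): snap to the authoritative position.
                    x = sx;
                    y = sy;
                } else {
                    // Small drift: correct part of the error now; later updates absorb the rest.
                    x += errX * BLEND_FRACTION;
                    y += errY * BLEND_FRACTION;
                }
            }
        }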

    Read the article

< Previous Page | 148 149 150 151 152 153 154 155 156 157 158 159  | Next Page >