Search Results

Search found 28685 results on 1148 pages for 'query performance'.


  • Web migration of a VB6 system with VWG

    - by Webgui
    Brinks Bolivia's eSAC (Customer Service) system registers every kind of customer contact and maintains an up-to-date status for each service or customer request, so the company has accurate information and can run the appropriate procedures for every request. The system was originally developed in VB6, and since web access was essential it was offered via Citrix. Because the application's performance was a critical issue, and the company also needed to offer the system without client-specific installations, it looked for a solution that would remove those drawbacks of using Citrix. The search for a solution that could deliver the eSAC system over the web without specific client installations, while providing sufficient performance even over limited bandwidth, led Brinks to migrate its VB6 Customer Service system to Visual WebGui. "Developing on Visual WebGui we were able to migrate the system to web environment and even add new features in less time which allows us to offer it over a standard web browser with better performance and no installations as was required with Citrix," concluded Alexander Cuellar. The full article and screenshots of the system are available here.

    Read the article

  • Release: Oracle Java Development Kit 8, Update 20

    - by Tori Wieldt
    Java Development Kit 8, Update 20 (JDK 8u20) is now available. This latest release of the Java Platform continues to improve upon the significant advances made in the JDK 8 release with new features, security and performance optimizations. These include new enterprise-focused administration features available in the Oracle Java SE Advanced products, offering greater control of Java version compatibility, security updates, and a very useful new feature: the MSI compatible installer.

    Download | Release Notes | Java SE 8 Documentation

    New tools, features and enhancements highlighted in JDK 8 Update 20:
    • Advanced Management Console: The Java Advanced Management Console 1.0 (AMC) is available for use with the Oracle Java SE Advanced products. AMC employs the Deployment Rule Set (DRS) security feature, along with other functionality, to give system administrators greater and easier control in managing Java version compatibility and security updates for desktops within their enterprise, and for ISVs with Java-based applications and solutions.
    • MSI Enterprise JRE Installer: Available for Windows 64- and 32-bit systems in the Oracle Java SE Advanced products, the MSI compatible installer enables system administrators to provide automated, consistent installation of the JRE across all desktops in the enterprise, free of user interaction requirements.
    • Performance: string de-duplication resulting in a reduced footprint, and improved support in G1 Garbage Collection for long-running apps.
    • A new 'force' feature in DRS (Deployment Rule Set) which allows system administrators to specify the JRE with which an applet or Java Web Start application will run. This is useful for legacy applications, so end users don't need to approve security exceptions to run them.
    • Java Mission Control 5.4 with new ease-of-use enhancements and launcher integration with Eclipse 4.4
    • JavaFX on ARM
    • Nashorn performance improvement by persisting bytecode after initial compilation

    There's much more information to be found in the JDK 8u20 Release Notes.

    Read the article

  • How do I improve terrain rendering batch counts using DirectX?

    - by gamer747
    We have determined that our terrain rendering system needs some work to minimize the number of batches being transferred to the GPU in order to improve performance. I'm looking for suggestions on how best to improve what we're trying to accomplish.

    We logically split our terrain mesh into smaller grid cells which are 32x32 world units. Each cell has metadata that dictates the four 256x256 textures that are used for splatting, along with the alpha blend data, shadow, and light mappings. Each cell contains 81 vertices in a 9x9 grid.

    Presently, we examine each cell and determine the four textures being used to splat it. We combine that geometry with any other cell that uses the same four textures, regardless of splat order. If the splat order for a cell differs, the blend map is adjusted so that the splat order matches the other like cells and blending still happens in the right order. But even with this batching approach, it isn't uncommon, when looking out across an area of open terrain, to see batch counts between 1200 and 1700, depending upon how frequently the textures or texture blends differ between cells. We are only doing frustum culling presently.

    So, using texture splatting, are there other alternatives that can reduce the batch count and keep rendering extremely performance-friendly even under DirectX 9c? We considered using texture atlases, since we're targeting DirectX 9c and older OpenGL platforms, but trying to repeat textures using atlases and shaders results in seam artifacts which we haven't been able to eliminate except by disabling mipmapping, and disabling mipmapping results in poor quality textures from a distance. How have others batched together terrain geometry such that one could splat terrain using various textures, minimizing batch count and texture state switches so that rendering performance isn't negatively impacted?
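    A minimal sketch of the batching-by-texture-set step described in the question, in C#; all type and member names here are hypothetical stand-ins rather than the poster's actual engine code:

    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical stand-in for the engine's real cell type.
    class TerrainCell
    {
        public int[] SplatTextureIds;   // the four 256x256 splat textures
        public object Geometry;         // 81 vertices in a 9x9 grid
    }

    static class TerrainBatcher
    {
        // Group visible cells whose four splat textures match, ignoring splat
        // order (the question reorders blend maps so order doesn't matter);
        // each resulting group can then be submitted as a single batch.
        public static List<List<TerrainCell>> GroupByTextureSet(IEnumerable<TerrainCell> visibleCells)
        {
            return visibleCells
                .GroupBy(cell => string.Join(",", cell.SplatTextureIds.OrderBy(id => id)))
                .Select(group => group.ToList())
                .ToList();
        }
    }

    Each group maps to one draw call, which is why the batch count tracks the number of distinct four-texture sets visible in the frustum.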

    Read the article

  • Oracle R Distribution 2-13.2 Update Available

    - by Sherry LaMonica
    Oracle has released an update to the Oracle R Distribution, an Oracle-supported distribution of open source R. Oracle R Distribution 2-13.2 now contains the ability to dynamically link the following libraries on both Windows and Linux:
    • The Intel Math Kernel Library (MKL) on Intel chips
    • The AMD Core Math Library (ACML) on AMD chips
    To take advantage of the performance enhancements provided by Intel MKL or AMD ACML in Oracle R Distribution, simply add the MKL or ACML shared library directory to the LD_LIBRARY_PATH system environment variable. This automatically enables MKL or ACML to make use of all available processors, vastly speeding up linear algebra computations and eliminating the need to recompile R. Even on a single core, the optimized algorithms in the Intel MKL libraries are faster than R's standard BLAS library. Open-source R is linked to Netlib's BLAS libraries, which are not multi-threaded and only use one core. While R's internal BLAS routines are efficient for most computations, it's possible to recompile R to link to a different, multi-threaded BLAS library to improve performance on eligible calculations. Compiling and linking R yourself can be involved, but for many, the significantly improved calculation speed justifies the effort. Oracle R Distribution notably simplifies the process of using external math libraries by enabling R to auto-load MKL or ACML. For R commands that don't link to BLAS code, taking advantage of database parallelism using embedded R execution in Oracle R Enterprise is the route to improved performance.
    For more information about rebuilding R with different BLAS libraries, see the linear algebra section in the R Installation and Administration manual. As always, the Oracle R Distribution is available as a free download to anyone. Questions and comments are welcome on the Oracle R Forum.

    Read the article

  • Platinum Services – The Highest Level of Service in the Industry

    - by cwarticki
    Oracle Platinum Services provides remote fault monitoring with faster response times and patch deployment services to qualified Oracle Premier Support customers – at no additional cost. We know that disruptions in IT systems availability can seriously impact business performance. That’s why we engineer our hardware and software to work together. Oracle engineered systems are pre-integrated to reduce the cost and complexity of IT infrastructures while increasing productivity and performance. And now, customers who choose the extreme performance of Oracle engineered systems have the power to access the added support they need – Oracle Platinum Services – to further optimize for high availability at no additional cost.

    In addition to receiving the complete support essentials with Oracle Premier Support, qualifying Oracle Platinum Services customers also receive:
    • 24/7 Oracle remote fault monitoring
    • Industry-leading response and restore times:
      o 5-Minute Fault Notification
      o 15-Minute Restoration or Escalation to Development
      o 30-Minute Joint Debugging with Development
    • Update and patch deployment

    Visit us online to learn more about how to get Oracle Platinum Services.

    Read the article

  • NINTENDO, EDCON and ALLEGIS GROUP @ Oracle Open World 2012 Conference Session (CON9418): The Business Case for Oracle Exalogic: A Customer Perspective

    - by Sanjeev Sharma
    Are you looking to deliver breakthrough performance for packaged and custom applications? For many front-office applications such as Oracle WebCenter Sites, Oracle Transportation Management, and Oracle's ATG and Siebel product families, improved performance leads directly to greater revenue or cost savings for the business - a compelling proposition. For back-office applications, improved performance has tangible benefits in terms of footprint reductions. For all applications, Oracle Exalogic and Oracle Exadata provide an engineered solution that delivers shorter time to value and lower operational costs.

    Edcon is a leading clothing, footwear and textiles (CFT) retailing group in southern Africa, trading through a range of retail formats. The company has grown from opening its first store in 1929 to ten retail brands trading in over 1000 stores in South Africa, Botswana, Namibia, Swaziland and Lesotho. Through recent acquisitions, Edcon's retail business has added top stationery and houseware brands as well as general merchandise to its CFT portfolio. Edcon was looking to consolidate its existing middleware components (WebLogic and Oracle SOA) and retail applications (Retek, Siebel and E-Business Suite) on a common platform, and turned to Oracle Exalogic. With Oracle Exalogic, Edcon is able to derive significant hardware CAPEX savings, improve the response times of core business applications and mitigate operating risk.

    Hear senior business leaders from Nintendo, Edcon and Allegis Group discuss the business value of leveraging Oracle Exalogic at the following conference session at Oracle Open World 2012:
    Session: CON9418 - The Business Case for Oracle Exalogic: A Customer Perspective
    Date: Monday, 1 Oct, 2012
    Time: 1:45 pm - 2:45 pm (PST)
    Venue: Moscone South (306)

    Read the article

  • Partner Webcast – Oracle Weblogic 12c for New Projects - 07 Nov 2013

    - by Thanos Terentes Printzios
    Fast-growing organizations need to stay agile in the face of changing customer, business or market requirements. Oracle WebLogic Server 12c is the industry's best application server platform, allowing you to quickly develop and deploy reliable, secure, scalable and manageable enterprise Java EE applications. WebLogic Server Java EE applications are based on standardized, modular components. WebLogic Server provides a complete set of services for those modules and handles many details of application behavior automatically, without requiring programming. New project applications are created by Java programmers, web designers, and application assemblers. Programmers and designers create modules that implement the business and presentation logic for the application. Application assemblers assemble the modules into applications that are ready to deploy on WebLogic Server. Build and run high-performance enterprise applications and services with Oracle WebLogic Server 12c, available in three editions to meet the needs of traditional and cloud IT environments. Join us in this webcast, as we show you how WebLogic Server 12c helps you build and deploy enterprise Java EE applications, with support for new features that lower the cost of operations, improve performance and enhance scalability.

    Agenda:
    • Oracle WebLogic Server Introduction
    • Application Development on WebLogic Using Java EE
    • Overview of the Application Deployment Process
    • Monitoring Application Performance
    • Q&A

    November 7th, 2013 - 9am UTC / 11am EET

    Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. Duration: 1 hour.

    REGISTER NOW

    For any questions please contact us at partner.imc-AT-beehiveonline.oracle-DOT-com

    Stay Connected: Oracle Newsletters

    Read the article

  • No MAU required on a T4

    - by jsavit
    Cryptic background
    One of the powerful features of the T-series servers is hardware crypto acceleration, which dramatically speeds up the compute-intensive algorithms needed to encrypt and decrypt data. Previously, administrators setting up logical domains on older T-series servers had to explicitly assign crypto resources (called "MAU" for historical reasons, from the T1 chip that had "modular arithmetic units") to domains that had a significant crypto workload (say, an SSL-based web server). This could be an administrative burden, as you had to choose which domains got the crypto units and issue the appropriate ldm set-mau N mydomain commands.

    The T4 changes things
    The T4 is fast. Really fast. Its clock rate and out-of-order (OOO) execution provide the single-thread performance that T-series machines previously did not have. If you have any preconceptions about T-series performance, or SPARC in general, based on the older servers (which, it must be said, were absolutely outstanding for multi-threaded applications), those assumptions are now obsolete. The T4 provides outstanding performance for all kinds of workloads, as illustrated at https://blogs.oracle.com/bestperf. While we all focused on this (did I mention the T4 is fast?), another feature of the T4 went largely unnoticed: the T4 servers have crypto acceleration "just built in", so administrators no longer have to assign crypto accelerator units to domains - it "just happens". This is way, way better, since you have crypto everywhere by default without having to manage it like a discrete and limited resource. It's a feature of the processor, like doing an integer add. With T4 there is no management necessary; you just have hardware crypto everywhere, all the time, seamlessly. This change hasn't been widely advertised, and some administrators have wondered why they were unable to assign a MAU to a domain as they did with T2 and T3 machines. The answer is that there is no longer any separate MAU, so you don't have to take any action at all - just leave the default of 0.

    Summary
    Besides being much faster than its predecessors, the T4 also integrates hardware crypto acceleration so it's seamlessly available to applications, whether domains are being used or not. Administrators no longer have to control how crypto units are allocated - it "just happens".

    Read the article

  • Oracle Weblogic 12c for New Projects–Webcast November 7th 2013

    - by JuergenKress
    Fast-growing organizations need to stay agile in the face of changing customer, business or market requirements. Oracle WebLogic Server 12c is the industry's best application server platform, allowing you to quickly develop and deploy reliable, secure, scalable and manageable enterprise Java EE applications. WebLogic Server Java EE applications are based on standardized, modular components. WebLogic Server provides a complete set of services for those modules and handles many details of application behavior automatically, without requiring programming. New project applications are created by Java programmers, web designers, and application assemblers. Programmers and designers create modules that implement the business and presentation logic for the application. Application assemblers assemble the modules into applications that are ready to deploy on WebLogic Server. Build and run high-performance enterprise applications and services with Oracle WebLogic Server 12c, available in three editions to meet the needs of traditional and cloud IT environments. Join us in this webcast, as we show you how WebLogic Server 12c helps you build and deploy enterprise Java EE applications, with support for new features that lower the cost of operations, improve performance and enhance scalability.

    Agenda:
    • Oracle WebLogic Server Introduction
    • Application Development on WebLogic Using Java EE
    • Overview of the Application Deployment Process
    • Monitoring Application Performance
    • Q&A

    November 7th, 2013 - 9am UTC / 11am EET

    REGISTER NOW

    WebLogic Partner Community: for regular information, become a member of the WebLogic Partner Community - please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Technorati Tags: education, WebLogic, WebLogic Community, Oracle, OPN, Jürgen Kress

    Read the article

  • Partner Spotlight: Compasso

    - by Joe Diemer
    EVERY MONDAY at 2 PM Eastern Time (USA), join Compasso, one of Oracle's Enterprise Manager partners that is Specialized in Application Quality Management, for a series of presentations focused on Oracle end-to-end performance management. The objective is to provide a comprehensive overview of Oracle's Enterprise Manager 12c software. Attendees will learn how Oracle can provide a complete end-to-end management solution, not only for Oracle Databases but also for other aspects of Oracle technology, such as middleware and Oracle applications. Key areas include:
    1. Next-Generation Management Framework, which covers:
       • Better performance and scalability
       • Modular, extensible architecture
       • Easier to manage and diagnose
       • Enhanced security
       • Web 2.0 UI
    2. Enhanced Application-to-Disk Management:
       • End-to-end application performance management
       • Fusion Applications management
       • Exadata and Exalogic management
    3. Complete Lifecycle Management for Enterprise Private Cloud:
       • Self-service provisioning
       • Policy-based resource and workload management
       • Chargeback
    There are two ways you can sign up:
    1. Call Nicole Wakefield of Compasso at +1 (971) 245-5042
    2. Email [email protected] with a subject line of "Monday Enterprise Manager Presentations"
    WebEx conference details will be provided with your confirmation.
    For more information about partner opportunities with Enterprise Manager, including becoming Specialized, click here. For more information about Compasso, click here.

    Read the article

  • What is the point of the PImpl pattern when we can use an interface for the same purpose in C++?

    - by ZijingWu
    I see a lot of source code using the PImpl idiom in C++. I assume its purpose is to hide private data/types/implementation, so it can break dependencies and thereby reduce compile times and header include issues. But an interface class in C++ also has this capability: it can likewise hide data/types and implementation. To let the caller see only the interface when creating an object, we can declare a factory method in the interface header. The comparison is:

    Cost: the interface way costs less, because you don't even need to repeat the public wrapper function implementation void Bar::doWork() { return m_impl->doWork(); } - you just need to define the signature in the interface.

    Familiarity: the interface technique is better understood by every C++ developer.

    Performance: the interface way performs no worse than the PImpl idiom; both add an extra memory access. I assume the performance is the same.

    The following pseudocode illustrates my question:

    // Forward declaration can help you avoid including the BarImpl header,
    // and whatever that header itself includes.
    class BarImpl;

    class Bar
    {
    public:
        // public functions
        void doWork();
    private:
        // You don't even need to recompile Bar.cpp after changing
        // the implementation in BarImpl.cpp
        BarImpl* m_impl;
    };

    The same purpose can be achieved using an interface:

    // Bar.h
    class IBar
    {
    public:
        virtual ~IBar() {}
        // public functions
        virtual void doWork() = 0;
    };

    // to expose only the interface instead of the class name to the caller
    IBar* createObject();

    So what's the point of PImpl?

    Read the article

  • What will ECMAScript 6 bring to the table for us?

    - by user697296
    Our company ported moderate chunks of business logic to JavaScript. We compile the code with a minifier, which further improves performance. Since the language is dynamically typed, it lends itself well to obfuscation, which occurs as a byproduct of minification. We went to great efforts to ensure it positively screams, performance-wise. We can now do what we did before, faster, better, with less code, on more platforms. In summary, we are very satisfied with the current state of the language. I personally love the language especially for its cross-platform nature. So naturally, I read up a lot about the state of JavaScript compilers, performance and compatibility across as many browsers and platforms as I have time to research. The one theme which has been growing louder and louder these days, is the news about ECMAScript 6. So far, what I have been able to gather is that ES6 promises a better development experience; firstly by enabling new ways to do things, secondly by reporting errors early. This sounds great for those who are still waiting for the language to meet their needs before jumping on board. But we have already jumped on board in a big way. Sure, I expect that we will have to do ongoing maintenance and feature revisions on our code through the years, and that we would obviously make use of best practices at the time. But I don't see us refactoring major portions of it to take advantage of language features that are mostly intended to boost developer productivity. I keep wondering, what impact will the language advances ultimately have on our existing, well-written, well-performing code base? Is there something I am missing? Is there something we ought to look out for? Does anyone have tips or guidance on how we should approach the ecmascript.next finalization? Should we care?

    Read the article

  • Release announcement: Oracle Solaris Studio 12.3

    - by kazun
    On December 16, 2011, Oracle released Oracle Solaris Studio 12.3. Oracle Solaris Studio 12.3 is a development suite for C, C++ and Fortran whose compilers and analysis tools are optimized for SPARC T4 and x86 systems, improving application performance by up to 300%. Studio 12.3 supports Oracle Solaris, Oracle Linux and Red Hat Enterprise Linux.

    Three key aspects of Oracle Solaris Studio 12.3:
    • Compiler performance: applications built with Studio 12.3 run up to 300% faster than GCC-built code on SPARC T4 (up to 150% faster on x86), and up to 40% faster on SPARC T4 (20% on x86) than with Sun Studio 12.
    • Analysis tools: the new Code Analyzer checks code quality, and the Performance Analyzer profiles application performance.
    • Platform integration: tight integration with the Oracle Solaris and Oracle Linux operating systems, remote development (over SSH) from Oracle Solaris, Linux, Windows and Mac OS clients, and Oracle Database development support including Pro*C.

    Components of Oracle Solaris Studio 12.3:
    • Compiler Suite: optimizing C/C++ and Fortran compilers.
    • Analysis Suite: three tools - the Performance Analyzer, the Code Analyzer and the Thread Analyzer. The Code Analyzer detects source-code issues, the Performance Analyzer profiles application behavior, and the Thread Analyzer detects threading problems in code using Solaris threads, POSIX threads and OpenMP 3.1.
    Figure: Code Analyzer
    • IDE: an integrated development environment based on NetBeans, with support for developing against Oracle DB and MySQL, including Pro*C and OCI.

    Oracle Solaris Studio 12.3 is available for download. Related links: Oracle Solaris Studio, Oracle Solaris

    Read the article

  • cx_Oracle.DatabaseError: ORA-01036: illegal variable name/number

    - by Joao Figueiredo
    I have a cron-scheduled query which is failing with:

      File "./run_ora_query.py", line 69, in db_lookup
        cursor.execute(query, dict(time_key=time_key))
    cx_Oracle.DatabaseError: ORA-01036: illegal variable name/number

    where

    >>> dict(time_key=time_key)
    {'time_key': '12/10/2012 19:12:00'}

    I'm using a .yaml file to update the last time_key after each query run. The relevant parameters are:

    {query: 'select session_mode, inst_id, user_name, schema_name, os_user, process_id, process_mb_use, process_name, to_char(datet,''dd-mm-yyyy hh24:mi'') as DATETIME from os_admin.mem_usage where data > TO_DATE(:time_key,''dd-mm-yyyy hh24:mi:ss'') order by datet, inst_id, os_user', time_key: '12/10/2012 19:12:00'}

    Where is the culprit for this error?

    Read the article

  • I get the exception org.hibernate.MappingException: No Dialect mapping for JDBC type: -9

    - by ramesh m
    I am using Hibernate and wrote a native SQL query; the query executes fine in the SQL Server command prompt:

    try {
        session = HibernateUtil.getInstance().getSession();
        transaction = session.beginTransaction();
        SQLQuery query = session.createSQLQuery("SELECT AP.PROJECT_NAME, AP.SKILLSET, PA.START_DATE, PA.END_DATE, RS.EMPLOYEE_ID, RS.EMPLOYEE_NAME, RS.REPORTING_PM FROM RESOURCE_MASTER RS, SHARED_PROPOSAL S, ACTUAL_PROPOSAL AP, PROJECT_APPROVED PA, PROJECT_ALLOCATION PL WHERE RS.EMPLOYEE_ID = PL.EMPLOYEE_ID AND PA.PROJECT_ID = PL.PROJECT_ID AND PA.SHARED_PROPOSAL_ID = S.SHARED_PROPOSAL_ID AND S.ACTUAL_PROPOSAL_ID = AP.ACTUAL_PROPOSAL_ID");
        List<Object[]> rows = query.list();
        for (int i = 0; i < rows.size(); i++) {
            Object[] row = rows.get(i);
            System.out.println(row);
        }
        logger.info("In find All searchDeveloper");
    } catch (Exception exception) {
        throw new PPAMException("Contact admin", "Problem retrieving resource master list", exception);
    }

    When I run this I get the exception: org.hibernate.MappingException: No Dialect mapping for JDBC type: -9. I mapped seven tables; if I remove the ACTUAL_PROPOSAL AP table, the query executes correctly. Please help me.

    Read the article

  • ASP.NET MVC Html.BeginRouteForm renders the form's action with a problem

    - by Sadegh
    Hi, I defined this route:

    context.MapRoute("SearchEngineWebSearch",
        "{culture}/{style}/search/web/{query}/{index}/{size}",
        new { controller = "search", action = "web", query = "", index = 0, size = 5 },
        new { index = new UInt32RouteConstraint(), size = new UInt32RouteConstraint() });

    and a form to post parameters to it:

    <% using (Html.BeginRouteForm("SearchEngineWebSearch", FormMethod.Post)) { %>
        <input name="query" type="text" value="<%: ViewData["Query"] %>" class="search-field" />
        <input type="submit" value="Search" class="search-button" />
    <% } %>

    but the form is rendered with a wrong action attribute. Why? Thanks in advance ;)
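    One hedged guess at the cause, since the question doesn't show the rendered markup: the route's {culture} and {style} segments have no default values, so the helper may be unable to build a proper action URL. A sketch of a possible fix using the BeginRouteForm overload that accepts route values (the "en-US" and "default" values below are placeholders, not values from the original post):

    <% using (Html.BeginRouteForm("SearchEngineWebSearch",
           new { culture = "en-US", style = "default" }, // placeholders - supply the real values
           FormMethod.Post)) { %>
        <input name="query" type="text" value="<%: ViewData["Query"] %>" class="search-field" />
        <input type="submit" value="Search" class="search-button" />
    <% } %>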

    Read the article

  • Parallelism in .NET – Part 7, Some Differences between PLINQ and LINQ to Objects

    - by Reed
    In my previous post on Declarative Data Parallelism, I mentioned that PLINQ extends LINQ to Objects to support parallel operations.  Although nearly all of the same operations are supported, there are some differences between PLINQ and LINQ to Objects.  By introducing Parallelism to our declarative model, we add some extra complexity.  This, in turn, adds some extra requirements that must be addressed. In order to illustrate the main differences, and why they exist, let’s begin by discussing some differences in how the two technologies operate, and look at the underlying types involved in LINQ to Objects and PLINQ. LINQ to Objects is mainly built upon a single class: Enumerable.  The Enumerable class is a static class that defines a large set of extension methods, nearly all of which work upon an IEnumerable<T>.  Many of these methods return a new IEnumerable<T>, allowing the methods to be chained together into a fluent style interface.  This is what allows us to write statements that chain together, and lead to the nice declarative programming model of LINQ: double min = collection .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24) .Min(item => item.PerformComputation()); Other LINQ variants work in a similar fashion.  For example, most data-oriented LINQ providers are built upon an implementation of IQueryable<T>, which allows the database provider to turn a LINQ statement into an underlying SQL query, to be performed directly on the remote database. PLINQ is similar, but instead of being built upon the Enumerable class, most of PLINQ is built upon a new static class: ParallelEnumerable.  When using PLINQ, you typically begin with any collection which implements IEnumerable<T>, and convert it to a new type using an extension method defined on ParallelEnumerable: AsParallel().  This method takes any IEnumerable<T>, and converts it into a ParallelQuery<T>, the core class for PLINQ.  There is a similar ParallelQuery class for working with non-generic IEnumerable implementations. This brings us to our first subtle, but important difference between PLINQ and LINQ – PLINQ always works upon specific types, which must be explicitly created. Typically, the type you’ll use with PLINQ is ParallelQuery<T>, but it can sometimes be a ParallelQuery or an OrderedParallelQuery<T>.  Instead of dealing with an interface, implemented by an unknown class, we’re dealing with a specific class type.  This works seamlessly from a usage standpoint – ParallelQuery<T> implements IEnumerable<T>, so you can always “switch back” to an IEnumerable<T>.  The difference only arises at the beginning of our parallelization.  When we’re using LINQ, and we want to process a normal collection via PLINQ, we need to explicitly convert the collection into a ParallelQuery<T> by calling AsParallel().
There is an important consideration here – AsParallel() does not need to be called on your specific collection, but rather any IEnumerable<T>.  This allows you to place it anywhere in the chain of methods involved in a LINQ statement, not just at the beginning.  This can be useful if you have an operation which will not parallelize well or is not thread safe.  For example, the following is perfectly valid, and similar to our previous examples: double min = collection .AsParallel() .Select(item => item.SomeOperation()) .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24) .Min(item => item.PerformComputation()); However, if SomeOperation() is not thread safe, we could just as easily do: double min = collection .Select(item => item.SomeOperation()) .AsParallel() .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24) .Min(item => item.PerformComputation()); In this case, we’re using standard LINQ to Objects for the Select(…) method, then converting the results of that map routine to a ParallelQuery<T>, and processing our filter (the Where method) and our aggregation (the Min method) in parallel. PLINQ also provides us with a way to convert a ParallelQuery<T> back into a standard IEnumerable<T>, forcing sequential processing via standard LINQ to Objects.  If SomeOperation() was thread-safe, but PerformComputation() was not thread-safe, we would need to handle this by using the AsEnumerable() method: double min = collection .AsParallel() .Select(item => item.SomeOperation()) .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24) .AsEnumerable() .Min(item => item.PerformComputation()); Here, we’re converting our collection into a ParallelQuery<T>, doing our map operation (the Select(…) method) and our filtering in parallel, then converting the collection back into a standard IEnumerable<T>, which causes our aggregation via Min() to be performed sequentially. This could also be written as two statements, as well, which would allow us to use the language integrated syntax for the first portion: var tempCollection = from item in collection.AsParallel() let e = item.SomeOperation() where (e.SomeProperty > 6 && e.SomeProperty < 24) select e; double min = tempCollection.AsEnumerable().Min(item => item.PerformComputation()); This allows us to use the standard LINQ style language integrated query syntax, but control whether it’s performed in parallel or serial by adding AsParallel() and AsEnumerable() appropriately. The second important difference between PLINQ and LINQ deals with order preservation.  PLINQ, by default, does not preserve the order of of source collection. This is by design.  In order to process a collection in parallel, the system needs to naturally deal with multiple elements at the same time.  Maintaining the original ordering of the sequence adds overhead, which is, in many cases, unnecessary.  Therefore, by default, the system is allowed to completely change the order of your sequence during processing.  If you are doing a standard query operation, this is usually not an issue.  However, there are times when keeping a specific ordering in place is important.  If this is required, you can explicitly request the ordering be preserved throughout all operations done on a ParallelQuery<T> by using the AsOrdered() extension method.  This will cause our sequence ordering to be preserved. For example, suppose we wanted to take a collection, perform an expensive operation which converts it to a new type, and display the first 100 elements.  
In LINQ to Objects, our code might look something like: // Using IEnumerable<SourceClass> collection IEnumerable<ResultClass> results = collection .Select(e => e.CreateResult()) .Take(100); If we just converted this to a parallel query naively, like so: IEnumerable<ResultClass> results = collection .AsParallel() .Select(e => e.CreateResult()) .Take(100); We could very easily get a very different, and non-reproducible, set of results, since the ordering of elements in the input collection is not preserved.  To get the same results as our original query, we need to use: IEnumerable<ResultClass> results = collection .AsParallel() .AsOrdered() .Select(e => e.CreateResult()) .Take(100); This requests that PLINQ process our sequence in a way that verifies that our resulting collection is ordered as if it were processed serially.  This will cause our query to run slower, since there is overhead involved in maintaining the ordering.  However, in this case, it is required, since the ordering is required for correctness. PLINQ is incredibly useful.  It allows us to easily take nearly any LINQ to Objects query and run it in parallel, using the same methods and syntax we’ve used previously.  There are some important differences in operation that must be considered, however – it is not a free pass to parallelize everything.  When using PLINQ in order to parallelize your routines declaratively, the same guideline I mentioned before still applies: Parallelization is something that should be handled with care and forethought, added by design, and not just introduced casually.

    Read the article

  • Apache Tomcat sermyadmin deployment error

    - by lepricon123
    I am getting the following error when I try to start the application:

    Mar 23, 2010 7:51:09 PM org.apache.catalina.core.ApplicationContext log
    INFO: Set web app root system property: 'sermyadmin-production-2.0.1' = [/usr/local/tomcat6/webapps/sermyadmin/]
    Mar 23, 2010 7:51:09 PM org.apache.catalina.core.ApplicationContext log
    INFO: Initializing log4j from [/usr/local/tomcat6/webapps/sermyadmin/WEB-INF/classes/log4j.properties]
    Mar 23, 2010 7:51:09 PM org.apache.catalina.core.ApplicationContext log
    INFO: Initializing Spring root WebApplicationContext
    Mar 23, 2010 7:51:23 PM org.apache.catalina.core.StandardContext listenerStart
    SEVERE: Exception sending context initialized event to listener instance of class org.codehaus.groovy.grails.web.context.GrailsContextLoaderListener
    org.springframework.beans.factory.access.BootstrapException: Error executing bootstraps; nested exception is org.codehaus.groovy.runtime.InvokerInvocationException: org.springframework.dao.InvalidDataAccessResourceUsageException: could not execute query; nested exception is org.hibernate.exception.SQLGrammarException: could not execute query
        at org.codehaus.groovy.grails.web.context.GrailsContextLoader.createWebApplicationContext(GrailsContextLoader.java:66)
        at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:199)
        at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:45)
        at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:3843)
        at org.apache.catalina.core.StandardContext.start(StandardContext.java:4350)
        at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:791)
        at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:771)
        at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:525)
        at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:829)
        at org.apache.catalina.startup.HostConfig.deployWARs(HostConfig.java:718)
        at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:490)
        at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1147)
        at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:311)
        at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:117)
        at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1053)
        at org.apache.catalina.core.StandardHost.start(StandardHost.java:719)
        at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1045)
        at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:443)
        at org.apache.catalina.core.StandardService.start(StandardService.java:516)
        at org.apache.catalina.core.StandardServer.start(StandardServer.java:710)
        at org.apache.catalina.startup.Catalina.start(Catalina.java:578)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:288)
        at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:413)
    Caused by: org.codehaus.groovy.runtime.InvokerInvocationException: org.springframework.dao.InvalidDataAccessResourceUsageException: could not execute query; nested exception is org.hibernate.exception.SQLGrammarException: could not execute query
        at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:3843)
        at org.apache.catalina.core.StandardContext.start(StandardContext.java:4350)
        at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:791)
        at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:771)
        at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:525)
        at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:829)
        at org.apache.catalina.startup.HostConfig.deployWARs(HostConfig.java:718)
        at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:490)
        at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1147)
        at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:311)
        at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:117)
        at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1053)
        at org.apache.catalina.core.StandardHost.start(StandardHost.java:719)
        at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1045)
        at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:443)
        at org.apache.catalina.core.StandardService.start(StandardService.java:516)
        at org.apache.catalina.core.StandardServer.start(StandardServer.java:710)
        at org.apache.catalina.startup.Catalina.start(Catalina.java:578)
        ... 2 more
    Caused by: org.springframework.dao.InvalidDataAccessResourceUsageException: could not execute query; nested exception is org.hibernate.exception.SQLGrammarException: could not execute query
        at BootStrap$_closure1.doCall(BootStrap.groovy:7)
        ... 20 more
    Caused by: org.hibernate.exception.SQLGrammarException: could not execute query
        ... 21 more
    Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Table 'opensips.jsec_role' doesn't exist
        at com.mysql.jdbc.Util.handleNewInstance(Util.java:406)

    Read the article

  • IQueryable<T> Extension Method not working

    - by Micah
    How can I make an extension method that will work like this:

    public static class Extensions<T>
    {
        public static IQueryable<T> Sort(this IQueryable<T> query, string sortField, SortDirection direction)
        {
            //System.Type dataSourceType = query.GetType();
            //System.Type dataItemType = typeof(object);
            //if (dataSourceType.HasElementType)
            //{
            //    dataItemType = dataSourceType.GetElementType();
            //}
            //else if (dataSourceType.IsGenericType)
            //{
            //    dataItemType = dataSourceType.GetGenericArguments()[0];
            //}
            //var fieldType = dataItemType.GetProperty(sortField);
            if (direction == SortDirection.Ascending)
                return query.OrderBy(s => s.GetType().GetProperty(sortField));
            return query.OrderByDescending(s => s.GetType().GetProperty(sortField));
        }
    }

    Currently that fails to compile with "Extension methods must be defined in a non-generic static class". How do I do this?
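    For illustration, one way to get this working (a sketch, not from the original post) is to make the method itself generic inside a non-generic static class, which is what the compiler error asks for, and to build the key selector as an expression tree, since ordering by s.GetType().GetProperty(sortField) would compare PropertyInfo objects instead of the property's values:

    using System;
    using System.Linq;
    using System.Linq.Expressions;
    using System.Web.UI.WebControls; // assumed source of the SortDirection enum used above

    public static class QueryableSortExtensions
    {
        public static IQueryable<T> Sort<T>(this IQueryable<T> query, string sortField, SortDirection direction)
        {
            // Build the lambda s => s.sortField as an expression tree so that
            // IQueryable providers (e.g. LINQ to SQL) can translate it.
            ParameterExpression param = Expression.Parameter(typeof(T), "s");
            MemberExpression property = Expression.Property(param, sortField);
            LambdaExpression keySelector = Expression.Lambda(property, param);

            string methodName = direction == SortDirection.Ascending ? "OrderBy" : "OrderByDescending";

            // Call Queryable.OrderBy / Queryable.OrderByDescending with the built key selector.
            MethodCallExpression call = Expression.Call(
                typeof(Queryable), methodName,
                new[] { typeof(T), property.Type },
                query.Expression, Expression.Quote(keySelector));

            return query.Provider.CreateQuery<T>(call);
        }
    }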

    Read the article

  • SL3/SL4 - Ado.Net Data Services Error during new DataServiceCollection<T>(queryResponse)

    - by Soulhuntre
    Hey all, I have two functions in a SL project (VS2010) that do almost exactly the same thing, yet one throws an error and the other does not. It seems to be related to the projections, but I am unsure about the best way to resolve it. The function that works is:

    public void LoadAllChunksExpandAll(DataHelperReturnHandler handler, string orderby)
    {
        DataServiceCollection<CmsChunk> data = null;
        DataServiceQuery<CmsChunk> theQuery = _dataservice
            .CmsChunks
            .Expand("CmsItemState")
            .AddQueryOption("$orderby", orderby);
        theQuery.BeginExecute(
            delegate(IAsyncResult asyncResult)
            {
                _callback_dispatcher.BeginInvoke(
                    () =>
                    {
                        try
                        {
                            DataServiceQuery<CmsChunk> query = asyncResult.AsyncState as DataServiceQuery<CmsChunk>;
                            if (query != null)
                            {
                                //create a tracked DataServiceCollection from the result of the asynchronous query.
                                QueryOperationResponse<CmsChunk> queryResponse = query.EndExecute(asyncResult) as QueryOperationResponse<CmsChunk>;
                                data = new DataServiceCollection<CmsChunk>(queryResponse);
                                handler(data);
                            }
                        }
                        catch
                        {
                            handler(data);
                        }
                    }
                );
            },
            theQuery
        );
    }

    This compiles and runs as expected. A very, very similar function (shown below) fails:

    public void LoadAllPagesExpandAll(DataHelperReturnHandler handler, string orderby)
    {
        DataServiceCollection<CmsPage> data = null;
        DataServiceQuery<CmsPage> theQuery = _dataservice
            .CmsPages
            .Expand("CmsChildPages")
            .Expand("CmsParentPage")
            .Expand("CmsItemState")
            .AddQueryOption("$orderby", orderby);
        theQuery.BeginExecute(
            delegate(IAsyncResult asyncResult)
            {
                _callback_dispatcher.BeginInvoke(
                    () =>
                    {
                        try
                        {
                            DataServiceQuery<CmsPage> query = asyncResult.AsyncState as DataServiceQuery<CmsPage>;
                            if (query != null)
                            {
                                //create a tracked DataServiceCollection from the result of the asynchronous query.
                                QueryOperationResponse<CmsPage> queryResponse = query.EndExecute(asyncResult) as QueryOperationResponse<CmsPage>;
                                data = new DataServiceCollection<CmsPage>(queryResponse);
                                handler(data);
                            }
                        }
                        catch
                        {
                            handler(data);
                        }
                    }
                );
            },
            theQuery
        );
    }

    Clearly the issue is the Expand projections that involve a self-referencing relationship (pages can contain other pages). This is under SL4 or SL3 using ADONETDataServices SL3 Update CTP3. I am open to any workaround or pointers to good information; a Google search for the error turns up two hits, neither of which I've been able to decipher into anything helpful. The short error is "An item could not be added to the collection. When items in a DataServiceCollection are tracked by the DataServiceContext, new items cannot be added before items have been loaded into the collection." The full error is:

    System.Reflection.TargetInvocationException was caught
    Message=Exception has been thrown by the target of an invocation.
    StackTrace:
        at System.RuntimeMethodHandle.InvokeMethodFast(IRuntimeMethodInfo method, Object target, Object[] arguments, SignatureStruct& sig, MethodAttributes methodAttributes, RuntimeType typeOwner)
        at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture, Boolean skipVisibilityChecks)
        at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
        at System.Reflection.MethodBase.Invoke(Object obj, Object[] parameters)
        at System.Data.Services.Client.ClientType.ClientProperty.SetValue(Object instance, Object value, String propertyName, Boolean allowAdd)
        at System.Data.Services.Client.AtomMaterializer.ApplyItemsToCollection(AtomEntry entry, ClientProperty property, IEnumerable items, Uri nextLink, ProjectionPlan continuationPlan)
        at System.Data.Services.Client.AtomMaterializer.ApplyFeedToCollection(AtomEntry entry, ClientProperty property, AtomFeed feed, Boolean includeLinks)
        at System.Data.Services.Client.AtomMaterializer.MaterializeResolvedEntry(AtomEntry entry, Boolean includeLinks)
        at System.Data.Services.Client.AtomMaterializer.Materialize(AtomEntry entry, Type expectedEntryType, Boolean includeLinks)
        at System.Data.Services.Client.AtomMaterializer.DirectMaterializePlan(AtomMaterializer materializer, AtomEntry entry, Type expectedEntryType)
        at System.Data.Services.Client.AtomMaterializerInvoker.DirectMaterializePlan(Object materializer, Object entry, Type expectedEntryType)
        at System.Data.Services.Client.ProjectionPlan.Run(AtomMaterializer materializer, AtomEntry entry, Type expectedType)
        at System.Data.Services.Client.AtomMaterializer.Read()
        at System.Data.Services.Client.MaterializeAtom.MoveNextInternal()
        at System.Data.Services.Client.MaterializeAtom.MoveNext()
        at System.Linq.Enumerable.d_b11.MoveNext()
        at System.Data.Services.Client.DataServiceCollection`1.InternalLoadCollection(IEnumerable`1 items)
        at System.Data.Services.Client.DataServiceCollection`1.StartTracking(DataServiceContext context, IEnumerable`1 items, String entitySet, Func`2 entityChanged, Func`2 collectionChanged)
        at System.Data.Services.Client.DataServiceCollection`1..ctor(DataServiceContext context, IEnumerable`1 items, TrackingMode trackingMode, String entitySetName, Func`2 entityChangedCallback, Func`2 collectionChangedCallback)
        at System.Data.Services.Client.DataServiceCollection`1..ctor(IEnumerable`1 items)
        at Phinli.Dashboard.Silverlight.Helpers.DataHelper.<>c__DisplayClass44.<>c__DisplayClass46.<LoadAllPagesExpandAll>b__43()
    InnerException: System.InvalidOperationException
    Message=An item could not be added to the collection. When items in a DataServiceCollection are tracked by the DataServiceContext, new items cannot be added before items have been loaded into the collection.
    StackTrace:
        at System.Data.Services.Client.DataServiceCollection`1.InsertItem(Int32 index, T item)
        at System.Collections.ObjectModel.Collection`1.Add(T item)
    InnerException:

    Thanks for any help!

    Read the article

  • Passing multiple simple POST Values to ASP.NET Web API

    - by Rick Strahl
    A few weeks back I posted a blog post about what does and doesn't work with ASP.NET Web API when it comes to POSTing data to a Web API controller. One of the features that doesn't work out of the box - somewhat unexpectedly - is the ability to map POST form variables to simple parameters of a Web API method. For example, imagine you have this form and you want to post this data to a Web API endpoint like this via AJAX:

    <form>
        Name: <input type="name" name="name" value="Rick" />
        Value: <input type="value" name="value" value="12" />
        Entered: <input type="entered" name="entered" value="12/01/2011" />
        <input type="button" id="btnSend" value="Send" />
    </form>
    <script type="text/javascript">
        $("#btnSend").click( function() {
            $.post("samples/PostMultipleSimpleValues?action=kazam",
                   $("form").serialize(),
                   function (result) {
                       alert(result);
                   });
        });
    </script>

    or you might do this more explicitly by creating a simple client map and specifying the POST values directly by hand:

    $.post("samples/PostMultipleSimpleValues?action=kazam",
           { name: "Rick", value: 1, entered: "12/01/2012" },
           function (result) {
               alert(result);
           });

    On the wire this generates a simple POST request with URL-encoded values in the content:

    POST /AspNetWebApi/samples/PostMultipleSimpleValues?action=kazam HTTP/1.1
    Host: localhost
    User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64; rv:15.0) Gecko/20100101 Firefox/15.0.1
    Accept: application/json
    Connection: keep-alive
    Content-Type: application/x-www-form-urlencoded; charset=UTF-8
    X-Requested-With: XMLHttpRequest
    Referer: http://localhost/AspNetWebApi/FormPostTest.html
    Content-Length: 41
    Pragma: no-cache
    Cache-Control: no-cache

    name=Rick&value=12&entered=12%2F10%2F2011

    Seems simple enough, right? We are basically posting 3 form variables and 1 query string value to the server. Unfortunately Web API can't handle this request out of the box. If I create a method like this:

    [HttpPost]
    public string PostMultipleSimpleValues(string name, int value, DateTime entered, string action = null)
    {
        return string.Format("Name: {0}, Value: {1}, Date: {2}, Action: {3}", name, value, entered, action);
    }

    you'll find that you get an HTTP 404 error and:

    { "Message": "No HTTP resource was found that matches the request URI…"}

    Yes, it's possible to pass multiple POST parameters of course, but Web API expects you to use model binding for this - mapping the POST parameters to a strongly typed .NET object, not to single parameters. Alternately you can also accept a FormDataCollection parameter on your API method to get a name/value collection of all POSTed values. If you're using JSON only, using the dynamic JObject/JValue objects might also work. Model binding is fine in many use cases, but it can quickly become overkill if you only need to pass a couple of simple parameters to many methods. Especially in applications with many, many AJAX callbacks, the 'parameter mapping type' per method signature can lead to serious class pollution in a project very quickly. Simple POST variables are also commonly used in AJAX applications to pass data to the server, even in many complex public APIs. So this is not an uncommon use case and - maybe more to the point - a behavior that I would have expected Web API to support natively. The question "Why aren't my POST parameters mapping to Web API method parameters?" is already a frequent one… So this is something that I think is fairly important, but unfortunately missing in the base Web API installation.
    Creating a Custom Parameter Binder
    Luckily Web API is greatly extensible, and there's a way to create a custom parameter binding to provide this functionality! Although this solution took me a long while to find, and then only with the help of some folks at Microsoft (thanks Hong Mei!!!), it's not difficult to hook up in your own projects. It requires one small class and a GlobalConfiguration hookup. Web API parameter bindings allow you to intercept processing of individual parameters - they deal with mapping parameters to the signature as well as converting the parameters to the actual values that are returned. Here's the implementation of the SimplePostVariableParameterBinding class:

    public class SimplePostVariableParameterBinding : HttpParameterBinding
    {
        private const string MultipleBodyParameters = "MultipleBodyParameters";

        public SimplePostVariableParameterBinding(HttpParameterDescriptor descriptor)
            : base(descriptor)
        { }

        /// <summary>
        /// Check for simple binding parameters in POST data. Bind POST
        /// data as well as query string data
        /// </summary>
        public override Task ExecuteBindingAsync(ModelMetadataProvider metadataProvider,
                                                 HttpActionContext actionContext,
                                                 CancellationToken cancellationToken)
        {
            // Body can only be read once, so read and cache it
            NameValueCollection col = TryReadBody(actionContext.Request);

            string stringValue = null;
            if (col != null)
                stringValue = col[Descriptor.ParameterName];

            // try reading query string if we have no POST/PUT match
            if (stringValue == null)
            {
                var query = actionContext.Request.GetQueryNameValuePairs();
                if (query != null)
                {
                    var matches = query.Where(kv => kv.Key.ToLower() == Descriptor.ParameterName.ToLower());
                    if (matches.Count() > 0)
                        stringValue = matches.First().Value;
                }
            }

            object value = StringToType(stringValue);

            // Set the binding result here
            SetValue(actionContext, value);

            // now, we can return a completed task with no result
            TaskCompletionSource<AsyncVoid> tcs = new TaskCompletionSource<AsyncVoid>();
            tcs.SetResult(default(AsyncVoid));
            return tcs.Task;
        }

        private object StringToType(string stringValue)
        {
            object value = null;
            if (stringValue == null)
                value = null;
            else if (Descriptor.ParameterType == typeof(string))
                value = stringValue;
            else if (Descriptor.ParameterType == typeof(int))
                value = int.Parse(stringValue, CultureInfo.CurrentCulture);
            else if (Descriptor.ParameterType == typeof(Int32))
                value = Int32.Parse(stringValue, CultureInfo.CurrentCulture);
            else if (Descriptor.ParameterType == typeof(Int64))
                value = Int64.Parse(stringValue, CultureInfo.CurrentCulture);
            else if (Descriptor.ParameterType == typeof(decimal))
                value = decimal.Parse(stringValue, CultureInfo.CurrentCulture);
            else if (Descriptor.ParameterType == typeof(double))
                value = double.Parse(stringValue, CultureInfo.CurrentCulture);
            else if (Descriptor.ParameterType == typeof(DateTime))
                value = DateTime.Parse(stringValue, CultureInfo.CurrentCulture);
            else if (Descriptor.ParameterType == typeof(bool))
            {
                value = false;
                if (stringValue == "true" || stringValue == "on" || stringValue == "1")
                    value = true;
            }
            else
                value = stringValue;

            return value;
        }

        /// <summary>
        /// Read and cache the request body
        /// </summary>
        /// <param name="request"></param>
        /// <returns></returns>
        private NameValueCollection TryReadBody(HttpRequestMessage request)
        {
            object result = null;

            // try to read out of cache first
            if (!request.Properties.TryGetValue(MultipleBodyParameters, out result))
            {
                // parsing the string like firstname=Hongmei&lastname=Ge
                result = request.Content.ReadAsFormDataAsync().Result;
                request.Properties.Add(MultipleBodyParameters, result);
            }

            return result as NameValueCollection;
        }

        private struct AsyncVoid
        { }
    }

    The ExecuteBindingAsync method is fired for each parameter that is mapped and sent for conversion. This custom binding is fired only if the incoming parameter is a simple type (that gets defined later when I hook up the binding), so this binding never fires on complex types or if the first type is not a simple type. For the first parameter of a request, the binding first reads the request body into a NameValueCollection and caches that in the request.Properties collection. The request body can only be read once, so the first parameter request reads it and then caches it. Subsequent parameters then use the cached POST value collection. Once the form collection is available, the value of the parameter is read and translated into the target type requested by the Descriptor. SetValue writes out the value to be mapped.

    Once you have the ParameterBinding in place, the binding has to be assigned. This is done along with all other Web API configuration tasks at application startup in global.asax's Application_Start:

    GlobalConfiguration.Configuration.ParameterBindingRules
        .Insert(0, (HttpParameterDescriptor descriptor) =>
        {
            var supportedMethods = descriptor.ActionDescriptor.SupportedHttpMethods;

            // Only apply this binder on POST and PUT operations
            if (supportedMethods.Contains(HttpMethod.Post) ||
                supportedMethods.Contains(HttpMethod.Put))
            {
                var supportedTypes = new Type[] { typeof(string), typeof(int),
                                                  typeof(decimal), typeof(double),
                                                  typeof(bool), typeof(DateTime) };

                if (supportedTypes.Where(typ => typ == descriptor.ParameterType).Count() > 0)
                    return new SimplePostVariableParameterBinding(descriptor);
            }

            // let the default bindings do their work
            return null;
        });

    The ParameterBindingRules.Insert method takes a delegate that checks which type of requests it should handle. The logic here checks whether the request is POST or PUT and whether the parameter type is a simple type that is supported. Web API calls this delegate once for each method signature it tries to map, and the delegate returns null to indicate it's not handling this parameter, or it returns a new parameter binding instance - in this case the SimplePostVariableParameterBinding. Once the parameter binding and this hook-up code are in place, you can now pass simple POST values to methods with simple parameters. The examples I showed above should now work in addition to the standard bindings.

    Summary
    Clearly this is not easy to discover. I spent quite a bit of time digging through the Web API source trying to figure this out on my own without much luck. It took Hong Mei at Microsoft to provide a base example as I asked around, so I can't take credit for this solution :-). But once you know where to look, Web API is brilliantly extensible, making it relatively easy to customize the parameter behavior. I'm very stoked that this got resolved - in the last two months I've had two customers with projects that decided not to use Web API in AJAX-heavy SPA applications because this POST variable mapping wasn't available. This might actually change their mind to still switch back and take advantage of the many great features in Web API. I too frequently use plain POST variables for communicating with server AJAX handlers, and while I could have worked around this (with the untyped JObject or the Form collection mostly), having proper POST-to-parameter mapping makes things much easier.
I said this in my last post on POST data and I'll say it again here: I think POST-to-method-parameter mapping should have shipped in the box with Web API, because without knowing about this limitation the expectation is that simple POST variables map to parameters just like query string values do. I hope Microsoft considers including this type of functionality natively in the next version of Web API, or at least as a built-in HttpParameterBinding that can simply be added - especially since this binding doesn't affect existing bindings.

Resources

SimplePostVariableParameterBinding Source on GitHub
Global.asax hookup source
Mapping URL Encoded Post Values in ASP.NET Web API

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in Web Api  AJAX

    Read the article

  • Problem with Informix JDBC, MONEY and decimal separator in string literals

    - by Michal Niklas
I have a problem with a JDBC application that uses the MONEY data type. When I insert into a MONEY column:

insert into _money_test (amt) values ('123.45')

I get the exception:

Character to numeric conversion error

The same SQL works from a native Windows application using the ODBC driver. I live in Poland and have the Polish locale, where a comma separates the decimal part of a number, so I tried:

insert into _money_test (amt) values ('123,45')

and it worked. I checked that in a PreparedStatement I must use the dot separator: 123.45. And of course I can use:

insert into _money_test (amt) values (123.45)

But some code is "general" - it imports data from a CSV file, and it was safe to put the number into a string literal. How can I force JDBC to use DBMONEY (or simply the dot) in literals?

My workstation is WinXP. I have the Informix ODBC and JDBC clients in version 3.50 TC5/JC5, and I have set DBMONEY to just the dot: DBMONEY=.

EDIT: Test code in Jython:

import sys
import traceback
from java.sql import DriverManager
from java.lang import Class

Class.forName("com.informix.jdbc.IfxDriver")

QUERY = "insert into _money_test (amt) values ('123.45')"

def test_money(driver, db_url, usr, passwd):
    try:
        print("\n\n%s\n--------------" % (driver))
        db = DriverManager.getConnection(db_url, usr, passwd)
        c = db.createStatement()
        c.execute("delete from _money_test")
        c.execute(QUERY)
        rs = c.executeQuery("select amt from _money_test")
        while (rs.next()):
            print('[%s]' % (rs.getString(1)))
        rs.close()
        c.close()
        db.close()
    except:
        print("there were errors!")
        s = traceback.format_exc()
        sys.stderr.write("%s\n" % (s))

print(QUERY)
test_money("com.informix.jdbc.IfxDriver",
           'jdbc:informix-sqli://169.0.1.225:9088/test:informixserver=ol_225;DB_LOCALE=pl_PL.CP1250;CLIENT_LOCALE=pl_PL.CP1250;charSet=CP1250',
           'informix', 'passwd')
test_money("sun.jdbc.odbc.JdbcOdbcDriver", 'jdbc:odbc:test', 'informix', 'passwd')

Results when I run the money literal with a comma and then with a dot:

C:\db_examples>jython ifx_jdbc_money.py
insert into _money_test (amt) values ('123,45')

com.informix.jdbc.IfxDriver
--------------
[123.45]

sun.jdbc.odbc.JdbcOdbcDriver
--------------
there were errors!
Traceback (most recent call last):
  File "ifx_jdbc_money.py", line 16, in test_money
    c.execute(QUERY)
SQLException: java.sql.SQLException: [Informix][Informix ODBC Driver][Informix]Character to numeric conversion error

C:\db_examples>jython ifx_jdbc_money.py
insert into _money_test (amt) values ('123.45')

com.informix.jdbc.IfxDriver
--------------
there were errors!
Traceback (most recent call last):
  File "ifx_jdbc_money.py", line 16, in test_money
    c.execute(QUERY)
SQLException: java.sql.SQLException: Character to numeric conversion error

sun.jdbc.odbc.JdbcOdbcDriver
--------------
[123.45]
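As a side note, here is a minimal Jython sketch of the PreparedStatement approach mentioned above - the connection URL and credentials are taken from the test script, and setBigDecimal is one typed setter that avoids literal parsing altogether:

from java.lang import Class
from java.sql import DriverManager
from java.math import BigDecimal

Class.forName("com.informix.jdbc.IfxDriver")
db = DriverManager.getConnection(
    'jdbc:informix-sqli://169.0.1.225:9088/test:informixserver=ol_225;DB_LOCALE=pl_PL.CP1250;CLIENT_LOCALE=pl_PL.CP1250;charSet=CP1250',
    'informix', 'passwd')
ps = db.prepareStatement("insert into _money_test (amt) values (?)")
ps.setBigDecimal(1, BigDecimal("123.45"))  # always the dot here, independent of locale
ps.executeUpdate()
ps.close()
db.close()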

    Read the article

  • mysql update where some field is max

    - by Syom
The table videos has the following fields:

id | average | name

How can I write a query to update the name of the video that has the max average (or the second-largest average)? I can do it with two queries - by selecting max(average) from the table and then updating the name where the average equals the max or the second value - but I want to do it in one query. I think the query must look something like this:

UPDATE videos SET name = 'something' WHERE average = MAX(average)

Any suggestions?
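A hedged one-query sketch (untested against this exact schema): MySQL won't let an UPDATE reference the target table in a plain subquery, so the subquery is wrapped in a derived table. ORDER BY with LIMIT picks the row with the highest average; LIMIT 1 OFFSET 1 would pick the second-largest instead.

UPDATE videos
SET name = 'something'
WHERE id = (
    SELECT id FROM (
        SELECT id
        FROM videos
        ORDER BY average DESC
        LIMIT 1  -- use LIMIT 1 OFFSET 1 for the second-largest average
    ) AS top_video
);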

    Read the article

  • Creating a JSONP Formatter for ASP.NET Web API

    - by Rick Strahl
Out of the box, ASP.NET Web API does not include a JSONP formatter, but it's actually very easy to create a custom formatter that implements this functionality. JSONP is one way to allow browser-based JavaScript client applications to bypass cross-site scripting limitations and serve data from a non-current Web server. AJAX in Web applications uses the XmlHttp object, which by default doesn't allow access to remote domains. There are a number of ways around this limitation, and <script> tag loading with JSONP is one of the easiest and semi-official ways to do it.

JSONP works by combining JSON data and wrapping it into a function call that is executed when the JSONP data is returned. If you use a tool like jQuery, it's extremely easy to access JSONP content.

Imagine that you have a URL like this:

http://RemoteDomain/aspnetWebApi/albums

which on an HTTP GET serves some data - in this case an array of record albums. This URL is always directly accessible from an AJAX request if the URL is on the same domain as the parent request. However, if that URL lives on a separate server it won't be easily accessible to an AJAX request. Now, if the server can serve up JSONP, this data can be accessed cross-domain from a browser client. Using jQuery it's really easy to retrieve the same data with JSONP:

function getAlbums() {
    $.getJSON("http://remotedomain/aspnetWebApi/albums?callback=?", null,
        function (albums) {
            alert(albums.length);
        });
}

The resulting callback works the same as if the call was made to a local server when the data is returned. jQuery deserializes the data and feeds it into the method. Here the array is received and I simply echo back the number of items returned. From here your app is ready to use the data as needed.

This all works fine - as long as the server can serve the data with JSONP.

What does JSONP look like?

JSONP is a pretty simple 'protocol'. All it does is wrap a JSON response with a JavaScript function call. The above result from the JSONP call looks like this:

jQuery17103401925975181569_1333408916499(
    [{"Id":"34043957","AlbumName":"Dirty Deeds Done Dirt Cheap",…},{…}]
)

The way JSONP works is that the client (jQuery in this case) sends off the request, receives the response, and evals it. The eval basically executes the function and deserializes the JSON inside of the function call. It's actually a little more complex for the framework that does this, but that's the gist of what happens: JSONP works by executing the code that gets returned from the JSONP call.

JSONP and ASP.NET Web API

As mentioned previously, JSONP support is not natively in the box with ASP.NET Web API. But it's pretty easy to create and plug in a custom formatter that provides this functionality. The following code is based on Christian Weyer's example but has been updated to the latest Web API CodePlex bits, which changes the implementation a bit due to the way dependent objects are exposed differently in the latest builds.
Here's the code:

using System;
using System.IO;
using System.Net;
using System.Net.Http;
using System.Net.Http.Formatting;
using System.Net.Http.Headers;
using System.Threading.Tasks;
using System.Web;

namespace Westwind.Web.WebApi
{
    /// <summary>
    /// Handles JsonP requests when requests are fired with
    /// text/javascript or application/json and contain
    /// a callback= (configurable) query string parameter.
    ///
    /// Based on Christian Weyer's implementation:
    /// https://github.com/thinktecture/Thinktecture.Web.Http/blob/master/Thinktecture.Web.Http/Formatters/JsonpFormatter.cs
    /// </summary>
    public class JsonpFormatter : JsonMediaTypeFormatter
    {
        public JsonpFormatter()
        {
            SupportedMediaTypes.Add(new MediaTypeHeaderValue("application/json"));
            SupportedMediaTypes.Add(new MediaTypeHeaderValue("text/javascript"));
            //MediaTypeMappings.Add(new UriPathExtensionMapping("jsonp", "application/json"));

            JsonpParameterName = "callback";
        }

        /// <summary>
        /// Name of the query string parameter to look for
        /// the jsonp function name
        /// </summary>
        public string JsonpParameterName { get; set; }

        /// <summary>
        /// Captured name of the Jsonp function that the JSON call
        /// is wrapped in. Set in GetPerRequestFormatterInstance.
        /// </summary>
        private string JsonpCallbackFunction;

        public override bool CanWriteType(Type type)
        {
            return true;
        }

        /// <summary>
        /// Override this method to capture the Request object,
        /// look for the query string parameter, and
        /// create a new instance of this formatter.
        ///
        /// This is the only place in a formatter where the
        /// Request object is available.
        /// </summary>
        public override MediaTypeFormatter GetPerRequestFormatterInstance(Type type,
            HttpRequestMessage request, MediaTypeHeaderValue mediaType)
        {
            var formatter = new JsonpFormatter()
            {
                JsonpCallbackFunction = GetJsonCallbackFunction(request)
            };
            return formatter;
        }

        /// <summary>
        /// Override to wrap the existing JSON result with the
        /// JSONP function call
        /// </summary>
        public override Task WriteToStreamAsync(Type type, object value, Stream stream,
            HttpContentHeaders contentHeaders, TransportContext transportContext)
        {
            if (!string.IsNullOrEmpty(JsonpCallbackFunction))
            {
                return Task.Factory.StartNew(() =>
                {
                    var writer = new StreamWriter(stream);
                    writer.Write(JsonpCallbackFunction + "(");
                    writer.Flush();

                    base.WriteToStreamAsync(type, value, stream, contentHeaders, transportContext).Wait();

                    writer.Write(")");
                    writer.Flush();
                });
            }
            else
            {
                return base.WriteToStreamAsync(type, value, stream, contentHeaders, transportContext);
            }
        }

        /// <summary>
        /// Retrieves the Jsonp callback function
        /// from the query string
        /// </summary>
        private string GetJsonCallbackFunction(HttpRequestMessage request)
        {
            if (request.Method != HttpMethod.Get)
                return null;

            var query = HttpUtility.ParseQueryString(request.RequestUri.Query);
            var queryVal = query[this.JsonpParameterName];

            if (string.IsNullOrEmpty(queryVal))
                return null;

            return queryVal;
        }
    }
}

Note again that this code will not work with the Beta bits of Web API - it works only with post-Beta bits from CodePlex, and hopefully it will continue to work until RTM :-). This code is a bit different from Christian's original code, as the API has changed.
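Before digging into those API differences, here is a hypothetical controller the formatter could serve - the Album type below is an assumption modeled on the sample response shown earlier, not part of the original post:

using System.Collections.Generic;
using System.Web.Http;

public class Album
{
    public string Id { get; set; }
    public string AlbumName { get; set; }
}

public class AlbumApiController : ApiController
{
    // With the JsonpFormatter registered first (see the hookup below), a request like
    //   GET /albums?callback=handleAlbums
    // returns handleAlbums([...]), while a plain GET /albums returns regular JSON.
    public IEnumerable<Album> GetAlbums()
    {
        // hypothetical in-memory data; a real implementation would hit a data store
        return new List<Album>
        {
            new Album { Id = "34043957", AlbumName = "Dirty Deeds Done Dirt Cheap" }
        };
    }
}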
The biggest change is that the Read/Write functions no longer receive a global context object that gives access to the Request and Response objects, as the older bits did. Instead you now have to override the GetPerRequestFormatterInstance() method, which receives the Request as a parameter. You can capture the Request there, or use it to pick up the values you need and store them on the formatter. Note that I also have to create a new instance of the formatter, since I'm storing request-specific state on the instance (whether the callback= query string is present), so I return a new instance of this formatter.

Other than that the code should be straightforward: it writes out the function pre- and post-amble and defers to the base stream to retrieve the JSON to wrap the function call around. The code uses the async APIs to write this data out (seeing these all over the place will take some getting used to, for me at least).

Hooking up the JsonpFormatter

Once you've created a formatter, it has to be added to the request processing sequence by adding it to the formatter collection. Web API is configured via the static GlobalConfiguration object:

protected void Application_Start(object sender, EventArgs e)
{
    // Verb Routing
    RouteTable.Routes.MapHttpRoute(
        name: "AlbumsVerbs",
        routeTemplate: "albums/{title}",
        defaults: new
        {
            title = RouteParameter.Optional,
            controller = "AlbumApi"
        }
    );

    GlobalConfiguration
        .Configuration
        .Formatters
        .Insert(0, new Westwind.Web.WebApi.JsonpFormatter());
}

That's all it takes. Note that I added the formatter at the top of the list of formatters rather than at the end - this ordering is required. The JSONP formatter needs to fire before any other JSON formatter, since it relies on the JSON formatter to encode the actual JSON data; if you reverse the order, the JSONP output never shows up. So, in general, when adding new formatters be aware of the order in which they are added.

Resources

JsonpFormatter Code on GitHub

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in Web Api

    Read the article

< Previous Page | 431 432 433 434 435 436 437 438 439 440 441 442  | Next Page >