Search Results

Search found 11146 results on 446 pages for 'dynamic queries'.

Page 189/446 | < Previous Page | 185 186 187 188 189 190 191 192 193 194 195 196  | Next Page >

  • SQL SERVER – Last Two Days to Get FREE Book – Joes 2 Pros Certification 70-433

    - by pinaldave
    Earlier this week we announced that we will be giving away a FREE SQL Wait Stats book to everybody who gets the SQL Server Joes 2 Pros Combo Kit. The response to the offer has been overwhelming, and we want to honestly say thank you to all of you for making it happen. Rick and I especially want to thank everyone who is reading our books. The offer is still on and there are two more days left to take advantage of it: everybody who buys our best-selling combo kit will also receive our popular SQL Wait Stats book. Please read all the details of the offer here. The books are great resources for anyone who wants to learn SQL Server from the fundamentals and eventually go on to the certification path of 70-433. Exam 70-433 covers the following important subjects, and the book covers their fundamentals. Whether or not you are taking the exam, this book helps every SQL developer learn the subject from the ground up.
    - Create and alter tables.
    - Create and alter views.
    - Create and alter indexes.
    - Create and modify constraints.
    - Implement data types.
    - Implement partitioning solutions.
    - Create and alter stored procedures.
    - Create and alter user-defined functions (UDFs).
    - Create and alter DML triggers.
    - Create and alter DDL triggers.
    - Create and deploy CLR-based objects.
    - Implement error handling.
    - Manage transactions.
    - Query data by using SELECT statements.
    - Modify data by using INSERT, UPDATE, and DELETE statements.
    - Return data by using the OUTPUT clause.
    - Modify data by using MERGE statements.
    - Implement aggregate queries.
    - Combine datasets (INTERSECT, EXCEPT).
    - Implement subqueries.
    - Implement CTE (common table expression) queries.
    - Apply ranking functions.
    - Control execution plans.
    - Manage international considerations.
    - Integrate Database Mail.
    - Implement full-text search.
    - Implement scripts by using Windows PowerShell and SQL Server Management Objects (SMOs).
    - Implement Service Broker solutions.
    - Track data changes (change data capture).
    - Retrieve relational data as XML.
    - Transform XML data into relational data.
    - Manage XML data.
    - Capture execution plans.
    - Collect output from the Database Engine Tuning Advisor.
    - Collect information from system metadata.
    Availability of Book: USA - Amazon | India - Flipkart | Indiaplaza Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Big Data – ClustrixDB – Extreme Scale SQL Database with Real-time Analytics, Releases Software Download – NewSQL

    - by Pinal Dave
    There are so many things to learn and so little time to learn them. Since time is short, we have to be selective about what we learn. I believe I know quite a lot about SQL, but I still do not know everything that surrounds SQL. I have started to learn about NewSQL recently. If you wonder what NewSQL is, I encourage all of you to read my blog post about NewSQL over here: Big Data – Buzz Words: What is NewSQL – Day 10 of 21. NewSQL databases are quickly becoming popular – providing the scale of NoSQL along with SQL features and transactions. As part of learning about NewSQL databases, I have recently started to learn about ClustrixDB. ClustrixDB has been the most mature NewSQL database, used by some of the largest internet sites in the world for over 3 years, with extensive SQL support. In addition to scale, it provides fast real-time analytics by bringing massively parallel processing (MPP), previously available only in warehousing databases, to the transactional database. The reason I am more intrigued about learning ClustrixDB is their recent announcement on Oct 31. ClustrixDB was previously only available as an appliance, but with the software release on Oct 31, everyone can use it. It is now available free forever for up to 12 cores with community support, and there is a 45-day trial for unlimited cluster sizes. With this forever-free offering, I am even more interested in ClustrixDB. I know that a few of the leading eCommerce sites in the world use it for their transactional database. Here are a few of the details I have quickly noted for ClustrixDB. ClustrixDB allows users to:
    - Scale by simply adding nodes to the cluster with a single command
    - Run billions of transactions a day
    - Run fast real-time analytics
    - Achieve high availability with recovery from node failure
    - Let the database manage itself
    - Easily migrate from MySQL, as it is nearly plug-and-play compatible and can use MySQL drivers, tools, and replication
    While I was going through the documentation I realized that ClustrixDB also has extensive support for SQL features, including complex queries involving joins on a dozen or more tables, aggregates, sorts, and sub-queries. It also supports stored procedures, triggers, foreign keys, partitioned and temporary tables, and fully online schema changes. It is indeed a very mature product and SQL solution. Since ClustrixDB sounds like a very promising solution, I decided to dig a bit deeper to understand who its current customers are, as the company has existed in the industry for quite a few years. Their client list is indeed very interesting, and here is my quick research about them.
    - Twoo.com – Europe's largest social discovery (dating) site runs 4.4 billion transactions a day, with table sizes over a terabyte, on a 168-core cluster.
    - EngageBDR – Top 3 in the online advertising category, uses ClustrixDB to serve 6.9 billion ads a day through a real-time bidding platform. Their reports went from 4 hours to 15 seconds.
    - NoMoreRack – One of the top 2 fastest-growing e-commerce companies in the US, used ClustrixDB for high availability and fast growth through the Amazon cloud.
    - MakeMyTrip – India's leading travel site runs on ClustrixDB with two clusters running as multi-master in Chennai and Bangalore.
    Many enterprises such as AOL, CSC, Rakuten, and Symantec use ClustrixDB when their applications need scale. I must admit that I am impressed with what I have learned so far, and now is the time to get some hands-on experience with their product. I want to learn this technology so that in the future, when the topic is NewSQL, I know what I am talking about.
Read more about why ClustrixDB might be the right database for you. Download ClustrixDB with me today and install it on your machine, so that when we discuss the technical aspects of it in the future, we are all on the same page. The software can be downloaded here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Big Data, MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Clustrix

    Read the article

  • How can I gather client's data on Google App Engine without using Datastore/Backend Instances too much?

    - by ruslan
    One of the projects I'm working on is an online survey engine. It's my first big commercial project on Google App Engine. I need your advice on how to collect stats and efficiently record them in the Datastore without bankrupting me. The initial requirements are: after a user finishes a survey, the client sends a list of pairs [ID (int) + PercentHit (double)]. This list shows how closely the answers of this user match the predefined answers of reference answerers (which are identified by IDs). I call them "target IDs". The creator of the survey wants to see the aggregated % for given IDs for the last hour, a particular timeframe, or from the beginning of the survey. Some surveys may have thousands of target/reference answerers. So I created this entity:

        public class HitsStatsDO implements Serializable {
            @Id transient private Long id;
            transient private Long version = (long) 0;
            transient private Long startDate;
            @Parent transient private Key parent; // fake parent which contains target id
            @Transient int targetId;
            private double avgPercent;
            private long hitCount;
        }

    But writing a HitsStatsDO for each target from each user would produce a lot of data. For instance, I had a survey with 3000 targets which was answered by ~4 million people within one week, with 300K people taking the survey on the first day. Even if we assume they were answering evenly over 24 hours, that gives us ~1040 writes/second. Obviously it hits the concurrent write limit of the Datastore. I decided I'll collect data for one hour and then save it; that's why there are avgPercent and hitCount in HitsStatsDO. GAE instances are stateless, so I had to use a dynamic backend instance. There I have something like this:

        // Contains stats for one hour
        private class Shard {
            ReadWriteLock lock = new ReentrantReadWriteLock();
            Map<Integer, HitsStatsDO> map = new HashMap<Integer, HitsStatsDO>(); // Key is target ID
            public void saveToDatastore();
            public void updateStats(Long startDate, Map<Integer, Double> hits);
        }

    and a map with the shard for the current hour and the previous hour (which doesn't stay there for long):

        private HashMap<Long, Shard> shards = new HashMap<Long, Shard>(); // Key is HitsStatsDO.startDate

    So once per hour I dump the Shard for the previous hour to the Datastore. Plus I have a class LifetimeStats which keeps a Map<Integer, HitsStatsDO> in memcache, where the map key is the target ID. Also, in my backend shutdown hook method I dump the stats for the unfinished hour to the Datastore. There is only one major issue here - I have only ONE backend instance :) It raises the following questions, on which I'd like to hear your opinion:
    - Can I do this without using a backend instance?
    - What if one instance is not enough? How can I split data between multiple dynamic backend instances? It's hard because I don't know how many I have, since Google creates new ones as load increases.
    - I know I can launch an exact number of resident backend instances. But how many? 2, 5, 10? What if I have no load at all for a week? Constantly running 10 backend instances is too expensive.
    - What do I do with data from clients while a backend instance is dead/restarting?

    Read the article

  • Getting the number of fragments which passed the depth test

    - by Etan
    In "modern" environments, the "NV Occlusion Query" extension provides a method to get the number of fragments which passed the depth test. However, on the iPad/iPhone using OpenGL ES, the extension is not available. What is the most performant approach to implement similar behaviour in the fragment shader? Some of my ideas:
    - Render the object completely in white, then count all the colors together using a two-pass shader where first a vertical line is rendered and, for each fragment, the shader computes the sum over the whole row. Then a single vertex is rendered whose fragment sums all the partial sums of the first pass. Doesn't seem to be very efficient.
    - Render the object completely in white over a black background. Downsample recursively, abusing the hardware's linear interpolation between textures, until reaching a reasonably small resolution. This leads to fragments which have a greyscale level depending on the number of white pixels in their corresponding region. Is this even accurate enough?
    - Use mipmaps and simply read the pixel on the 1x1 level. Again there is the question of accuracy, and whether it is even possible with non-power-of-two textures.
    The problem with these approaches is that the pipeline gets stalled, which results in major performance issues. Therefore, I'm looking for a more performant way to accomplish my goal. Using the EXT_OCCLUSION_QUERY_BOOLEAN extension: Apple introduced EXT_OCCLUSION_QUERY_BOOLEAN in iOS 5.0 for iPad 2. "4.1.6 Occlusion Queries Occlusion queries use query objects to track the number of fragments or samples that pass the depth test. An occlusion query can be started and finished by calling BeginQueryEXT and EndQueryEXT, respectively, with a target of ANY_SAMPLES_PASSED_EXT or ANY_SAMPLES_PASSED_CONSERVATIVE_EXT. When an occlusion query is started with the target ANY_SAMPLES_PASSED_EXT, the samples-boolean state maintained by the GL is set to FALSE. While that occlusion query is active, the samples-boolean state is set to TRUE if any fragment or sample passes the depth test. When the occlusion query finishes, the samples-boolean state of FALSE or TRUE is written to the corresponding query object as the query result value, and the query result for that object is marked as available. If the target of the query is ANY_SAMPLES_PASSED_CONSERVATIVE_EXT, an implementation may choose to use a less precise version of the test which can additionally set the samples-boolean state to TRUE in some other implementation dependent cases." The first sentence hints at a behavior which is exactly what I'm looking for: getting the number of pixels which passed the depth test in an asynchronous manner without much performance loss. However, the rest of the document describes only how to get boolean results. Is it possible to exploit this extension to get the pixel count? Does the hardware support it, so that there may be a hidden API to get access to the pixel count? Other extensions which could be exploitable would be debugging features like the number of times the fragment shader was invoked (PSInvocations in DirectX - not sure if something similar is available in OpenGL ES). However, this would also result in a pipeline stall.

    Read the article

  • Minimum team development sizes

    - by MarkPearl
    Disclaimer - these are observations that I have had. I am not sure if this follows the philosophy of scrum, agile or whatever, but most of these insights were gained while implementing a scrum scenario.
    Two is a partnership, three starts a team. For a while I thought that a team was anything more than one, and that scrum could be an effective methodology with even two people. I have recently adjusted my thinking to a scrum team being a minimum of three, so what happened to two and what do you call it? For me, I consider a group of two people working together a partnership - there is value in having a partnership, but some of the dynamics and value that you get from having a team is lost with a partnership.
    Avoidance of a one-on-one confrontation. The first dynamic I see missing in a partnership is the team motivation to do better and how this is delivered to individuals that are not performing. Take two highly motivated individuals and put them together and you will typically see them continue to perform. Now take a situation where you have two individuals, one performing and one not, and the behaviour is totally different compared to a team of three or more individuals. With two people, if one feels the other is not performing, it becomes a one-on-one confrontation. Most people avoid confrontations and so nothing changes. Compare this to a situation where you have three people in a team, 2 performing and 1 not: the dynamic is totally different. It is no longer a personal one-on-one confrontation but a team concern, and people seem more willing to encourage the individual not performing and express their dissatisfaction as a team if they do not improve.
    Avoiding the effects of Tuckman's Group Development Theory. If you are not familiar with Tuckman's group development theory, give it a read (http://en.wikipedia.org/wiki/Tuckman's_stages_of_group_development). In a nutshell, with Tuckman's theory teams go through the stages of Forming, Storming, Norming & Performing. You want your team to reach and remain in the Performing stage for as long as possible - this is where you get the most value. When you have a partnership of two and you change the individuals in the partnership, you basically do a hard reset on the partnership and go back to the beginning of Tuckman's model each time. This has a major effect on the performance of a team and what they can deliver. What I have seen is that you reduce the effects of Tuckman's theory the more individuals you have in the team (until you hit the maximum team size, at which point other problems kick in). While you will still experience Tuckman's theory with a team of three, the impact will be greatly reduced compared to two, where it is guaranteed every time a change occurs.
    It's not just in the numbers, it's in the people. One final comment - while the actual numbers of a team do play a role, the individuals in the team are even more important - ideally you want to keep individuals working together for an extended period. That doesn't mean that you never change the individuals in a team, or that once someone joins a team they are stuck there - there is value in an individual moving from team to team and getting cross pollination, but the period of time between an individual's moves should be in months or years, not days or weeks.
    Why? So why is it important to know this? Why is it important to know how a team works and what motivates them?
I have been asking myself this question for a while and where I am at right now is this… the aim is to achieve the stage where the sum of the total (team) is greater than the sum of the parts (team members). This is why we form teams and why understanding how they work is a challenge and also extremely stimulating.

    Read the article

  • Extreme Makeover, Phone Edition: Comcast's xfinity

    Mobile Makeover: For many companies the first foray into Windows Phone 7 (WP7) may be in porting their existing mobile apps. It is tempting to simply transfer existing functionality, avoiding the additional design costs. Readdressing business needs and taking advantage of the WP7 platform can reduce cost and is essential to a successful re-launch. To better understand the advantage of new development, let's examine a conceptual upgrade of Comcast's existing mobile app.
    Before: Comcast has a great mobile app that provides several key features: the ability to browse the lineup using a guide, a client for Comcast email accounts, an On Demand gallery, and much more. We will leverage these and build on them using some of the incredible WP7 features.
    After: With the proliferation of DVRs (Digital Video Recorders) and a variety of media devices (TV, PC, Mobile), content providers are challenged to find creative ways to build their brands. Every client touch point must provide both value-added services and opportunities for marketing and up-sale; WP7 makes it easy to focus on those opportunities. The new app is an excellent vehicle for presenting Comcast's newly rebranded TV, Voice, and Internet services. These services now fly under the banner of xfinity and have been expanded to provide the best experience for Comcast customers. The Windows Phone 7 app will increase the surface area of this service revolution.
    The home menu is simplified and highlights Comcast's Triple Play: Voice, TV, and Internet. The inbox has been replaced with a messages view, and message management is handled by a WP7 hub. The hub presents emails, tweets, and IMs from Comcast and from other viewers the user follows on Twitter. The popular view orders shows based on the user's viewing history and current cable package. The first show, Glee, is both popular and participating in a conceptual co-marketing effort, so it receives prime positioning. The second spot goes to a hit show on a premium channel, in this example HBO's The Pacific, encouraging viewers to upgrade for this premium content. The remaining spots are ordered based on viewing history and popularity. Tapping the play button moves the user to the theatre where they can watch previews or full episodes streaming from Fancast. Tapping an extra presents the user with show details as well as interactive content that may be included as part of co-marketing efforts.
    Co-Marketing with Dynamic Content: The success of Comcast's services is tied to the success of the networks and shows it purveys, making co-marketing efforts essential. In this concept FOX is co-marketing its popular show Glee. A customized panorama is updated with the latest gleeks' tweets, streaming HD episodes, and extras featuring photos and video of the cast. If WP7 apps can be dynamically extended with web-hosted .xap files, including sandboxed partner experiences would enable interactive features such as the Gleek Peek, in which a viewer can select a character from a panorama to view the actor's profile. This dynamic inline experience has a tailored appeal to aspiring creatives and is technically possible with Windows Phone 7.
    Summary: The conceptual Comcast mobile app for Windows Phone 7 highlights just a few of the incredible experiences and business opportunities that can be unlocked with this latest mobile solution. It is critical that organizations recognize and take full advantage of these new capabilities.
Simply porting existing mobile applications does not leverage these powerful tools; re-examining existing applications and upgrading them to Windows Phone 7 will prove essential to the continued growth and success of your brand.

    Read the article

  • Admin Panel like Custom Framework

    - by bhuvin
    I want to create a framework, like an admin panel, which can control almost all aspects of what is shown on the frontend. For a (most basic) example: suppose the links to be shown in a navigation area are passed from the server, along with their order, URLs, etc. The whole aim is to save time on the tedious tasks. You can just start creating menus and assigning pages to them: give a URL and the actual files to be rendered (in the case of static files), or, in the case of dynamic files, the appropriate file. And all of this is fully manageable on the server side using different portlet-like components. So the basic roadmap is to have areas like:
    - Header Area - which can contain logos, links, etc.
    - Navigation Area - which can contain links and submenus.
    - Content Area - this is where the tricky part is: it has zones like left, center & right, and it contains the order in which things have to be displayed. So, when someday we want to change the way the articles appear on the page, we can do so easily, without any deployments. These zones can have any number of internal elements, like a word cloud or an advertisement area.
    - Footer Area - again, similar to the Header Area.
    Currently there is a preexisting custom framework which uses XSLT files for pulling data out of the server side, and it has the above capabilities. For example: if there's a grid, it will have a <table> tag embedded in the XSLT file. Whatever the source of the data might be, we serialize it as XML, give it to the XSLT file, derive the HTML from that, and append it to the layer in a page. The problems with this approach are:
    - The XSLT conversion occurs on the server side, so the server is responsible for getting the data, running the XSLT transform, and appending the generated HTML to the layer div. In my view, this shouldn't be the server's concern in the first place, and for larger applications this might be slower.
    - Debugging isn't possible for the XSLT transformation, so whenever we face problems with data it's always a bit of trial and error.
    - Maintaining it is a tedious job, i.e. styling changes and other tweaks.
    - Adding dynamic values is hard: JavaScript can't easily be used in this, and we can't use jQuery or any other libraries, since everything happens on the server.
    For now, what I have thought about is using a Templating - JavaScript - JSON combination in place of XSLT; this will be offloaded to the client and the rendering will take place there. This could solve the above problems and could also add mobile support. The only problem I can think of is that it is a lot of work, and adding new portlets on the go needs to be looked into. What could be the alternatives to this? What kinds of problems are there with the JavaScript approach? What are the different ways to implement the same thing? Are there any existing frameworks for similar usage?

    Read the article

  • Using WKA in Large Coherence Clusters (Disabling Multicast)

    - by jpurdy
    Disabling hardware multicast (by configuring well-known addresses, aka WKA) will place significant stress on the network. For messages that must be sent to multiple servers, rather than having a server send a single packet to the switch and having the switch broadcast that packet to the rest of the cluster, the server must send a packet to each of the other servers. While hardware varies significantly, consider that a server with a single gigabit connection can send at most ~70,000 packets per second. To continue with some concrete numbers, in a cluster with 500 members, that means that each server can send at most 140 cluster-wide messages per second. And if there are 10 cluster members on each physical machine, that number shrinks to 14 cluster-wide messages per second (or, with only mild hyperbole, roughly zero). It is also important to keep in mind that network I/O is not only expensive in terms of the network itself, but also in the consumption of CPU required to send (or receive) a message (due to things like copying the packet bytes, processing an interrupt, etc). Fortunately, Coherence is designed to rely primarily on point-to-point messages, but there are some features that are inherently one-to-many:
    - Announcing the arrival or departure of a member
    - Updating partition assignment maps across the cluster
    - Creating or destroying a NamedCache
    - Invalidating a cache entry from a large number of client-side near caches
    - Distributing a filter-based request across the full set of cache servers (e.g. queries, aggregators and entry processors)
    - Invoking clear() on a NamedCache
    The first few of these are operations that are primarily routed through a single senior member, and also occur infrequently, so they usually are not a primary consideration. There are cases, however, where the load from introducing new members can be substantial (to the point of destabilizing the cluster). Consider the case where the cluster in the first paragraph grows from 500 members to 1000 members (holding the number of physical machines constant). During this period, there will be 500 new member introductions, each of which may consist of several cluster-wide operations (for the cluster membership itself as well as the partitioned cache services, replicated cache services, invocation services, management services, etc). Note that all of these introductions will route through that one senior member, which is sharing its network bandwidth with several other members (which will be communicating to a lesser degree with other members throughout this process). While each service may have a distinct senior member, there's a good chance during initial startup that a single member will be the senior for all services (if those services start on the senior before the second member joins the cluster). It's obvious that this could cause CPU and/or network starvation. In the current release of Coherence (3.7.1.3 as of this writing), the pure unicast code path also has less sophisticated flow control for cluster-wide messages (compared to the multicast-enabled code path), which may also result in significant heap consumption on the senior member's JVM (from the message backlog). This is almost never a problem in practice, but with sufficient CPU or network starvation, it could become critical. For the non-operational concerns (near caches, queries, etc), the application itself will determine how much load is placed on the cluster.
Applications intended for deployment in a pure unicast environment should be careful to avoid excessive dependence on these features. Even in an environment with multicast support, these operations may scale poorly since even with a constant request rate, the underlying workload will increase at roughly the same rate as the underlying resources are added. Unless there is an infrastructural requirement to the contrary, multicast should be enabled. If it can't be enabled, care should be taken to ensure the added overhead doesn't lead to performance or stability issues. This is particularly crucial in large clusters.

    Read the article

  • Rethinking Oracle Optimizer Statistics for P6 Part 2

    - by Brian Diehl
    In the previous post (Part 1), I tried to draw some key insights about the relationship between P6 and Oracle Optimizer Statistics. The first is that average cardinality has the greatest impact on query optimization and that the particular queries generated by P6 are more likely to use this average during calculations. The second is that these are statistics that are unlikely to change greatly over the life of the application. Ultimately, our goal is to get the best query optimization possible. Or is it?
    Stability. No application administrator wants to get the call at 9am that their application users cannot get their work done because everything is running slow. This is a possibility with a regularly scheduled nightly collection of statistics. It may not just be slow performance, but a complete loss of service because one or more queries are optimized poorly. Ideally, this should not be the case. The database optimizer should make better decisions with more up-to-date data. Better statistics may give an incremental performance benefit. However, this benefit must be balanced against the potential cost of system downtime. It is stability that we ultimately desire, not absolute optimal performance. We do want the benefit from more accurate statistics and better query plans, but not at the risk of an unusable system. As a result, I've developed the following methodology around managing database statistics for the P6 database.
    1. No Automatic Re-Gathering - A daily, weekly, or other interval of statistics gathering is unlikely to be beneficial. Quite the opposite: it is more likely to cause problems.
    2. Smart Re-Gathering - The time to collect statistics is when things have changed significantly. For a new installation of P6, this happens more often because the data is growing from a few rows to thousands and more. But for a mature system, the data is not changing significantly from week to week. There are times to collect statistics: new releases of the application; changes in the underlying hardware or software versions (ex. a new Oracle RDBMS version); when additional user groups are added, since the new groups may use the software in significantly different ways; and after significant changes in the data, which may be monthly, quarterly or yearly.
    3. Always Test - If you take away one thing from this post, it would be to always have a plan to test after changing statistics. In reality, statistics can be collected as often as you desire provided there are tests in place to verify that performance is the same or better. These might be automated tests or simply a manual script of application functions.
    4. Have a Way Out - Never change the statistics without a way to return to the previous set. Think of the statistics as one part of the overall application code that also includes the source code--both application and RDBMS. It would be foolish to change to new code without a way to get back to the previous version.
    In the final post, I will talk about the actual script I created for P6 PMDB and possible future directions for managing query performance.

    Read the article

  • How do I get public feed from facebook without user authentication on a native/Desktop app?

    - by KronoS
    I'm looking to get publicly available Facebook feeds (i.e. Google's Facebook page/posts). However, instead of forcing the user to sign in with their own Facebook account, I want to be able to access these posts directly. I've looked into using "App Access Tokens"; however, since my application is a native/desktop app (iOS, Android, WP8/Win 8), I'm not able to do this. Is there a way to get publicly accessible feeds from Facebook without user authentication? I'm using the Facebook C# SDK to access Facebook. Currently I'm doing the following:

        dynamic tokenInfo = fb.Get(
            String.Format(
                "/oauth/access_token?client_id={0}&client_secret={1}&grant_type=client_credentials",
                FbController.AppId, FbController.AppSecret));
        var appAccessToken = (string) tokenInfo.access_token;

        fb = new FacebookClient();
        dynamic response = fb.Get(
            String.Format(
                "/google/posts?access_token={0}",
                appAccessToken));

    The problem is that this only works if my application is set to "web" instead of "native/Desktop". When I run this code with the app classified as native/Desktop, I get the following error: (OAuthException - #15) (#15) Requires session when calling from a desktop app
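
    One workaround sometimes suggested - offered here only as an assumption to verify, since it is not confirmed by the question and may still be blocked for apps classified as native/desktop - is to skip the /oauth/access_token round trip and build the app access token directly in the "app_id|app_secret" form that the Graph API accepts. A minimal sketch with the Facebook C# SDK, reusing the question's FbController fields:

        // Sketch only: assumes the "appId|appSecret" app-token form is accepted for this app type.
        var fb = new FacebookClient
        {
            AccessToken = FbController.AppId + "|" + FbController.AppSecret
        };

        // Read the public posts of a page (here Google's page, as in the question).
        dynamic response = fb.Get("/google/posts");

    Keep in mind that embedding an app access token in a shipped desktop app exposes the app secret, so this is generally only safe when the token is obtained from your own server.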

    Read the article

  • WPF application with MS Access database as a data source

    - by Kay Zed
    I have a Microsoft Access 2010 database. Now, using Visual Studio 2010, I want to create a WPF application and add the database as a data source. The app will have a window with a frame that provides navigation through pages. No problem so far. But:
    - What is the right way to set up the database in this scenario? Tables only? Or must everything go via queries? (VS2010 talks about views, which I assume (?) are queries.)
    - Database data must be updatable and records can be added. Some relationships go through link tables (many-to-many) and there are nullable foreign key relationships. Must I take manual steps to make this work?
    - While adding the data source, VS2010 created an .xsd from my Access database. I think the .xsd might need further tweaking for the application to work the right way. If I change my Access database design, I'd have to regenerate the .xsd as well. Is this right, and is it the way it is usually done? OR, should I let the original Access database go and give the application the capability to create new empty databases?
    - How do you provide controls in a page to step through the records in a table? Is there a special database control?
    - What is the way (WPF class?) to load records into the data context that displays in a page? (At this level it probably does not matter what type of data source it is.)
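
    As a baseline for the data-loading question (not an answer to every point above), here is a minimal sketch assuming the ACE OLE DB provider is installed; the file path and table name are hypothetical placeholders. It pulls an Access table into a DataTable that a WPF page can bind to:

        using System.Data;
        using System.Data.OleDb;

        public static class AccessData
        {
            // Hypothetical connection string; adjust the path to your .accdb file.
            private const string ConnectionString =
                @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\Data\MyDatabase.accdb;";

            public static DataTable LoadTable(string tableName)
            {
                var table = new DataTable(tableName);
                using (var connection = new OleDbConnection(ConnectionString))
                using (var adapter = new OleDbDataAdapter("SELECT * FROM [" + tableName + "]", connection))
                {
                    adapter.Fill(table);   // Fill opens and closes the connection itself.
                }
                return table;              // e.g. myDataGrid.ItemsSource = table.DefaultView;
            }
        }

    Binding table.DefaultView gives record stepping through the default CollectionView; the typed-dataset/.xsd route the question describes is an equally valid design choice, this is just the lower-level alternative.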

    Read the article

  • Do SEO-friendly URLs really affect a page's ranking?

    - by Lee Harold
    SEO-friendly URLs are all the rage these days. But do they actually have a meaningful impact on a page's ranking in Google and other search engines? If so, why? If not, why not? (Note that I would absolutely agree that SEO-friendly URLs are nicer to use for human beings. My question is whether they actually make a difference to the ranking algorithms.) Update: As it turns out, the Google post that endorphine points to here has caused tremendous confusion in the SEO community. For a sampling of the discussion, see here, here, and here. Part of the problem is that the Google post is addressing the worst case where URL rewriting is done poorly and so you'd be better off sticking with a dynamic URL rather than a mangled static "SEO-friendly" URL. There's no question dynamic URLs can be crawled by Google and can achieve high rankings. Maybe it would be easier to reframe the question more concretely: given 2 otherwise equivalent pages, which will rank higher for the search "do seo friendly urls really affect page ranking"? A) http://stackoverflow.com/questions/505793/do-seo-friendly-urls-really-affect-a-pages-ranking or B) http://stackoverflow.com?question=505793 (a fake URL for comparison only)

    Read the article

  • pylucene: install error

    - by Pradeep
    I am trying to install PyLucene (pylucene-3.3-3-src.tar.gz) on Ubuntu Linux 11.10. I have Python 2.7.2. I was able to compile JCC (I think), because I didn't see any error when I installed it. When I try to install PyLucene I get the following error. Can someone help? Thanks.

        ICU not installed
        /usr/bin/python -m jcc --shared --jar lucene-java-3.3/lucene/build/lucene-core-3.3.jar --jar lucene-java-3.3/lucene/build/contrib/analyzers/common/lucene-analyzers-3.3.jar --jar lucene-java-3.3/lucene/build/contrib/memory/lucene-memory-3.3.jar --jar lucene-java-3.3/lucene/build/contrib/highlighter/lucene-highlighter-3.3.jar --jar build/jar/extensions.jar --jar lucene-java-3.3/lucene/build/contrib/queries/lucene-queries-3.3.jar --jar lucene-java-3.3/lucene/build/contrib/grouping/lucene-grouping-3.3.jar --package java.lang java.lang.System java.lang.Runtime --package java.util java.util.Arrays java.util.HashMap java.util.HashSet java.text.SimpleDateFormat java.text.DecimalFormat java.text.Collator --package java.util.regex --package java.io java.io.StringReader java.io.InputStreamReader java.io.FileInputStream --exclude org.apache.lucene.queryParser.Token --exclude org.apache.lucene.queryParser.TokenMgrError --exclude org.apache.lucene.queryParser.QueryParserTokenManager --exclude org.apache.lucene.queryParser.ParseException --exclude org.apache.lucene.search.regex.JakartaRegexpCapabilities --exclude org.apache.regexp.RegexpTunnel --exclude org.apache.lucene.analysis.cn.smart.AnalyzerProfile --python lucene --mapping org.apache.lucene.document.Document 'get:(Ljava/lang/String;)Ljava/lang/String;' --mapping java.util.Properties 'getProperty:(Ljava/lang/String;)Ljava/lang/String;' --sequence java.util.AbstractList 'size:()I' 'get:(I)Ljava/lang/Object;' --rename org.apache.lucene.search.highlight.SpanScorer=HighlighterSpanScorer --version 3.3 --module python/collections.py --module python/ICUNormalizer2Filter.py --module python/ICUFoldingFilter.py --module python/ICUTransformFilter.py --files 3 --build
        /usr/bin/python: No module named jcc
        make: *** [compile] Error 1

    Here is my Makefile configuration (the lines I uncommented):

        PREFIX_PYTHON=/usr
        ANT=ant
        PYTHON=$(PREFIX_PYTHON)/bin/python
        JCC=$(PYTHON) -m jcc --shared
        NUM_FILES=3

    Read the article

  • WPF 4.0 Custom panel won't show dynamically added controls in VS 2010 Designer

    - by Matt Ruwe
    I have a custom panel control that I'm trying to dynamically add controls within. When I run the application, the static and dynamically added controls show up perfectly, but the dynamic controls do not appear within the Visual Studio designer. Only the controls placed declaratively in the XAML appear. I'm currently adding the dynamic control in the CreateUIElementCollection override, but I've also tried this in the constructor without success.

        Public Class CustomPanel1
            Inherits Panel

            Public Sub New()
            End Sub

            Protected Overrides Function MeasureOverride(ByVal availableSize As System.Windows.Size) As System.Windows.Size
                Dim returnValue As New Size(0, 0)
                For Each child As UIElement In Children
                    child.Measure(availableSize)
                    returnValue.Width = Math.Max(returnValue.Width, child.DesiredSize.Width)
                    returnValue.Height = Math.Max(returnValue.Height, child.DesiredSize.Height)
                Next
                returnValue.Width = If(Double.IsPositiveInfinity(availableSize.Width), returnValue.Width, availableSize.Width)
                returnValue.Height = If(Double.IsPositiveInfinity(availableSize.Height), returnValue.Height, availableSize.Height)
                Return returnValue
            End Function

            Protected Overrides Function ArrangeOverride(ByVal finalSize As System.Windows.Size) As System.Windows.Size
                Dim currentHeight As Integer
                For Each child As UIElement In Children
                    child.Arrange(New Rect(0, currentHeight, child.DesiredSize.Width, child.DesiredSize.Height))
                    currentHeight += child.DesiredSize.Height
                Next
                Return finalSize
            End Function

            Protected Overrides Function CreateUIElementCollection(ByVal logicalParent As System.Windows.FrameworkElement) As System.Windows.Controls.UIElementCollection
                Dim returnValue As UIElementCollection = MyBase.CreateUIElementCollection(logicalParent)
                returnValue.Add(New TextBlock With {.Text = "Hello, World!"})
                Return returnValue
            End Function

            Protected Overrides Sub OnPropertyChanged(ByVal e As System.Windows.DependencyPropertyChangedEventArgs)
                MyBase.OnPropertyChanged(e)
            End Sub
        End Class

    And my usage of this custom panel:

        <Window x:Class="MainWindow"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                xmlns:local="clr-namespace:CustomPanel"
                Title="MainWindow" Height="364" Width="434">
            <local:CustomPanel1>
                <CheckBox />
                <RadioButton />
            </local:CustomPanel1>
        </Window>

    Read the article

  • Unit Test For NpgsqlCommand With Rhino Mocks

    - by J Pollack
    My unit test keeps getting the following error: "System.InvalidOperationException: The Connection is not open."

    The Test:

        [TestFixture]
        public class Test
        {
            [Test]
            public void Test1()
            {
                NpgsqlConnection connection = MockRepository.GenerateStub<NpgsqlConnection>();
                // Tried to fake the open connection
                connection.Stub(x => x.State).Return(ConnectionState.Open);
                connection.Stub(x => x.FullState).Return(ConnectionState.Open);

                DbQueries queries = new DbQueries(connection);
                bool procedure = queries.ExecutePreProcedure("201003");

                Assert.IsTrue(procedure);
            }
        }

    Code Under Test:

        using System.Data;
        using Npgsql;

        public class DbQueries
        {
            private readonly NpgsqlConnection _connection;

            public DbQueries(NpgsqlConnection connection)
            {
                _connection = connection;
            }

            public bool ExecutePreProcedure(string date)
            {
                var command = new NpgsqlCommand("name_of_procedure", _connection);
                command.CommandType = CommandType.StoredProcedure;
                NpgsqlParameter parameter = new NpgsqlParameter {DbType = DbType.String, Value = date};
                command.Parameters.Add(parameter);
                command.ExecuteScalar();
                return true;
            }
        }

    How would you test the code using Rhino Mocks 3.6? PS. NpgsqlConnection is a connection to a PostgreSQL server.
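
    One common way around this kind of error - offered here as a sketch under the assumption that restructuring DbQueries is acceptable, not as the question's own solution - is to depend on the ADO.NET interfaces instead of the concrete Npgsql types, since Rhino Mocks can only stub virtual members and interfaces:

        // Sketch: depend on IDbConnection/IDbCommand so the whole pipeline can be stubbed.
        using System.Data;

        public class DbQueries
        {
            private readonly IDbConnection _connection;

            public DbQueries(IDbConnection connection)
            {
                _connection = connection;
            }

            public bool ExecutePreProcedure(string date)
            {
                using (IDbCommand command = _connection.CreateCommand())
                {
                    command.CommandText = "name_of_procedure";
                    command.CommandType = CommandType.StoredProcedure;

                    IDbDataParameter parameter = command.CreateParameter();
                    parameter.DbType = DbType.String;
                    parameter.Value = date;
                    command.Parameters.Add(parameter);

                    command.ExecuteScalar();
                    return true;
                }
            }
        }

    The test can then use MockRepository.GenerateStub<IDbConnection>() (and a stubbed IDbCommand) instead of trying to fake an open NpgsqlConnection.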

    Read the article

  • Making flexible C# code in MVC2 for Stored Procedures

    - by cc0
    Thanks to Darin Dimitrov's suggestion I got a big step further in understanding good MVC code, but I'm having some problems making it flexible. I implemented Darin's suggested solution, and it works perfectly for single controllers. However, I'm having some trouble implementing it with some flexibility. What I'm looking for is this:
    - To be able to make dynamic column names in the JSON. Instead of using "Column1: 'value', ..." and "Column2: 'value', ..." inside the JSON, I'd like to use, for example, "id: 'value', ..." and "place: 'value' ..." for one stored procedure, and "animal" and "type" in another (inside the JSON format).
    - To be able to make the number of columns dynamic, depending on which stored procedure is called. Some stored procedures I'll want to read more than 2 rows from; is there a smart way of accomplishing that?
    - To be able to make numeric values (floats and integers) from the database appear inside the JSON without quotes, like this (name and age): { Column1: "John", Column2: 53 }
    I would be very grateful for any feedback and suggestions / code examples I can get here. Even imperfect ones.
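
    Since Darin's original solution isn't shown here, the following is only a sketch of one general approach: read whatever columns the stored procedure returns into dictionaries keyed by the real column names, and let MVC 2's JsonResult serialize them. The stored-procedure name and connection string are placeholders. Because the dictionary keys come from the reader, the JSON property names and the number of columns are dynamic, and numeric values keep their CLR types so they serialize without quotes.

        using System.Collections.Generic;
        using System.Data;
        using System.Data.SqlClient;
        using System.Web.Mvc;

        public class ReportController : Controller
        {
            public JsonResult Rows(string procedureName)
            {
                var rows = new List<Dictionary<string, object>>();

                using (var connection = new SqlConnection("...connection string..."))
                using (var command = new SqlCommand(procedureName, connection))
                {
                    command.CommandType = CommandType.StoredProcedure;
                    connection.Open();

                    using (SqlDataReader reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            var row = new Dictionary<string, object>();
                            for (int i = 0; i < reader.FieldCount; i++)
                            {
                                // Key = real column name; value keeps its type,
                                // so ints and floats serialize without quotes.
                                row[reader.GetName(i)] = reader.IsDBNull(i) ? null : reader.GetValue(i);
                            }
                            rows.Add(row);
                        }
                    }
                }

                return Json(rows, JsonRequestBehavior.AllowGet);
            }
        }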

    Read the article

  • NHibernate L2 Cache - fluent nHibernate configuration

    - by AWC
    I've managed to configure the L2 cache for Get\Load in FNH, but it's not working for queries configured using the ICriteria interface - it doesn't cache the results from these queries. Does anyone know why? The configurations are as follows:

    ICriteria:

        return unitOfWork
            .CurrentSession
            .CreateCriteria(typeof(Country))
            .SetCacheable(true);

    Entity Mapping:

        public sealed class CountryMap : ClassMap<Country>, IMap
        {
            public CountryMap()
            {
                Table("Countries");
                Not.LazyLoad();
                Cache.ReadWrite().IncludeAll();
                Id(x => x.Id);
                Map(x => x.TwoLetter);
                Map(x => x.ThreeLetter);
                Map(x => x.Name);
            }
        }

    And the session factory configuration for the database property:

        return () => MsSqlConfiguration.MsSql2005
            .ConnectionString(BuildConnectionString())
            .ShowSql()
            .Cache(c => c.UseQueryCache()
                         .QueryCacheFactory<StandardQueryCacheFactory>()
                         .ProviderClass(configuration.RepositoryCacheType)
                         .UseMinimalPuts())
            .FormatSql()
            .UseReflectionOptimizer();

    Cheers, AWC
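
    One thing worth checking - an assumption on my part, since the full configuration isn't shown - is that the second-level cache itself is switched on alongside the query cache: in NHibernate the query cache stores only identifiers, so cached query results are only useful when the second-level cache is also enabled. A sketch of the cache block with that setting added (assuming the Fluent NHibernate cache builder in use exposes UseSecondLevelCache):

        // Sketch only: adds UseSecondLevelCache() to the existing configuration.
        return () => MsSqlConfiguration.MsSql2005
            .ConnectionString(BuildConnectionString())
            .Cache(c => c.UseSecondLevelCache()   // needed for the query cache to return entities
                         .UseQueryCache()
                         .QueryCacheFactory<StandardQueryCacheFactory>()
                         .ProviderClass(configuration.RepositoryCacheType)
                         .UseMinimalPuts());

    The ICriteria side (SetCacheable(true)) already looks right, so the configuration is the usual suspect.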

    Read the article

  • Using FluentValidation with Castle Windsor and Entity Framework 4.0 (POCO) in MVC2

    - by Brian McCord
    This isn't a very simple question, but hopefully someone has run across it. I am trying to get the following things working together:
    - MVC2
    - FluentValidation
    - Entity Framework 4.0 (POCO)
    - Castle Windsor
    I've pretty much gotten everything working. I have Castle Windsor implemented and working, with the controllers being served up by the WindsorControllerFactory that is part of MVCContrib. I also have Castle serving up the FluentValidation validators as described in this article: http://www.jeremyskinner.co.uk/2010/02/22/using-fluentvalidation-with-an-ioc-container/ My problem comes in when I try to use Html.EditorForModel or EditorFor on a view. When I do that I get this error message:

        No component for supporting the service FluentValidation.IValidator`1[[System.Data.Entity.DynamicProxies.State_71C51A42554BA6C3CF05105DA05435AD209602C217FC4C34CA52ACEA2B06B99B, EntityFrameworkDynamicProxies-BrindleyInsurance.BusinessObjects, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null]] was found

    This is due to using POCO generation with Entity Framework 4.0. At runtime, the generated classes get wrapped with a dynamic proxy so tracking and lazy loading can happen. Apparently, when using EditorForModel or EditorFor, MVC asks Windsor to create a validator for the dynamic proxy type instead of the underlying real type. Does anyone know what I can do to solve this issue?
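
    A sketch of one possible direction, not a confirmed fix: unwrap the EF proxy type before asking the container for a validator. This assumes a Windsor-backed validator factory along the lines of the linked article, and that GetValidator(Type) is overridable in the FluentValidation build being used; ObjectContext.GetObjectType is EF 4.0's helper for mapping a dynamic proxy type back to the POCO type.

        using System;
        using System.Data.Objects;            // ObjectContext.GetObjectType
        using Castle.Windsor;
        using FluentValidation;

        public class WindsorValidatorFactory : ValidatorFactoryBase
        {
            private readonly IWindsorContainer _container;

            public WindsorValidatorFactory(IWindsorContainer container)
            {
                _container = container;
            }

            public override IValidator GetValidator(Type type)
            {
                // Map System.Data.Entity.DynamicProxies.State_ABC... back to the real POCO type.
                Type realType = ObjectContext.GetObjectType(type);
                return base.GetValidator(realType);
            }

            public override IValidator CreateInstance(Type validatorType)
            {
                return _container.Kernel.HasComponent(validatorType)
                    ? (IValidator)_container.Resolve(validatorType)
                    : null;
            }
        }

    With this in place, the request for IValidator`1[DynamicProxyType] becomes a request for IValidator`1[State], which Windsor does know about.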

    Read the article

  • should I use Entity Framework instead of raw ADO.NET

    - by user110182
    I am new to CSLA and Entity Framework. I am creating a new CSLA / Silverlight application that will replace a 12-year-old Win32 C++ system. The old system uses a custom DCOM business object library and uses ODBC to get to SQL Server. The new system will not immediately replace the old system -- they must coexist against the same database for years to come. At first I thought EF was the way to go since it is the latest and greatest. After making a small EF model and only 2 CSLA editable root objects (I will eventually have hundreds of objects, as my DB has 800+ tables) I am seriously questioning the use of EF. In the current system I often need to do fine-detail performance tuning of the queries, which I can do because I have 100% control of the generated SQL. But it seems that in EF so much happens behind the scenes that I lose that control. Articles like http://toomanylayers.blogspot.com/2009/01/entity-framework-and-linq-to-sql.html don't help my impression of EF. People seem to like EF because of LINQ to EF, but since my criteria are passed between client and server as a criteria object, it seems like I could build queries just as easily without LINQ. I understand that in WCF RIA there is query projection (or something like that) where I can do client-side LINQ which moves to the server before translation into actual SQL, so in that case I can see the benefit of EF, but not in CSLA. If I use raw ADO.NET, will I regret my decision 5 years from now? Has anyone else made this choice recently, and which way did you go?

    Read the article

  • cpptask ordering of static libraries in gcc command line

    - by AC
    How do I force cpptasks to move the static libraries to the end of the argument list issued to the compiler? Here is the clause I am using:

        <cpptasks:cc description="appname" subsystem="console" objdir="obj" outfile="dist/app_test">
            <compiler refid="testsslcc" />
            <linkerarg value="-L${libdir}" />
            <linkerarg value="-L/usr/local/devl/lib" />
            <linkerarg value="-Wl,-rpath,../lib" />
            <libset libs="unittest ${libs} dsg readline ncurses gcov" />
            <fileset dir="test/obj" includes="main.o" />
            <fileset dir="." includes="${TCFILES}" />
            <fileset dir="../lib" includes="libboost_thread.a libboost_date_time.a" />
        </cpptasks:cc>

    When this executes, libboost_thread.a and libboost_date_time.a are the first files in the argument list passed to the compiler:

        gcc -ggdb -Wl,-export-dynamic -Wshadow -Wno-format-y2k ../../lib/libboost_date_time.a ../../lib/libboost_thread.a x.cpp ...

    which causes a compiler error. By manually moving them to the end of the argument list, the application compiles without error:

        gcc -ggdb -Wl,-export-dynamic -Wshadow -Wno-format-y2k x.cpp ... ../../lib/libboost_date_time.a ../../lib/libboost_thread.a

    And yes, I have tried changing the order in the XML, and that of course didn't work. For now I am using an exec task to call gcc with the files in the correct order, but this of course is a hack.

    Read the article

  • MySQL performance - 100Mb ethernet vs 1Gb ethernet

    - by Rob Penridge
    Hi all, I've just started a new job and noticed that the analysts' computers are connected to the network at 100Mbps. The ODBC queries we run against the MySQL server can easily return 500MB+, and it seems that at times when the servers are under high load the DBAs kill low-priority jobs as they are taking too long to run. My question is this: how much of this server time is spent executing the request, and how much time is spent returning the data to the client? Could the query speeds be improved by upgrading the network connections to 1Gbps? (As a rough illustration, ignoring protocol overhead, transferring 500MB takes on the order of 40 seconds at 100Mbps but only around 4 seconds at 1Gbps.)
    (Updated for the why): The database in question was built to accommodate reporting needs and contains massive amounts of data. We usually work with subsets of this data at a granular level in external applications such as SAS or Excel, hence the reason for the large amounts of data being transmitted. The queries are not poorly structured - they are very simple and the appropriate joins/indexes etc. are being used. I've removed 'query' from the title of the post as I realised this question is more to do with general MySQL performance than query-related performance. I was kind of hoping that someone with a gigabit connection might be able to actually quantify some results for me here by running a query that returns a decent amount of data, then limiting their connection speed to 100Mb and rerunning the same query. Hopefully this could be done in an environment where loads are reasonably stable so as not to skew the results. If ethernet speed can improve the situation, I wanted some quantifiable results to help argue my case for upgrading the network connections. Thanks, Rob
    Read the article

  • MySql query optimization help

    - by rohitgu
    I have a few queries and am not able to figure out how to optimize them.

    QUERY 1:

        select * from t_twitter_tracking
        where classified is null and tweetType='ENGLISH'
        order by id limit 500;

    QUERY 2:

        Select count(*) as cnt,
               DATE_FORMAT(CONVERT_TZ(wrdTrk.createdOnGMTDate,'+00:00','+05:30'),'%Y-%m-%d') as dat
        from t_twitter_tracking wrdTrk
        where wrdTrk.word like ('dell')
          and CONVERT_TZ(wrdTrk.createdOnGMTDate,'+00:00','+05:30') between '2010-12-12 00:00:00' and '2010-12-26 00:00:00'
        group by dat;

    Both these queries run on the same table:

        CREATE TABLE `t_twitter_tracking` (
            `id` BIGINT(20) NOT NULL AUTO_INCREMENT,
            `word` VARCHAR(200) NOT NULL,
            `tweetId` BIGINT(100) NOT NULL,
            `twtText` VARCHAR(800) NULL DEFAULT NULL,
            `language` TEXT NULL,
            `links` TEXT NULL,
            `tweetType` VARCHAR(20) NULL DEFAULT NULL,
            `source` TEXT NULL,
            `sourceStripped` TEXT NULL,
            `isTruncated` VARCHAR(40) NULL DEFAULT NULL,
            `inReplyToStatusId` BIGINT(30) NULL DEFAULT NULL,
            `inReplyToUserId` INT(11) NULL DEFAULT NULL,
            `rtUsrProfilePicUrl` TEXT NULL,
            `isFavorited` VARCHAR(40) NULL DEFAULT NULL,
            `inReplyToScreenName` VARCHAR(40) NULL DEFAULT NULL,
            `latitude` BIGINT(100) NOT NULL,
            `longitude` BIGINT(100) NOT NULL,
            `retweetedStatus` VARCHAR(40) NULL DEFAULT NULL,
            `statusInReplyToStatusId` BIGINT(100) NOT NULL,
            `statusInReplyToUserId` BIGINT(100) NOT NULL,
            `statusFavorited` VARCHAR(40) NULL DEFAULT NULL,
            `statusInReplyToScreenName` TEXT NULL,
            `screenName` TEXT NULL,
            `profilePicUrl` TEXT NULL,
            `twitterId` BIGINT(100) NOT NULL,
            `name` TEXT NULL,
            `location` VARCHAR(100) NULL DEFAULT NULL,
            `bio` TEXT NULL,
            `url` TEXT NULL COLLATE 'latin1_swedish_ci',
            `utcOffset` INT(11) NULL DEFAULT NULL,
            `timeZone` VARCHAR(100) NULL DEFAULT NULL,
            `frenCnt` BIGINT(20) NULL DEFAULT '0',
            `createdAt` DATETIME NULL DEFAULT NULL,
            `createdOnGMT` VARCHAR(40) NULL DEFAULT NULL,
            `createdOnServerTime` DATETIME NULL DEFAULT NULL,
            `follCnt` BIGINT(20) NULL DEFAULT '0',
            `favCnt` BIGINT(20) NULL DEFAULT '0',
            `totStatusCnt` BIGINT(20) NULL DEFAULT NULL,
            `usrCrtDate` VARCHAR(200) NULL DEFAULT NULL,
            `humanSentiment` VARCHAR(30) NULL DEFAULT NULL,
            `replied` BIT(1) NULL DEFAULT NULL,
            `replyMsg` TEXT NULL,
            `classified` INT(32) NULL DEFAULT NULL,
            `createdOnGMTDate` DATETIME NULL DEFAULT NULL,
            `locationDetail` TEXT NULL,
            `geonameid` INT(11) NULL DEFAULT NULL,
            `country` VARCHAR(255) NULL DEFAULT NULL,
            `continent` CHAR(2) NULL DEFAULT NULL,
            `placeLongitude` FLOAT NULL DEFAULT NULL,
            `placeLatitude` FLOAT NULL DEFAULT NULL,
            PRIMARY KEY (`id`),
            INDEX `id` (`id`, `word`),
            INDEX `createdOnGMT_index` (`createdOnGMT`) USING BTREE,
            INDEX `word_index` (`word`) USING BTREE,
            INDEX `location_index` (`location`) USING BTREE,
            INDEX `classified_index` (`classified`) USING BTREE,
            INDEX `tweetType_index` (`tweetType`) USING BTREE,
            INDEX `getunclassified_index` (`classified`, `tweetType`) USING BTREE,
            INDEX `timeline_index` (`word`, `createdOnGMTDate`, `classified`) USING BTREE,
            INDEX `createdOnGMTDate_index` (`createdOnGMTDate`) USING BTREE,
            INDEX `locdetail_index` (`country`, `id`) USING BTREE,
            FULLTEXT INDEX `twtText_index` (`twtText`)
        ) COLLATE='utf8_general_ci' ENGINE=MyISAM ROW_FORMAT=DEFAULT AUTO_INCREMENT=12608048;

    The table has more than 10 million records. How can I optimize these queries?

    Read the article

  • How to return DropDownList selections dynamically in C#?

    - by salvationishere
    This is probably a simple question, but I am developing a web app in C# with DropDownLists. Currently it works for just one DropDownList. But now that I have modified the code so that the number of DropDownLists is dynamic, it gives me the error: "The name 'ddl' does not exist in the current context." The reason for this error is that there are multiple instances of 'ddl' - one per loop iteration. So how do I return more than one 'ddl' instead? What return type should this method have, and how do I return these values? The reason I need it dynamic is that I need to create one DropDownList for each column in whatever AdventureWorks table they select.

        private DropDownList CreateDropDownLists()
        {
            for (int counter = 0; counter < NumberOfControls; counter++)
            {
                DropDownList ddl = new DropDownList();
                SqlDataReader dr2 = ADONET_methods.DisplayTableColumns(targettable);
                ddl.ID = "DropDownListID" + (counter + 1).ToString();
                int NumControls = targettable.Length;
                DataTable dt = new DataTable();
                dt.Load(dr2);
                ddl.DataValueField = "COLUMN_NAME";
                ddl.DataTextField = "COLUMN_NAME";
                ddl.DataSource = dt;
                ddl.ID = "DropDownListID 1";
                ddl.SelectedIndexChanged += new EventHandler(ddlList_SelectedIndexChanged);
                ddl.DataBind();
                ddl.AutoPostBack = true;
                ddl.EnableViewState = true; // Preserves View State info on Postbacks
                //ddlList.Style["position"] = "absolute";
                //ddl.Style["top"] = 80 + "px";
                //ddl.Style["left"] = 0 + "px";
                dr2.Close();
            }
            return ddl;
        }
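
    A sketch of one way to answer the return-type question, keeping the question's own names (ADONET_methods.DisplayTableColumns, targettable, NumberOfControls, ddlList_SelectedIndexChanged are assumed from the original code): collect the controls in a List<DropDownList> inside the loop and return the list, so the caller can add each control to the page.

        // Sketch: return all created controls instead of a single 'ddl'.
        private List<DropDownList> CreateDropDownLists()
        {
            var lists = new List<DropDownList>();

            for (int counter = 0; counter < NumberOfControls; counter++)
            {
                var ddl = new DropDownList();
                ddl.ID = "DropDownListID" + (counter + 1);   // unique ID per control

                using (SqlDataReader dr2 = ADONET_methods.DisplayTableColumns(targettable))
                {
                    var dt = new DataTable();
                    dt.Load(dr2);
                    ddl.DataValueField = "COLUMN_NAME";
                    ddl.DataTextField = "COLUMN_NAME";
                    ddl.DataSource = dt;
                    ddl.DataBind();
                }

                ddl.SelectedIndexChanged += ddlList_SelectedIndexChanged;
                ddl.AutoPostBack = true;
                ddl.EnableViewState = true;

                lists.Add(ddl);
            }

            return lists;
        }

        // Usage: the caller (e.g. Page_Load) adds each control to a container on the page:
        // foreach (var ddl in CreateDropDownLists()) { somePanel.Controls.Add(ddl); }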

    Read the article

  • How to improve Java performance on Informix for Windows

    - by Michal Niklas
    I have a problem with the performance of Java UDR functions on Informix on Windows. On this server I already have some functions in C and SPL. I chose one function, wrote it in all 3 languages, and measured its performance on a test table. The function calculates a kind of checksum, so it does not use any DB libraries etc., only string and math operations. I observed performance on 30k records with SQL like:

        select function(txt) from _tmp_perf_test

    where I changed 'function' to function_c, function_spl or function_java. My performance tests showed that the C function is the fastest, the SPL function is about 5 times slower, and Java is 100 (one hundred!) times slower than C. I checked it a few times and the 1:100 ratio didn't improve. I changed the Java function to simply return the length of the string, but even this did not help, so it looks like there is a general problem with Java function invocation: there was no difference in time between the Java function that calculates the checksum and the Java function that returns the length of the string. I increased JVM_MAX_HEAP_SIZE to 128 and that did not help either. I use IBM Informix Dynamic Server Version 11.50.TC6DE. The same test on a Linux server (IBM Informix Dynamic Server Version 11.50.FC6) shows more "normal" results, i.e. Java is slower than C and SPL, but only 2 to 5 times. What can I do to improve Java performance on an Informix server on Windows? More info about Java on the servers:

        c:\Informix\extend\krakatoa\jre\bin>java -version
        java version "1.5.0"
        Java(TM) 2 Runtime Environment, Standard Edition (build pwi32dev-20081129a (SR9-0 ))
        IBM J9 VM (build 2.3, J2RE 1.5.0 IBM J9 2.3 Windows Server 2003 x86-32 j9vmwi3223-20081129 (JIT enabled)
        J9VM - 20081126_26240_lHdSMr
        JIT - 20081112_1511ifx1_r8
        GC - 200811_07)
        JCL - 20081129

        [root@informix11 bin]# ./java -version
        java version "1.5.0"
        Java(TM) 2 Runtime Environment, Standard Edition (build pxa64devifx-20071025 (SR6b))
        IBM J9 VM (build 2.3, J2RE 1.5.0 IBM J9 2.3 Linux amd64-64 j9vmxa6423-20071005 (JIT enabled)
        J9VM - 20071004_14218_LHdSMr
        JIT - 20070820_1846ifx1_r8
        GC - 200708_10)
        JCL - 20071025
    Read the article

< Previous Page | 185 186 187 188 189 190 191 192 193 194 195 196  | Next Page >