Search Results

Search found 25123 results on 1005 pages for 'domain model'.


  • To refund or not to refund this client?

    - by Mahalia Samuels
    I'd really appreciate your advice on an ongoing project. I presented my client with a proposal and design samples which he approved, and he paid in full instead of the 50% upfront deposit, as I'd given him a generous discount. He was then slow in furnishing me with some of the content, but once he did, he expected the website to be finished immediately, which was not possible. Because he needed it done urgently, we agreed to try to get it done about 10 working days after the content was provided, but the developer who was helping me let me down. The next week, I completed the website myself and uploaded it to the server on a Friday afternoon. He then called and texted me the following Sunday while I was at church to say it was not online (there was probably a problem with his browser). The next morning, I received an email from him demanding a full refund within two days because he couldn't see the website (even though it was live, and I tested it on multiple browsers, a different computer and my phone), and he called me shouting because he couldn't access it. Finally, when he was able to access it, he was unhappy with a certain detail regarding the slideshow, which I began fixing and had done by the next day. He then referred me to another website and said he wanted his site to look similar but not identical to it in terms of layout. He also now wanted to add more features which were not in the original design. I got a designer to work on a new design which I sent to him for review, to be completed by 15 October if approved, and he approved it last Thursday. He then called me yesterday to say that he wanted to change the design - he had only approved it out of impatience. He now wants the website to be more similar to the other website he referred me to, and he wants it done before the 15th! Then he tells me that other people have done websites for him in three days - websites he's complained to me about for lacking dimension because they were just premium themes, whereas we'd designed and coded from scratch. I'm thinking of finishing the website but refunding him in full (or at least the refundable 50%) less domain registration and other non-refundable amounts, just to avoid further escalation of this matter and having him call me next week to say he wants to change it again. These are the applicable terms and conditions as laid out in the agreement:

    Total amount due for this project is Amount A. Client shall pay Consultant a deposit of Amount B (50% of total amount due for project) in advance before any work commences on the Project. The balance is due within 7 working days of completion of project. Deposit is non-refundable.

    Should client opt to host elsewhere, applicable transferral fee of Amount C will apply.

    Estimated project completion time frame is 14 to 30 days from the date Client furnishes Consultant with Brief and all other required media and data, provided that Client has made payment to secure the project. Consultant will make every effort to meet agreed upon due dates. The Client should be aware that failure to submit required information or materials, or last minute changes and excessive changes, may cause subsequent delays. Client delays could result in significant delays in delivery of finished work.

    Major changes in client input or direction or brief will be charged at normal rates. Any work the Client wishes Consultant to create which is not specified in the attached Proposal will be considered an additional service. Client agrees to pay Consultant for any additional expenses or additional services not included in the attached quotation and proposal if requested by the Client.

    Web design credit in the name of the Consultant, and a link to Consultant's website, shall be placed on the footer of the final Website.

    Either party may terminate this Agreement by giving 7 days written notice to the other of such termination. In the event that Work is postponed or terminated at the request of the Client, Consultant shall have the right to bill pro rata at full rates for work completed through the date of that request, while reserving all rights under this Agreement. If additional payment is due, this shall be payable within seven days of the Client's written notification to stop work. In the event of termination, the Client shall also pay any expenses incurred by Consultant and the Consultant shall own all rights to the Work.

    Advice please?

    Read the article

  • Jersey 2 in GlassFish 4 - First Java EE 7 Implementation Now Integrated (TOTD #182)

    - by arungupta
    The JAX-RS 2.0 specification recently released its Early Draft 3. One of my earlier blogs explained the features as they were first introduced in the very first draft of the JAX-RS 2.0 specification. Last week was another milestone: the first Java EE 7 specification implementation was added to GlassFish 4 builds. Jakub blogged about the Jersey 2 integration in GlassFish 4 builds. Most of the basic functionality is working, but EJB, CDI, and Validation are still TBD. Here is a simple Tip Of The Day (TOTD) sample to get you started with that functionality.

    Create a Java EE 6-style Maven project:

        mvn archetype:generate -DarchetypeGroupId=org.codehaus.mojo.archetypes -DarchetypeArtifactId=webapp-javaee6 -DgroupId=example -DartifactId=jersey2-helloworld -DarchetypeVersion=1.5 -DinteractiveMode=false

    Note, this is still a Java EE 6 archetype, at least for now. Open the project in NetBeans IDE as it makes it much easier to edit/add the files. Add the following <repositories> section:

        <repositories>
            <repository>
                <id>snapshot-repository.java.net</id>
                <name>Java.net Snapshot Repository for Maven</name>
                <url>https://maven.java.net/content/repositories/snapshots/</url>
                <layout>default</layout>
            </repository>
        </repositories>

    Add the following <dependency> elements:

        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.10</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>javax.ws.rs</groupId>
            <artifactId>javax.ws.rs-api</artifactId>
            <version>2.0-m09</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.glassfish.jersey.core</groupId>
            <artifactId>jersey-client</artifactId>
            <version>2.0-m05</version>
            <scope>test</scope>
        </dependency>

    The complete list of Maven coordinates for Jersey 2 is available here. An up-to-date status of Jersey 2 can always be obtained from here. Here is a simple resource class:

        @Path("movies")
        public class MoviesResource {
            @GET
            @Path("list")
            public List<Movie> getMovies() {
                List<Movie> movies = new ArrayList<Movie>();
                movies.add(new Movie("Million Dollar Baby", "Hillary Swank"));
                movies.add(new Movie("Toy Story", "Buzz Light Year"));
                movies.add(new Movie("Hunger Games", "Jennifer Lawrence"));
                return movies;
            }
        }

    This resource publishes a list of movies and is accessible at the "movies/list" path with HTTP GET. The project is using the standard JAX-RS APIs. Of course, you need the trivial "Movie" and the "Application" class as well. They are available in the downloadable project anyway. Build the project:

        mvn package

    And deploy to GlassFish 4.0 promoted build 43 (download, unzip, and start as "bin/asadmin start-domain") with:

        asadmin deploy --force=true target/jersey2-helloworld.war

    Add a simple test case by right-clicking on the MoviesResource class, selecting "Tools", "Create Tests", and taking the defaults. Replace the method "testGetMovies" with:

        @Test
        public void testGetMovies() {
            System.out.println("getMovies");
            Client client = ClientFactory.newClient();
            List<Movie> movieList = client.target("http://localhost:8080/jersey2-helloworld/webresources/movies/list")
                    .request()
                    .get(new GenericType<List<Movie>>() {});
            assertEquals(3, movieList.size());
        }

    This test uses the newly defined JAX-RS 2 client APIs to access the RESTful resource.
    Run the test with "mvn test" and see output like:

        -------------------------------------------------------
         T E S T S
        -------------------------------------------------------
        Running example.MoviesResourceTest
        getMovies
        Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.561 sec

        Results :

        Tests run: 1, Failures: 0, Errors: 0, Skipped: 0

    GlassFish 4 contains Jersey 2 as the JAX-RS implementation. If you want to use Jersey 1.1 functionality, then Martin's blog provides more details on that. All JAX-RS 1.x functionality will be supported using standard APIs anyway; this workaround is only required if Jersey 1.x-specific functionality needs to be accessed. The complete source code explained in this project can be downloaded from here. Here are some pointers to follow:

    • JAX-RS 2 Specification Early Draft 3
    • Latest status on the specification (jax-rs-spec.java.net)
    • Latest JAX-RS 2.0 Javadocs
    • Latest status on Jersey (Reference Implementation of JAX-RS 2 - jersey.java.net)
    • Latest Jersey API Javadocs
    • Latest GlassFish 4.0 Promoted Build
    • Follow @gf_jersey

    Provide feedback on Jersey 2 to [email protected] and on the JAX-RS specification to [email protected].
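
    The post doesn't show the trivial "Movie" and "Application" classes it mentions. Purely as a convenience, here is a hedged sketch of what they might look like (two small classes, each in its own file); the field names are guesses inferred from the constructor calls above, and the class name "MyApplication" is hypothetical:

        // A plausible Movie POJO; JAXB needs the no-arg constructor for serialization.
        import javax.xml.bind.annotation.XmlRootElement;

        @XmlRootElement
        public class Movie {
            private String name;
            private String actor;

            public Movie() {}
            public Movie(String name, String actor) {
                this.name = name;
                this.actor = actor;
            }
            public String getName() { return name; }
            public void setName(String name) { this.name = name; }
            public String getActor() { return actor; }
            public void setActor(String actor) { this.actor = actor; }
        }

        // The JAX-RS Application subclass; "webresources" matches the URL in the test.
        import javax.ws.rs.ApplicationPath;
        import javax.ws.rs.core.Application;

        @ApplicationPath("webresources")
        public class MyApplication extends Application {
            // Left empty: resource classes are discovered automatically when
            // getClasses() returns an empty set.
        }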

    Read the article

  • Oracle Enterprise Manager Cloud Control 12c Release 2 (12.1.0.2) is Available Now!

    - by Anand Akela
    Oracle today announced the availability of Oracle Enterprise Manager Cloud Control 12c Release 2 (12.1.0.2). It is now available for download on OTN on ALL platforms. This is the first major release since the launch of Enterprise Manager 12c in October of 2011, and the first time an Enterprise Manager release is available on all platforms simultaneously. This is primarily a stability release which incorporates fixes for many of the issues reported by early adopters, along with their feedback. In addition, this release contains many new features and enhancements in areas across the board.

    New Capabilities and Features

    Enhanced management capabilities for enterprise private clouds: Introduces new capabilities to allow customers to build and manage a Java Platform-as-a-Service (PaaS) cloud based on Oracle WebLogic Server. The new capabilities include guided setup of the PaaS cloud, self-service provisioning, automatic scale out, and metering and chargeback.

    Enhanced lifecycle management capabilities for Oracle WebLogic Server environments: Combines in-context management of multiple domains, patching, and configuration file synchronization.

    Integrated hardware-software management for Oracle Exalogic Elastic Cloud through features such as rack schematics visualization and integrated monitoring of all hardware and software components.

    The latest management capabilities for business-critical applications include:

    Business Application Management: A new Business Application (BA) target type and dashboard with flexible definitions provides a logical view of an application's business transactions, end-user experiences and the cloud infrastructure the monitored application is running on.

    Enhanced User Experience Reporting: Oracle Real User Experience Insight has been enhanced to provide reporting capabilities on client-side issues for applications running in the cloud, and has been more tightly coupled with Oracle Business Transaction Management to help ensure that real-time user experience and transaction tracing data is provided to users in context.

    Several key improvements address ease of administration, reporting and extensibility for massively scalable cloud environments, including dynamic groups, self-updateable monitoring templates, bulk operations against many events, etc.

    New and Revised Plug-ins: Several plug-ins have been updated as a part of this release, resulting in either new versions or revisions. Revised plug-ins contain only bug fixes, while new plug-ins incorporate both bug fixes and new functionality.
    Plug-In Name                                              Version
    Enterprise Manager for Oracle Database                    12.1.0.2 (revision)
    Enterprise Manager for Oracle Fusion Middleware           12.1.0.3 (new)
    Enterprise Manager for Chargeback and Capacity Planning   12.1.0.3 (new)
    Enterprise Manager for Oracle Fusion Applications         12.1.0.3 (new)
    Enterprise Manager for Oracle Virtualization              12.1.0.3 (new)
    Enterprise Manager for Oracle Exadata                     12.1.0.3 (new)
    Enterprise Manager for Oracle Cloud                       12.1.0.4 (new)

    Installation and Upgrade: All major platforms have been released simultaneously (Linux 32/64-bit, Solaris (SPARC), Solaris x86-64, IBM AIX 64-bit, and Windows x86-64). Enterprise Manager 12.1.0.2 is a complete release that includes both the EM OMS and Agent at version 12.1.0.2. Installation options available with EM 12.1.0.2: users can do a fresh install or an upgrade from EM 10.2.0.5, 11.1, or 12.1.0.1 (Bundle Patch 1 not mandatory). Upgrading to EM 12.1.0.2 from EM 12.1.0.1 is not a patch application (as it was for Bundle Patch 1) but is achieved through a 1-system upgrade.

    Documentation: The Oracle Enterprise Manager Cloud Control Introduction document provides a broad overview of capabilities and highlights "What's New" in EM 12.1.0.2. All updated Oracle Enterprise Manager documentation, including the Upgrade Guide, can be found on OTN.

    Please feel free to ask questions related to the new Oracle Enterprise Manager Cloud Control 12c Release 2 (12.1.0.2) at the Oracle Enterprise Manager Forum. You can also share your feedback on Twitter using the hashtag #em12c or on Facebook.

    Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • Understanding the 'High Performance' meaning in Extreme Transaction Processing

    - by kyap
    Despite my previous blog entries on SOA/BPM and Identity Management, the domain I'm most passionate about is definitely Extreme Transaction Processing, commonly called XTP. I came across XTP back in 2007 while I was still an FMW Product Manager in EMEA. At that time Oracle acquired a company called Tangosol, which owned a unique product called Coherence that we renamed Oracle Coherence. Beyond this innovative renaming of the product, to be honest, I didn't know much about it, except that it was a "distributed in-memory cache for Extreme Transaction Processing"... not very helpful still. In general, when people don't fully understand a technology or a concept, they tend to find shortcuts, correct or not, to justify their lack of understanding... and of course I was part of this category of individuals. And the shortcut was "Oracle Coherence Cache helps to improve Performance". Excellent marketing slogan... but still not very meaningful. By chance I was able to get away quickly from that group in July 2007 at Thames Valley Park (UK), after I attended one of the most interesting workshops of my 10-year career at Oracle, delivered by Brian Oliver. The biggest mistake I made was to assume that performance improvement with Coherence was related to response time. Which could be considered legitimate at the time because, after all, caches help reduce latency on cached data access, and hence reduce the response time. But like all caches, you need to define caching and expiration policies, think about the cache-miss strategy, and most of the time you have to partially re-write your application in order to work with the cache. As a result, the expected benefit vanishes... so, not very useful then? The key mistake was my perception, or obsession, about how performance improvement should be driven, but I strongly believe this is still a common problem for most developers. In fact we all know that the performance of a system is generally presented by its Capacity (or Throughput), with the two important dimensions Speed (response time) and Volume (load):

        Capacity (TPS) = Volume (T) / Speed (S)

    To increase the Capacity, we can either reduce the Speed (in terms of response time) or increase the Volume. However, we tend to focus only on reducing the Speed dimension, perhaps because it is more concrete and tangible to measure, and nicer to present to our management because there's a direct impact on the end-user experience. On the other hand, we assume the Volume can be addressed by the underlying hardware or software stack, so if we need more capacity (scale out), we just add more hardware or software. Unfortunately, reality proves that IT is never as ideal as we assume... The challenge with the Speed-improvement approach is that it is generally difficult and costly to make things that are already fast... faster. And adding Coherence will not necessarily help either. Even if we manage to do so, the Capacity cannot increase forever because... the Speed can be influenced by the Volume. In all traditional systems, the increase of Volume (transactions) will also increase the Speed (response time) at some point. The reason is simple: most of the time the application logic was not designed to scale. As an example, if you have a while-loop in your application, it is natural to expect that parsing 200 entries will require double the execution time of parsing 100 entries.
    If you need to "speed up" the execution, you can only upgrade your hardware (scale up) with faster CPUs and/or network to reduce latency. That is technically limited and economically inefficient. And this is exactly where XTP and Coherence kick in. The primary objective of XTP is designing applications which can scale out to increase Volume, by applying coding techniques that keep the execution time as constant as possible, independent of the amount of runtime data being manipulated. It is actually not just about having an application run as fast as possible, but about having a much more predictable system, with constant response time and linear scaling, so we can easily increase throughput by adding more hardware in parallel. It is in general combined with the Low Latency Programming model, where we try to optimize network usage as much as possible, either from the programmatic angle (fewer network hops to complete a task) and/or from the hardware angle (faster network equipment). In this picture, Oracle Coherence can be considered a software-level XTP enabler, via the Distributed Cache, because it can guarantee:

    • Constant data-object access time, independent of the number of objects and the Coherence cluster size
    • Data-object distribution by affinity, for in-memory data grouping
    • In-place data processing, for parallel execution

    To summarize, Oracle Coherence is indeed useful to improve your application performance, just not in the way we commonly think. It's not about the Speed itself, but about the overall Capacity under extreme load while keeping consistent Speed. In the future I will keep adding new blog entries around this topic, with some sample code and experience sharing that I've captured over the last few years. In the meanwhile, if you want to know more about how Oracle Coherence works, I strongly suggest you start by checking how our worldwide customers are using Oracle Coherence, then start playing with the product through our tutorial. Have fun!
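
    To make "in-place data processing" concrete, here is a hedged sketch of a Coherence EntryProcessor; the cache name, class name and interest-rate example are hypothetical, but invokeAll is the standard Coherence pattern for shipping logic to the data instead of looping over it client-side:

        // A minimal sketch: apply an update to every cached entry in place.
        // The processor runs on the storage member that owns each entry, so the
        // members work on their own partitions in parallel, and execution time per
        // member stays roughly constant as the cluster (and the data) grows.
        import java.io.Serializable;
        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;
        import com.tangosol.util.InvocableMap;
        import com.tangosol.util.filter.AlwaysFilter;
        import com.tangosol.util.processor.AbstractProcessor;

        public class ApplyInterest extends AbstractProcessor implements Serializable {
            private final double rate;

            public ApplyInterest(double rate) { this.rate = rate; }

            public Object process(InvocableMap.Entry entry) {
                double balance = (Double) entry.getValue();
                entry.setValue(balance * (1 + rate));   // mutate in place, no fetch
                return null;
            }

            public static void main(String[] args) {
                NamedCache accounts = CacheFactory.getCache("accounts");
                // Contrast with the while-loop above: no entries cross the network.
                accounts.invokeAll(new AlwaysFilter(), new ApplyInterest(0.02));
            }
        }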

    Read the article

  • High Availability for IaaS, PaaS and SaaS in the Cloud

    - by BuckWoody
    Outages, natural disasters and unforeseen events have proved that even in a distributed architecture, you need to plan for High Availability (HA). In this entry I'll explain a few considerations for HA within Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). In a separate post I'll talk more about Disaster Recovery (DR), since each paradigm has a different way to handle that.

    Planning for HA in IaaS

    IaaS involves Virtual Machines - so in effect, an HA strategy here takes on many of the same characteristics as it would on-premises. The primary difference is that the vendor controls the hardware, so you need to verify what they do for things like local redundancy and so on from the hardware perspective. As far as what you can control and plan for, the primary factors fall into three areas: multiple instances, geographical dispersion and task-switching. In almost every cloud vendor I've studied, to ensure your application will be protected by any level of HA, you need to have at least two of the instances (VMs) running. This makes sense, but you might assume that the vendor just takes care of that for you - they don't. If a single VM goes down (for whatever reason) then access to it is lost. Depending on multiple factors, you might be able to recover the data, but you should assume that you can't. You should keep a sync to another location (perhaps the vendor's storage system in another geographic datacenter or to a local location) to ensure you can continue to serve your clients. You'll also need to host the same VMs in another geographical location. Everything from a vendor outage to a network path problem could prevent your users from reaching the system, so you need multiple locations to handle this. This means that you'll have to figure out how to manage state between the geos. If the system goes down in the middle of a transaction, you need to figure out what part of the process the system was in, and then re-create or transfer that state to the second set of systems. If you didn't write the software yourself, this is non-trivial. You'll also need a manual or automatic process to detect the failure and re-route the traffic to your secondary location. You could flip a DNS entry (if your application can tolerate that) or invoke another process to alias the first system to the second, such as load-balancing and so on. There are many options, but all of them involve coding the state into the application layer. If you've simply moved a stateful application to VMs, you may not be able to easily implement an HA solution.

    Planning for HA in PaaS

    Implementing HA in PaaS is a bit simpler, since it's built on the concept of stateless application deployment. Once again, you need at least two copies of each element in the solution (web roles, worker roles, etc.) to remain available in a single datacenter. Also, you need to deploy the application again in a separate geo, but the advantage here is that you could work out a "shared storage" model such that state is auto-balanced across the world. In fact, you don't have to maintain a "DR" site; the alternate location can be live and serving clients, and only take on extra load if the other site is not available. In Windows Azure, you can use the Traffic Manager service to route the requests as a type of auto-balancer. Even with these benefits, I recommend a second backup of storage in another geographic location.
    Storage is inexpensive, and that second copy can be used not only for HA but also for DR.

    Planning for HA in SaaS

    In Software-as-a-Service (such as Office 365, or Hadoop in Windows Azure) you have far less control over the HA solution, although you still have the responsibility to ensure you have it. Since each SaaS is different, check with the vendor on the solution for HA - and make sure you understand what they do and what you are responsible for. They may have no HA for that solution, or pin it to a particular geo, or perhaps they have massive HA built in with automatic load balancing (which is often the case).

    All of these options (with the exception of SaaS) involve higher costs for the design. Do not sacrifice reliability for cost - that will always cost you more in the end. Build in the redundancy and HA at the very outset of the project - if you try to tack it on later in the process, the business will push back and potentially not implement HA.

    References: http://www.bing.com/search?q=windows+azure+High+Availability (each type of implementation is different, so I'm routing you to a search on the topic - look for the "Patterns and Practices" results for the area in Azure you're interested in)
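
    As a concrete illustration of the "detect the failure and re-route" step above, here is a hedged sketch of a client-side failover check; the endpoints are hypothetical, it uses the modern java.net.http client rather than anything Azure-specific, and a real deployment would usually do this with DNS or a traffic-routing service rather than hard-coded URLs:

        // A minimal sketch: probe a primary endpoint, fall back to a secondary geo.
        import java.net.URI;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;
        import java.time.Duration;

        public class FailoverProbe {
            private static final String[] ENDPOINTS = {
                "https://primary.example.com/health",    // primary geo (hypothetical)
                "https://secondary.example.com/health"   // secondary geo (hypothetical)
            };

            public static void main(String[] args) {
                HttpClient client = HttpClient.newBuilder()
                        .connectTimeout(Duration.ofSeconds(3))
                        .build();
                for (String endpoint : ENDPOINTS) {
                    try {
                        HttpRequest request = HttpRequest.newBuilder(URI.create(endpoint))
                                .timeout(Duration.ofSeconds(5))
                                .build();
                        HttpResponse<String> response =
                                client.send(request, HttpResponse.BodyHandlers.ofString());
                        if (response.statusCode() == 200) {
                            System.out.println("Routing traffic to " + endpoint);
                            return;   // first healthy endpoint wins
                        }
                    } catch (Exception e) {
                        System.err.println(endpoint + " unreachable: " + e.getMessage());
                    }
                }
                System.err.println("All endpoints down - page the operations team");
            }
        }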

    Read the article

  • JavaDay Taipei 2014 Trip Report

    - by reza_rahman
    JavaDay Taipei 2014 was held at the Taipei International Convention Center on August 1st. Organized by Oracle University, it is one of the largest Java developer events in Taiwan. This was another successful year for JavaDay Taipei, with a fully sold out venue packed with youthful, energetic developers (this was my second time at the event and I have already been invited to speak again next year!). In addition to Oracle speakers like me, Steve Chin and Naveen Asrani, the event also featured a bevy of local speakers, including Taipei Java community leaders. Topics included Java SE, Java EE, JavaFX, cloud and Big Data. It was my pleasure and privilege to present one of the opening keynotes for the event. I presented my session on Java EE titled "JavaEE.Next(): Java EE 7, 8, and Beyond". I covered the changes in Java EE 7 as well as what's coming in Java EE 8. I demoed the Cargo Tracker Java EE BluePrints. I also briefly talked about Adopt-a-JSR for Java EE 8. The slides for the keynote are available here (click here to download and view the actual PDF). In the afternoon I did my JavaScript + Java EE 7 talk titled "Using JavaScript/HTML5 Rich Clients with Java EE 7". This talk is basically about aligning EE 7 with the emerging JavaScript ecosystem (specifically AngularJS). The talk was completely packed. The slide deck for the talk is here: "JavaScript/HTML5 Rich Clients Using Java EE 7" from Reza Rahman. The demo application code is posted on GitHub. The code should be a helpful resource if this development model is something that interests you. Do let me know if you need help with it, but the instructions should be fairly self-explanatory. I am delivering this material at JavaOne 2014 as a two-hour tutorial. This should give me a little more bandwidth to dig a little deeper, especially on the JavaScript end. I finished off JavaDay Taipei with my talk titled "Using NoSQL with ~JPA, EclipseLink and Java EE" (this was the last session of the conference). The talk covers an interesting gap that there is surprisingly little material on out there. The talk has three parts -- a birds-eye view of the NoSQL landscape, how to use NoSQL via a JPA-centric facade using EclipseLink NoSQL, Hibernate OGM, DataNucleus, Kundera, Easy-Cassandra, etc., and how to use NoSQL native APIs in Java EE via CDI. The slides for the talk are here: "Using NoSQL with ~JPA, EclipseLink and Java EE" from Reza Rahman. The JPA-based demo is available here, while the CDI-based demo is available here. Both demos use MongoDB as the data store. Do let me know if you need help getting the demos up and running. After the event the Oracle University folks hosted a reception in the evening which was very well attended by organizers, speakers and local Java community leaders. I am extremely saddened by the fact that this otherwise excellent trip was scarred by terrible tragedy. After the conference I joined a few folks for a hike on Maokong Mountain on Saturday. The group included friends in the Taiwanese Java community, including Ian and Robbie Cheng. Without warning, fatal tragedy struck on a remote part of the trail. Despite best efforts by us, the excellent Taiwanese Emergency Rescue Team and world-class Taiwanese physicians, we were unable to save our friend Robbie Cheng's life. Robbie was just thirty-four years old and is survived by his younger brother, mother and father.
    Being the father of a young child myself, I can only imagine the deep sorrow that this senseless loss unleashes. Robbie was a key member of the Taiwanese Java community and a Java Evangelist at Sun at one point. Ironically, the only picture I was able to take of the trail was mere moments before tragedy. I thought I should place him in that picture in profoundly respectful memoriam: Perhaps there is some solace in the fact that there is something inherently honorable in living a bright life, dying young and meeting one's end on a beautiful remote mountain trail few venture to behold, let alone attempt to ascend, in a long and tired lifetime. Perhaps I'd even say it's a fate I would not entirely regret facing if it were my own. With that thought in mind it seems appropriate to quote some lyrics from the song "Runes to My Memory" by legendary Swedish heavy metal band Amon Amarth, idealizing a fallen Viking warrior cut down in his prime: "Here I lie on wet sand / I will not make it home / I clench my sword in my hand / Say farewell to those I love / When I am dead / Lay me in a mound / Place my weapons by my side / For the journey to Hall up high / When I am dead / Lay me in a mound / Raise a stone for all to see / Runes carved to my memory" I offer my deepest condolences to Robbie's family and hope my next trip to Taiwan ends on a less somber note.

    Read the article

  • Performance triage

    - by Dave
    Folks often ask me how to approach a suspected performance issue. My personal strategy is informed by the fact that I work on concurrency issues. (When you have a hammer everything looks like a nail, but I'll try to keep this general.) A good starting point is to ask yourself if the observed performance matches your expectations. Expectations might be derived from known system performance limits, prototypes, and other software or environments that are comparable to your particular system-under-test. Some simple comparisons and microbenchmarks can be useful at this stage. It's also useful to write some very simple programs to validate some of the reported or expected system limits. Can that disk controller really tolerate and sustain 500 reads per second? To reduce the number of confounding factors it's better to try to answer that question with a very simple targeted program. And finally, nothing beats having familiarity with the technologies that underlie your particular layer. On the topic of confounding factors: as our technology stacks become deeper and less transparent, we often find our own technology working against us in some unexpected way to choke performance, rather than simply running into some fundamental system limit. A good example is the warm-up time needed by just-in-time compilers in Java Virtual Machines. I won't delve too far into that particular hole except to say that it's rare to find good benchmarks and methodology for Java code. Another example is power management on x86. Power management is great, but it can take a while for the CPUs to throttle up from low(er) frequencies to full throttle. And while I love "turbo" mode, it makes benchmarking applications with multiple threads a chore, as you have to remember to turn it off and then back on, otherwise short single-threaded runs may look abnormally fast compared to runs with higher thread counts. In general, for performance characterization I disable turbo mode and fix the power governor at the "performance" state. Another source of complexity is the scheduler, which I've discussed in prior blog entries. Let's say I have a running application and I want to better understand its behavior and performance. We'll presume it's warmed up, is under load, and is in an execution mode representative of what we think the norm would be. It should be in steady-state, if a steady-state mode even exists. On Solaris the very first thing I'll do is take a set of "pstack" samples. Pstack briefly stops the process and walks each of the stacks, reporting symbolic information (if available) for each frame. For Java, pstack has been augmented to understand Java frames, and even report inlining. A few pstack samples can provide powerful insight into what's actually going on inside the program. You'll be able to see calling patterns, which threads are blocked on what system calls or synchronization constructs, memory allocation, etc. If your code is CPU-bound then you'll get a good sense of where the cycles are being spent. (I should caution that normal C/C++ inlining can diffuse an otherwise "hot" method into other methods. This is a rare instance where pstack sampling might not immediately point to the key problem.) At this point you'll need to reconcile what you're seeing with pstack against your mental model of what you think the program should be doing. They're often rather different. And generally, if there's a key performance issue, you'll spot it with a moderate number of samples.
    I'll also use OS-level observability tools to look for bottlenecks where threads contend for locks, other situations where threads are blocked, and the distribution of threads over the system. On Solaris some good tools are mpstat and, to a lesser degree, vmstat. Try running "mpstat -a 5" in one window while the application program runs concurrently. One key measure is the voluntary context switch rate, "vctx" or "csw", which reflects threads descheduling themselves. It's also good to look at the user, system, and idle CPU percentages. This can give a broad but useful understanding of whether your threads are mostly parked or mostly running. For instance, if your program makes heavy use of malloc/free, then it might be the case that you're contending on the central malloc lock in the default allocator. In that case you'd see malloc calling lock in the stack traces, observe a high csw/vctx rate as threads block for the malloc lock, and your "usr" time would be less than expected. Solaris dtrace is a wonderful and invaluable performance tool as well, but in a sense you have to frame and articulate a meaningful and specific question to get a useful answer, so I tend not to use it for first-order screening of problems. It's also most effective for OS and software-level performance issues, as opposed to HW-level issues. For that reason I recommend mpstat & pstack as the first step in performance triage. If some other OS-level issue is evident then it's good to switch to dtrace to drill more deeply into the problem. Only after I've ruled out OS-level issues do I switch to using hardware performance counters to look for architectural impediments.
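
    On the earlier point about validating a reported system limit ("can that disk controller really sustain 500 reads per second?") with a very simple targeted program, here is a hedged sketch; the file path, read size and duration are hypothetical, and you'd want a file much larger than RAM so the page cache doesn't answer for the disk:

        // A minimal probe: random 4 KB reads for 10 seconds, then report reads/sec.
        import java.io.RandomAccessFile;
        import java.util.concurrent.ThreadLocalRandom;

        public class DiskReadProbe {
            public static void main(String[] args) throws Exception {
                try (RandomAccessFile file = new RandomAccessFile("/data/testfile", "r")) {
                    byte[] buf = new byte[4096];
                    long length = file.length();
                    long deadline = System.nanoTime() + 10_000_000_000L; // 10 seconds
                    long reads = 0;
                    while (System.nanoTime() < deadline) {
                        // Random offsets defeat sequential read-ahead optimizations.
                        long offset = ThreadLocalRandom.current().nextLong(length - buf.length);
                        file.seek(offset);
                        file.readFully(buf);
                        reads++;
                    }
                    System.out.printf("%d reads in 10 s = %.1f reads/sec%n", reads, reads / 10.0);
                }
            }
        }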

    Read the article

  • Impressions and Reactions from Alliance 2012

    - by user739873
    Alliance 2012 has come to a conclusion. What strikes me about every Alliance conference is the amazing amount of collaboration and cooperation I see across higher education in the sharing of best practices around the entire Oracle PeopleSoft software suite, not just the student information system (Oracle's PeopleSoft Campus Solutions). In addition to the vibrant U.S. organization, it's gratifying to see the growth in international attendance again this year, with an EMEA HEUG organizing to complement the existing groups in the Netherlands, South Africa, and the U.K. Their first meeting is planned for London in October, and I suspect they'll be surprised at the amount of interest and attendance. In my discussions with higher education IT and functional leadership at Alliance there were a number of instances where concern was expressed about Oracle's commitment to higher education as an industry, primarily because of a lack of perceived innovation in the applications that Oracle develops for this market. Here I think perception and reality are far apart, and I'd like to explain why I believe this to be true. First let me start with what I think drives this perception. Predominantly it's in two areas. The first area is the user interface, both for students and faculty that interact with the system as "customers", and for those employees of the institution (faculty, staff, and sometimes students as well) that use the system in some kind of administrative role. Because the UI hasn't changed all that much from the PeopleSoft days, individuals perceive this as a dead product with little innovation, and conclude that Oracle isn't investing. The second area is the integration of the higher education suite of applications (PeopleSoft Campus Solutions) with the rest of the Oracle software assets. Whether grown organically or acquired, there is an impressive array of middleware and other software products that could be leveraged much more significantly by the higher education applications than is currently the case. This is also perceived as lack of investment. Let me address these two points. First, the UI. More is being done here than ever before, and the PAG and other groups where this was discussed at Alliance 2012 were more numerous than I've seen at any past meeting. Whether it's Oracle development leveraging web services or some extremely early but very promising work leveraging the recent Endeca acquisition (see some cool examples here), there are a lot of resources aimed at this issue. There are also some amazing prototypes being developed by our UX (user experience) team that will eventually make their way into the higher education applications realm - they had an impressive setup at Alliance. Hopefully many of you that attended found this group. If not, the senior leader for that team, Jeremy Ashley, will be a significant contributor of content to our summer Industry Strategy Council meeting in Washington in June. In the area of integration with other elements of the Oracle stack, this is also an area of focus for the company and my team. We're making this a priority especially in the areas of identity management and security, leveraging WebCenter more effectively for content, imaging, and mobility, and driving towards the ultimate objective of WebLogic Suite as our platform for SOA, links to learning management systems (SAIP), and content. There is also much work around business intelligence centering on OBI applications.
    But at the end of the day we get enormous value from the HEUG (higher education user group) and the various subgroups formed as a part of this community that help us align and prioritize our investments, whether it's around better integration with other Oracle products or integration with partner offerings. It's one of the healthiest, most mutually beneficial relationships between customers and an education IT vendor that exists on the globe. And I can't avoid mentioning that it's this kind of relationship between higher education and the corporate IT community that can truly address the problems of efficiency and effectiveness, institutional excellence (which starts with IT) and student success. It's not (in my opinion) going to be solved through community source - cost and complexity only increase in that model, and in the end higher education doesn't get to focus on its core competencies: educating, developing, and researching. While I agree with some of what Michael A. McRobbie wrote in his EDUCAUSE Review article (Information Technology: A View from Both Sides of the President's Desk), I take strong issue with his assertion that "the IT marketplace is just the opposite of long-term stability...." Sure, there has been healthy, creative destruction in the past 2-3 decades, but this has had the effect of, in the aggregate, benefiting education with greater efficiency, more innovation and increased stability as larger, more financially secure firms acquire and develop integrated solutions.
    Cole

    Read the article

  • #OOW 2012: IaaS, Private Cloud, Multitenant Database, and X3H2M2

    - by Eric Bezille
    The title of this post is a summary of the four announcements made by Larry Ellison today during the opening session of Oracle OpenWorld 2012... To learn what's behind X3H2M2, you will have to wait a little, as I will go in order, beginning with the IaaS - Infrastructure as a Service - announcement.

    Oracle IaaS goes Public... and Private...

    Starting in 2004 with Fusion development, Oracle Cloud was launched last year to provide not only SaaS applications, based on standard development, but also the underlying PaaS required to build the specifics, and the required interconnections between applications, in and outside of the Cloud. Still, to cover the end-to-end Cloud Services spectrum, we had to provide Infrastructure as a Service, leveraging our servers, storage, OS, and virtualization technologies, all "Engineered Together". This Cloud Infrastructure was already available for our customers to rapidly build their own Private Cloud, either on SPARC/Solaris or x86/Linux... The second announcement made today brings that proposition a big step further: for cautious customers (like banks, or sensitive industries) who would like to benefit from the Cloud value of "as a Service" but don't want their data out in the Cloud, we propose that they operate the same systems that provide our Public Cloud Infrastructure - Exadata, Exalogic & SuperCluster - behind their firewall, in a Private Cloud model.

    Oracle 12c Multitenant Database

    This is also a major announcement made today about what's coming with Oracle Database 12c: the ability to consolidate multiple databases with no additional cost, especially in terms of the memory needed on the server node, which is often THE factor limiting consolidation. The principle can be compared to Solaris Zones: you have a Database Container, which owns the memory and database background processes, and Pluggable Databases inside this Database Container. This particular feature is a strong compelling event to evaluate Oracle Database 12c rapidly once it is available, as it is a major step forward into true database consolidation with multitenancy on a shared (optimized) infrastructure.

    X3H2M2, enabling the new Exadata X3 in-Memory Database

    Here we are: X3H2M2 stands for X3 (the new version of Exadata, also announced today) Heuristic Hierarchical Mass Memory, providing the capability to keep most if not all of the data in the memory cache hierarchy. Of course, this is the major software enhancement of the new X3 Exadata machine, but as it is software, our current customers will be able to benefit from it on their existing systems by upgrading to the new release. But that's not the only thing we did with X3; at the same time we upgraded everything: the CPUs, adding more cores per server node (16 vs. 12, with the arrival of Intel E5 / Sandy Bridge), the memory, now 512 GB per node, and the new Flash Fire cards, bringing up to 22 TB of Flash cache. All of this - 4 TB of RAM plus 22 TB of Flash - is used cleverly, not only for reads but also for writes, by the X3H2M2 algorithm, making a very big difference compared to a traditional storage flash extension. But what do those extra performances bring you on an already very efficient system? Double the performance compared to the fastest storage array on the market today (including flash), while dividing your storage price by 10 at the same time... Something to consider closely these days...
    Especially as we also announced the availability of a new Exadata X3-2 eighth rack: a good starting point. As you have seen, a major opening for this year again, with true innovation. But that was not the only thing we saw today: before Larry's talk, Fujitsu introduced in more depth the upcoming new SPARC processor that they are co-developing with us. And Andrew Mendelsohn - Senior Vice President, Database Server Technologies - came on stage to explain that the next step after I/O optimization for the Database with Exadata was to accelerate the Database at the execution level by bringing functions into the SPARC processor silicon. All in all, to process more and more data... the big theme of the day, and of the Oracle User Group conferences that were also happening today, where I had the opportunity to attend some interesting sessions on practical use cases of Big Data: one on finance and fraud profiling, and the other on practical deployment of Oracle Exalytics for data analytics. In conclusion, one picture to try to size Oracle OpenWorld... and you can understand why, with such rich content... and this is only the first day!
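
    To make the multitenant idea above concrete, here is a hedged sketch of how an application would see a Pluggable Database: it simply connects to the PDB's service name, unaware of the container it lives in. The host, service name and credentials are hypothetical placeholders, and the Oracle JDBC driver is assumed to be on the classpath:

        // A minimal sketch: connect to a PDB through its service, like any database.
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class PdbConnectionSketch {
            public static void main(String[] args) throws Exception {
                // Each PDB registers its own service with the listener; applications
                // connect to that service exactly as they would to a standalone database.
                String url = "jdbc:oracle:thin:@//dbhost.example.com:1521/sales_pdb";
                try (Connection conn = DriverManager.getConnection(url, "app_user", "app_password");
                     Statement stmt = conn.createStatement();
                     ResultSet rs = stmt.executeQuery(
                             "SELECT sys_context('USERENV', 'CON_NAME') FROM dual")) {
                    while (rs.next()) {
                        System.out.println("Connected to container: " + rs.getString(1));
                    }
                }
            }
        }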

    Read the article

  • Business Case for investing time developing Stubs and BizUnit Tests

    - by charlie.mott
    I was recently in a position where I had to justify why effort should be spent developing stubbed integration tests for BizTalk solutions. These tests are usually developed using the BizUnit framework. I assumed that most seasoned BizTalk developers would consider this best practice. Even though Microsoft suggests the use of BizUnit on MSDN, I've not found a single site listing the justifications for investing time writing stubs and BizUnit tests.

    Stubs

    Stubs should be developed to isolate your development team from external dependencies. This is described by Michael Stephenson here. Failing to do this can result in the following problems:

    • In contract-first scenarios, the external system interface will have been defined. But the interface may not have been set up, or even developed yet, for the BizTalk developers to work with.
    • By the time you open the target location to see the data BizTalk has sent, it may have been swept away.
    • If you are relying on the UI of the target system to see the data BizTalk has sent, what do you do if it fails to arrive? It may take time for the data to be processed or it may be scheduled to be processed later.
    • Learning how to use the source\target systems and investigating where things go wrong in these systems will slow down the BizTalk development effort.
    • By the time the data is visible in a UI it may have undergone further transformations.
    • In larger development teams working together, do you all use the same source and target instances? How do you know which data was created by whose tests? How do you know which event log error messages are whose? Another developer may have "cleaned up" your data.
    • It is harder to write BizUnit tests that clean up the data\logs after each test run.
    • What if your B2B partners' source or target system cannot support the sort of testing you want to do? They may not even have a development or test instance that you can work with. Their single test instance may be used by the SIT\UAT teams.
    • There may be licensing costs for setting up instances of the external system.

    The stubs I like to use are generic stubs that can accept\return any message type. Usually I need to create one per protocol. They should be driven by BizUnit steps to validate the data received and select a response message (or error response). Once built, they can be re-used for many integration tests and from project to project. I'm not saying that developers should never test against a real instance. Every so often, you still need to connect to real developer or test instances of the source and target endpoints\services. The interface developers may ask you to send them some data to see if everything still works. Or you might want some messages sent to BizTalk to get confidence that everything still works beyond BizTalk.

    Tests

    Automated "stubbed integration tests" are usually built using the BizUnit framework. These facilitate testing of the entire integration process from source stub to target stub, ensuring that all of the BizTalk components are configured together correctly to meet all the requirements. More fine-grained unit testing of individual BizTalk components is still encouraged, but BizUnit provides by far the easiest way to test some component types (e.g. orchestrations). Using BizUnit with the Behaviour Driven Development approach described by Mike Stephenson delivers the following benefits (source: http://biztalkbddsample.codeplex.com - Video 1):
    • Requirements can be easily defined using Given/When/Then
    • Requirements are close to the code, so they are easier to manage as features and scenarios
    • Requirements are defined in domain language
    • The feature files can be used as part of the documentation
    • The documentation is accurate to the build of code and can be published with a release
    • The scenarios are effective at documenting the behaviour without being excessive
    • The scenarios are maintained with the code
    • There's an abstraction between the intention and implementation of tests, making them easier to understand
    • The requirements drive the testing

    These same tests can also be used to drive load testing, as described here.

    If you don't do this...

    If you don't follow the above "stubbed integration tests" approach, the developer will need to manually trigger the tests. This has the following risks:

    • Developers are unlikely to check all the scenarios, and all the expected conditions, each time.
    • After the developer leaves, these manual test steps may be lost. What test scenarios are there? What test messages did they use for each scenario?
    • There is no mechanism to prove adequate test coverage.

    A test team may attempt to automate integration test scenarios in a test environment through the triggering of tests from a source system UI. If this is a replacement for BizUnit tests, then this carries the following risks:

    • It moves the tests downstream, so problems will be found later in the process.
    • Testers may not check all the expected conditions within the BizTalk infrastructure, such as event logs, suspended messages, etc.
    • These automated tests may also get in the way of manual tests run on these environments.
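
    BizUnit itself is a .NET framework, so purely as an illustration of the generic-stub pattern described above (accept any message, record it for validation, return a response selected by the test), here is a hedged sketch in Java using the JDK's built-in HTTP server; the class name, port and canned response are hypothetical:

        // A minimal generic HTTP stub: accepts any payload, records it so the test
        // can validate what the system under test actually sent, and replies with
        // whatever canned response the current test scenario has configured.
        import com.sun.net.httpserver.HttpServer;
        import java.io.OutputStream;
        import java.net.InetSocketAddress;
        import java.nio.charset.StandardCharsets;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;
        import java.util.concurrent.atomic.AtomicReference;

        public class GenericStub {
            private final BlockingQueue<String> received = new LinkedBlockingQueue<>();
            private final AtomicReference<String> cannedResponse = new AtomicReference<>("<ack/>");
            private HttpServer server;

            public void start(int port) throws Exception {
                server = HttpServer.create(new InetSocketAddress(port), 0);
                server.createContext("/", exchange -> {
                    String body = new String(exchange.getRequestBody().readAllBytes(),
                                             StandardCharsets.UTF_8);
                    received.add(body);                       // record for validation
                    byte[] reply = cannedResponse.get().getBytes(StandardCharsets.UTF_8);
                    exchange.sendResponseHeaders(200, reply.length);
                    try (OutputStream os = exchange.getResponseBody()) {
                        os.write(reply);
                    }
                });
                server.start();
            }

            public void respondWith(String response) { cannedResponse.set(response); }
            public String takeReceived() throws InterruptedException { return received.take(); }
            public void stop() { server.stop(0); }
        }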

    Read the article

  • CS, SE, HCI, or Information Science? Recommendations for further education for a former performing arts manager seeking a career in IT [on hold]

    - by Baek Seungjoo
    IT specialists there :) Thank you very much for your collective efforts here; I have gotten huge help reading your professional comments and advice on each question I have searched so far! This time, I would like to ask for your practical advice or recommendations on what I am struggling with at this moment. I am currently seeking higher education for my career transition from performing arts manager and director to "IT software and/or service development and management specialist". However, as this field is quite new to me, and there are lots of different work positions, I have no idea which grad major I should pursue in order to get qualified. Of course I know this question could sound weird, as it is kind of a personal choice. But given my lack of understanding of how IT software companies work in general, your practical, experience-based advice will be a great help to me, having spent more than two months on self-research on the net. OK. Before my questions, here are my plan and history, which I think are quite different from those of people currently in the IT industry...

    1) Target: Firstly, transition into IT service or product companies and get experience. Eventually, pursue IT entrepreneurship in combination with my arts and cultural production and business expertise.

    2) Background: Career as a performing arts director and manager in theatre-based opera and musicals. Arts education in youth. BA in Literature and Chinese Studies (Arts & Humanities). MA in Cultural & Creative Industries (Arts & Humanities) - dissertation focused on digital prosumption and the lived experience of the prosumer (a qualitative study of agents in the digital world).

    3) Personally: Huge interest in IT hardware and software and their trends. Skills to build, repair, and tune PCs - of course this is no more than a personal hobby, but it shows my interest in this field.

    4) Problem: I keep encountering the question "So, what do you think you can contribute practically in this position?" This question gets me turned down every time in job interviews, so I decided on more education in the relevant area. Here are my questions.

    1) In terms of work positions in IT software companies, I wonder if the relationship of "artist" to "arts manager or director" is comparable to that of "developer" to "product manager". (Of course, this stereotypical artist/arts-manager division is an oversimplification because the domains overlap to some extent, and are blurring at least in my field, and the contexts differ - but just speaking loosely.) Normally, artists come with special arts education; they live in their own world of artistic inspiration and creation, and they feel alive in practice and on stage. Meanwhile, from the point of view of staging and managing productions, the role of the arts manager is critical as well. Our role concerns how the production appeals to the audience in an effective way, how to make a profit and build sustainable future management through that, how to set up future strategy in consideration of external conditions such as political and social circumstances, audience trends and levels, and other production trends from ongoing and historical perspectives, and how and what voice the production gives to society from political, economic, and humanitarian stances. So we need keen eyes on the economic, political, and societal environment, have to understand human beings and their desires, must know how to make presentations and attract investors, must have sense in managing and fighting over limited financial resources, must know how to extend networking, and so on.
    It is common for the two agents to create productions in collaboration (normally not in that ideal way, but in conflict and fights :) ). So we need to know each other's expertise to some extent, for a better production. What are the work positions in IT software industries equivalent to the role of the "arts manager" in performing arts? From my view - considering that developers come with special education in the world of computer science, software engineering, or others (sometimes self-education), express themselves with the art of coding in computer languages on the black screen, and put their sort of artistic production online for an audience - I guess there might be someone who collaborates with developers in creating, managing, and launching IT services or products.

    2) Which education among CS, SE, HCI, and Information Science is needed for those seeking such a work position? Especially for a person like me. (At this moment, Information Science offers the highest possibility of getting in, since I lack Calculus and Math in my undergrad education. But please let me know irrespective of this concern; I think there are ways to back it up if CS or SE education is needed in my case.)

    3) Which field, between Information Science and HCI, is the more practical background for job hunting? And which of them is in more demand in the job market? As I checked, HCI is closer to CS than IS in its focus of study.

    Thank you very much for your patience reading such a long inquiry, and I appreciate your efforts in advance. Have a nice day in this beautiful summer.

    Read the article

  • Implementing a Custom Coherence PartitionAssignmentStrategy

    - by jpurdy
    A recent A-Team engagement required the development of a custom PartitionAssignmentStrategy (PAS). By way of background, a PAS is an implementation of a Java interface that controls how a Coherence partitioned cache service assigns partitions (primary and backup copies) across the available set of storage-enabled members. While seemingly straightforward, this is actually a very difficult problem to solve. Traditionally, Coherence used a distributed algorithm spread across the cache servers (and as of Coherence 3.7, this is still the default implementation). With the introduction of the PAS interface, the model of operation was changed so that the logic runs solely in the cache service senior member. Obviously, this makes the development of a custom PAS vastly less complex, and in practice it does not introduce a significant single point of failure/bottleneck. Note that Coherence ships with a default PAS implementation, but it is not used by default. Further, custom PAS implementations are uncommon (this engagement was the first custom implementation that we know of). The particular implementation mentioned above also faced challenges related to managing multiple backup copies, but that won't be discussed here. A few challenges arose during design and implementation:

    • Naive algorithms had an unreasonable upper bound of computational cost.
    • There was significant complexity associated with configurations where the member count varied significantly between physical machines.
    • Most of the complexity of a PAS is related to rebalancing, not initial assignment (which is usually fairly simple).

    A custom PAS may need to solve several problems simultaneously, such as:

    • Ensuring that each member has a similar number of primary and backup partitions (e.g. each member has the same number of primary and backup partitions)
    • Ensuring that each member carries similar responsibility (e.g. the most heavily loaded member has no more than one partition more than the least loaded)
    • Ensuring that each partition is on the same member as a corresponding local resource (e.g. for applications that use partitioning across message queues, to ensure that each partition is collocated with its corresponding message queue)
    • Ensuring that a given member holds no more than a given number of partitions (e.g. no member has more than 10 partitions)
    • Ensuring that backups are placed far enough away from the primaries (e.g. on a different physical machine or a different blade enclosure)
    • Achieving the above goals while ensuring that partition movement is minimized

    These objectives can be even more complicated when the topology of the cluster is irregular. For example, if multiple cluster members may exist on each physical machine, then clearly the possibility exists that at certain points (e.g. following a member failure), the number of members on each machine may vary, in certain cases significantly so. Consider the case where there are three physical machines, with 3, 3 and 9 members respectively. This introduces complexity since the backups for the 9 members on the largest machine must be spread across the other 6 members (to ensure placement on different physical machines), preventing an even distribution. For any given problem like this, there are usually reasonable compromises available, but the key point is that objectives may conflict under extreme (but not at all unlikely) circumstances.
The most obvious general purpose partition assignment algorithm (possibly the only general purpose one) is to define a scoring function for a given mapping of partitions to members, and then apply that function to each possible permutation, selecting the most optimal permutation. This would result in N! (factorial) evaluations of the scoring function. This is clearly impractical for all but the smallest values of N (e.g. a partition count in the single digits). It's difficult to prove that more efficient general purpose algorithms don't exist, but the key takeaway from this is that algorithms will tend to either have exorbitant worst case performance or may fail to find optimal solutions (or both) -- it is very important to be able to show that worst case performance is acceptable. This quickly leads to the conclusion that the problem must be further constrained, perhaps by limiting functionality or by using domain-specific optimizations. Unfortunately, it can be very difficult to design these more focused algorithms. In the specific case mentioned, we constrained the solution space to very small clusters (in terms of machine count) with small partition counts and supported exactly two backup copies, and accepted the fact that partition movement could potentially be significant (preferring to solve that issue through brute force). We then used the out-of-the-box PAS implementation as a fallback, delegating to it for configurations that were not supported by our algorithm. Our experience was that the PAS interface is quite usable, but there are intrinsic challenges to designing PAS implementations that should be very carefully evaluated before committing to that approach.
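To make the cost concrete, here is a toy sketch of the brute-force idea described above. It is plain Java, not the Coherence PAS API, and the single-objective scoring function (load spread) is a hypothetical stand-in for the many conflicting objectives listed earlier. Even this simplified version, which enumerates every assignment of partitions to members, blows up exponentially:

    import java.util.Arrays;

    // Toy brute-force assignment: enumerate every mapping of partitions to
    // members, score each one, and keep the best. Impractical beyond tiny
    // inputs -- with M members and P partitions there are M^P candidates.
    public class BruteForceAssignment {
        static int[] best;
        static double bestScore = Double.MAX_VALUE;

        // Hypothetical score: spread between most- and least-loaded member.
        static double score(int[] assignment, int members) {
            int[] load = new int[members];
            for (int owner : assignment) load[owner]++;
            int max = 0, min = Integer.MAX_VALUE;
            for (int l : load) { max = Math.max(max, l); min = Math.min(min, l); }
            return max - min;
        }

        // Recursively try every owner for each partition in turn.
        static void search(int[] assignment, int partition, int members) {
            if (partition == assignment.length) {
                double s = score(assignment, members);
                if (s < bestScore) { bestScore = s; best = assignment.clone(); }
                return;
            }
            for (int m = 0; m < members; m++) {
                assignment[partition] = m;
                search(assignment, partition + 1, members);
            }
        }

        public static void main(String[] args) {
            int partitions = 8, members = 3;  // already 3^8 = 6561 evaluations
            search(new int[partitions], 0, members);
            System.out.println("best score " + bestScore + ": " + Arrays.toString(best));
        }
    }

A real strategy would also have to score partition movement and backup placement, which is exactly why the constrained, domain-specific approach described above was chosen instead.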

    Read the article

  • Cloud Deployment Models

    - by B R Clouse
    As the cloud paradigm grows in depth and breadth, more readers are approaching the topic for the first time, or from a new perspective. This blog is a basic review of cloud deployment models, to help orient newcomers and neophytes. Most cloud deployments today are either private or public. It is also possible to connect a private cloud and a public cloud to form a hybrid cloud. A private cloud is for the exclusive use of an organization. Enterprises, universities and government agencies throughout the world are using private clouds. Some have designed, built and now manage their private clouds. Others use a private cloud that was built by and is now managed by a provider, hosted either onsite or at the provider's datacenter. Because private clouds are for exclusive use, they are usually the option chosen by organizations with concerns about data security and guaranteed performance. Public clouds are open to anyone with an Internet connection. Because they require no capital investment from their users, they are particularly attractive to companies with limited resources in less regulated environments and for temporary workloads such as development and test environments. Public clouds offer a range of products, from end-user software packages to more basic services such as databases or operating environments. Public clouds may also offer cloud services such as disaster recovery for a private cloud, or the ability to "cloudburst" a temporary workload spike from a private cloud to a public cloud. These are examples of a hybrid cloud, which is most feasible when the private and public clouds are built with similar technologies. Usually people think of a public cloud in terms of a user role, e.g., "Which public cloud should I consider using?" But someone needs to own and manage that public cloud. The company that owns and operates a public cloud is known as a public cloud provider. Oracle Database Cloud Service, Amazon RDS, database.com and Savvis Symphony Database are examples of public cloud database services. When evaluating deployment models, be aware that you can use any or all of the available options. Some workloads may be best suited for a private cloud, some for a public or hybrid cloud. And you might deploy multiple private clouds in your organization. If you are going to combine multiple clouds, then you want to make sure that each cloud is based on a consistent technology portfolio and architecture. This simplifies management and gives you the greatest flexibility in moving resources and workloads among your different clouds. Oracle's portfolio of cloud products and services enables both deployment models, and Oracle can manage either model.
Universities, government agencies and companies in all types of business everywhere in the world are using clouds built with the Oracle portfolio. By employing a consistent portfolio, these customers are able to run all of their workloads – from test and development to the most mission-critical – in a consistent manner: One Enterprise Cloud, powered by Oracle.

    Read the article

  • Concurrency Utilities for Java EE Early Draft (JSR 236)

    - by arungupta
    Concurrency Utilities for Java EE is being worked on as JSR 236 and has released an Early Draft. It provides concurrency capabilities to Java EE application components without compromising container integrity. Simple (common) and advanced concurrency patterns are easily supported without sacrificing usability. Using Java SE concurrency utilities such as the java.util.concurrent API, java.lang.Thread and java.util.Timer in a Java EE application component such as an EJB or Servlet is problematic, since the container and server have no knowledge of these resources. JSR 236 enables concurrency largely by extending the Concurrency Utilities API developed under JSR-166. This also allows consistency between the Java SE and Java EE concurrency programming models. There are four main programming interfaces available:
- ManagedExecutorService
- ManagedScheduledExecutorService
- ContextService
- ManagedThreadFactory
ManagedExecutorService is a managed version of java.util.concurrent.ExecutorService. The implementations of this interface are provided by the container and accessible using a JNDI reference:

<resource-env-ref>
  <resource-env-ref-name>concurrent/BatchExecutor</resource-env-ref-name>
  <resource-env-ref-type>javax.enterprise.concurrent.ManagedExecutorService</resource-env-ref-type>
</resource-env-ref>

and available as:

@Resource(name="concurrent/BatchExecutor")
ManagedExecutorService executor;

It's recommended to bind the JNDI references in the java:comp/env/concurrent subcontext. The asynchronous tasks that need to be executed must implement the java.lang.Runnable or java.util.concurrent.Callable interface:

public class MyTask implements Runnable {
    public void run() {
        // business logic goes here
    }
}

OR

public class MyTask2 implements Callable<Date> {
    public Date call() {
        // business logic goes here
        return new Date();
    }
}

The task is then submitted to the executor using one of the submit methods, which return a Future instance. The Future represents the result of the task and can also be used to check if the task is complete or to wait for its completion.

Future<String> future = executor.submit(new MyTask(), "done");
. . .
String result = future.get();

Another way to submit tasks is:

class MyTask implements Callable<Long> { . . . }
class MyTask2 implements Callable<Date> { . . . }
ArrayList<Callable> tasks = new ArrayList<>();
tasks.add(new MyTask());
tasks.add(new MyTask2());
List<Future<Object>> result = executor.invokeAll(tasks);

The ManagedExecutorService may be configured with different properties, such as:
- Hung Task Threshold: time in milliseconds that a task can execute before it is considered hung
- Pool Info: Core Size (number of threads to keep alive), Maximum Size (maximum number of threads allowed in the pool), Keep Alive (time to allow threads to remain idle when the number of threads exceeds Core Size), Work Queue Capacity (number of tasks that can be stored in the inbound buffer)
- Thread Use: whether the application intends to run short- or long-running tasks; accordingly, pooled or daemon threads are picked
ManagedScheduledExecutorService adds delay and periodic task running capabilities to ManagedExecutorService.
The implementations of this interface are provided by the container and accessible using a JNDI reference:

<resource-env-ref>
  <resource-env-ref-name>concurrent/timedExecutor</resource-env-ref-name>
  <resource-env-ref-type>javax.enterprise.concurrent.ManagedScheduledExecutorService</resource-env-ref-type>
</resource-env-ref>

and available as:

@Resource(name="concurrent/timedExecutor")
ManagedScheduledExecutorService executor;

The tasks are then submitted using the submit, invokeXXX or scheduleXXX methods.

ScheduledFuture<?> future = executor.schedule(new MyTask(), 5, TimeUnit.SECONDS);

This will create and execute a one-shot action that becomes enabled after a 5-second delay. More control is possible using one of the newly added methods:

class MyTaskListener implements ManagedTaskListener {
    public void taskStarting(...) { . . . }
    public void taskSubmitted(...) { . . . }
    public void taskDone(...) { . . . }
    public void taskAborted(...) { . . . }
}
ScheduledFuture<?> future = executor.schedule(new MyTask(), 5, TimeUnit.SECONDS, new MyTaskListener());

Here, ManagedTaskListener is used to monitor the state of a task's Future. ManagedThreadFactory provides a method for creating threads for execution in a managed environment. A simple usage is:

@Resource(name="concurrent/myThreadFactory")
ManagedThreadFactory factory;
. . .
Thread thread = factory.newThread(new Runnable() { . . . });

concurrent/myThreadFactory is a JNDI resource. There is a lot of interesting content in the Early Draft; download it and read it yourself. The implementation will be made available soon and will also be integrated into GlassFish 4. Some references for further exploration:
- Javadoc
- Early Draft Specification
- concurrency-ee-spec.java.net
- [email protected]
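To give a feel for the API end to end, here is a minimal sketch of a servlet using the draft's ManagedExecutorService. The servlet class, the task body and the "/report" URL are hypothetical; the JNDI name matches the resource-env-ref shown earlier, and only methods inherited from java.util.concurrent.ExecutorService are called:

    import java.io.IOException;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutionException;
    import java.util.concurrent.Future;
    import javax.annotation.Resource;
    import javax.enterprise.concurrent.ManagedExecutorService;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Hypothetical servlet: the work runs on a container-managed thread, so
    // the container keeps its knowledge of (and control over) the resource.
    @WebServlet("/report")
    public class ReportServlet extends HttpServlet {

        @Resource(name = "concurrent/BatchExecutor")  // matches the resource-env-ref above
        private ManagedExecutorService executor;

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            Future<String> future = executor.submit(new Callable<String>() {
                public String call() {
                    return "report built on a managed thread";  // business logic goes here
                }
            });
            try {
                resp.getWriter().println(future.get());  // block until the task completes
            } catch (InterruptedException | ExecutionException e) {
                throw new IOException(e);
            }
        }
    }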

    Read the article

  • With Its MySQL Database-as-a-Service CERN Empowers Scientists

    - by Bertrand Matthelié
    The European Organization for Nuclear Research (CERN) is one of the world’s largest and most respected centers for scientific research. Founded in 1954 and located near Geneva on the Franco-Swiss border, CERN was one of Europe’s first joint ventures. Today, it has 20 member states. The organization uses the world’s largest and most complex scientific instruments to study fundamental particles and the origin of the universe.
Challenges
- Better support the scientists associated with a CERN research program who selected MySQL as their database.
- Empower users, enabling them to be as self-reliant as possible.
- Minimize complexity and costs for the CERN IT department to support the growing number of MySQL deployments.
Solution
- Delivered a MySQL Database-as-a-Service offering to the CERN employees and the scientists associated with the organization.
- Allowed researchers selecting MySQL for their project to get access to a database instance hosted by the CERN IT department, either from the start or once their application has become critical.
- Implemented the service using Oracle’s server virtualization software, Oracle VM, for increased flexibility and reduced costs.
- Empowered users with a self-service approach, providing them with tools to manage MySQL themselves while handling backups and other basic database administration tasks for them.
- Enabled scientists to rely on MySQL with increased reliability, security and manageability while reducing complexity and minimizing costs.
"The Cloud model has allowed us to deliver a self-service platform to our MySQL users, empowering them while minimizing costs for CERN." Tony Cass, Database Services Group Leader, IT department, CERN.

    Read the article

  • Configuring MySQL Cluster Data Nodes

    - by Mat Keep
    In my previous blog post, I discussed the enhanced performance and scalability delivered by extensions to the multi-threaded data nodes in MySQL Cluster 7.2. In this post, I’ll share best practices on the configuration of data nodes to achieve optimum performance on the latest generations of multi-core, multi-thread CPU designs.
Configuring the Data Nodes
The configuration of data node threads can be managed in two ways via the config.ini file:
- Simply set MaxNoOfExecutionThreads to the appropriate number of threads to be run in the data node, based on the number of threads presented by the processors used in the host or VM.
- Use the new ThreadConfig variable that enables users to configure both the number of each thread type to use and also which CPUs to bind them to.
The flexible configuration afforded by the multi-threaded data node enhancements means that it is possible to optimise data nodes to use anything from a single CPU/thread up to a 48 CPU/thread server. Co-locating the MySQL Server with a single data node can fully utilize servers with 64 – 80 CPU/threads. It is also possible to co-locate multiple data nodes per server, but this is now only required for very large servers with 4+ CPU sockets and dense multi-core processors.
24 Threads and Beyond!
An example of how to make best use of a 24 CPU/thread server box is to configure the following (a sketch of the corresponding ThreadConfig setting appears at the end of this article):
- 8 ldm threads
- 4 tc threads
- 3 recv threads
- 3 send threads
- 1 rep thread for asynchronous replication
Each of those threads should be bound to a CPU. It is possible to bind the main thread (schema management domain) and the IO threads to the same CPU in most installations. In the configuration above, we have bound threads to 20 different CPUs. We should also protect these 20 CPUs from interrupts by using the IRQBALANCE_BANNED_CPUS configuration variable in /etc/sysconfig/irqbalance and setting it to 0x0FFFFF. The reason for doing this is that MySQL Cluster generates a lot of interrupt and OS kernel processing, so it is recommended to separate that activity across CPUs to ensure conflicts with the MySQL Cluster threads are eliminated. When booting a Linux kernel it is also possible to provide an option isolcpus=0-19 in grub.conf. The result is that the Linux scheduler won't use these CPUs for any task; only by using CPU affinity syscalls can a process be made to run on those CPUs. By using this approach, together with binding MySQL Cluster threads to specific CPUs and banning those CPUs from IRQ processing, a very stable performance environment is created for a MySQL Cluster data node. On a 32 CPU/Thread server:
- Increase the number of ldm threads to 12
- Increase tc threads to 6
- Provide 2 more CPUs for the OS and interrupts
- The number of send and receive threads should, in most cases, still be sufficient
On a 40 CPU/Thread server, increase ldm threads to 16, tc threads to 8 and increment send and receive threads to 4.
On a 48 CPU/Thread server it is possible to optimize further by using:
- 12 tc threads
- 2 more CPUs for the OS and interrupts
- Avoiding the use of the IO threads and the main thread on the same CPU
- Adding 1 more receive thread
Summary
As both this and the previous post seek to demonstrate, the multi-threaded data node extensions not only serve to increase performance of MySQL Cluster, they also enable users to achieve significantly improved levels of utilization from current and future generations of massively multi-core, multi-thread processor designs. A big thanks to Mikael Ronstrom, Senior MySQL Architect at Oracle, for his work in developing these enhancements and best practices. You can download MySQL Cluster 7.2 today and try out all of these enhancements. The Getting Started guides are an invaluable aid to quickly building a Proof of Concept. Don’t forget to check out the MySQL Cluster 7.2 New Features whitepaper to discover everything that is new in the latest GA release.
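As a rough illustration of the 24 CPU/thread layout above, the ThreadConfig variable might look something like the fragment below. Treat it as a sketch only: the CPU numbers are arbitrary, and the exact thread-type names and binding syntax should be checked against the MySQL Cluster documentation for your release.

    [ndbd default]
    # Hypothetical 24 CPU/thread host: 8 ldm, 4 tc, 3 send, 3 recv and 1 rep
    # thread, each bound to its own CPU; the main and io threads share CPU 19.
    ThreadConfig=ldm={count=8,cpubind=0,1,2,3,4,5,6,7},tc={count=4,cpubind=8,9,10,11},send={count=3,cpubind=12,13,14},recv={count=3,cpubind=15,16,17},rep={count=1,cpubind=18},main={count=1,cpubind=19},io={count=1,cpubind=19}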

    Read the article

  • Exiting a reboot loop

    - by user12617035
    If you're in a situation where the system is panic'ing during boot, you can use # boot net -s to regain control of your system. In my case, I'd added some diagnostic code to a (PCI) driver (that is used to boot the root filesystem). There was a bug in the driver, and each time during boot, the bug occurred, and so caused the system to panic: ... 000000000180b950 genunix:vfs_mountroot+60 (800, 200, 0, 185d400, 1883000, 18aec00) %l0-3: 0000000000001770 0000000000000640 0000000001814000 00000000000008fc %l4-7: 0000000001833c00 00000000018b1000 0000000000000600 0000000000000200 000000000180ba10 genunix:main+98 (18141a0, 1013800, 18362c0, 18ab800, 180e000, 1814000) %l0-3: 0000000070002000 0000000000000001 000000000180c000 000000000180e000 %l4-7: 0000000000000001 0000000001074800 0000000000000060 0000000000000000 skipping system dump - no dump device configured rebooting... If you're logged in via the console, you can send a BREAK sequence in order to gain control of the firmware's (OBP's) prompt. Enter Ctrl-Shift-[ in order to get the TELNET prompt. Once telnet has control, enter this: telnet> send brk You'll be presented with OBP's prompt: ok You then enter the following in order to boot into single-user mode via the network: ok boot net -s Note that booting from the network under Solaris will implicitly cause the system to be INSTALLED with whatever software had last been configured to be installed. However, we are using boot net -s as a "handle" with which to get at the Solaris prompt. Once at that prompt, we can perform actions as root that will let us back out our buggy driver (ok... MY buggy driver :-)) ...and replace it with the original, non-buggy driver. Entering the boot command caused the following output, as well as left us at the Solaris prompt (in single-user-mode): Sun Blade 1500, No Keyboard Copyright 1998-2004 Sun Microsystems, Inc. All rights reserved. OpenBoot 4.16.4, 1024 MB memory installed, Serial #53463393. Ethernet address 0:3:ba:2f:c9:61, Host ID: 832fc961. Rebooting with command: boot net -s Boot device: /pci@1f,700000/network@2 File and args: -s 1000 Mbps FDX Link up Timeout waiting for ARP/RARP packet Timeout waiting for ARP/RARP packet 4000 1000 Mbps FDX Link up Requesting Internet address for 0:3:ba:2f:c9:61 SunOS Release 5.10 Version Generic_118833-17 64-bit Copyright 1983-2005 Sun Microsystems, Inc. All rights reserved. Use is subject to license terms. Booting to milestone "milestone/single-user:default". Configuring devices. Using RPC Bootparams for network configuration information. Attempting to configure interface bge0... Configured interface bge0 Requesting System Maintenance Mode SINGLE USER MODE # Our goal is to now move to the directory containing the buggy driver and replace it with the original driver (that we had saved away before ever loading our buggy driver! :-) However, since we booted from the network, the root filesystem ("/") is NOT mounted on one of our local disks. It is mounted on an NFS filesystem exported by our install server. To verify this, enter the following command: # mount | head -1 / on my-server:/export/install/media/s10u2/solarisdvd.s10s_u2dvd/latest/Solaris_10/Tools/Boot remote/read/write/setuid/devices/dev=4ac0001 on Wed Dec 31 16:00:00 1969 As a result, we have to create a temporary mount point and then mount the local disk onto that mount point: # mkdir /tmp/mnt # mount /dev/dsk/c0t0d0s0 /tmp/mnt Note that your system will not necessarily have had its root filesystem on "c0t0d0s0". 
This is something that you should also have recorded before you ever loaded your.. er... "my" buggy driver! :-) One can find the local disk mounted under the root filesystem by entering: # df -k / Filesystem kbytes used avail capacity Mounted on /dev/dsk/c0t0d0s0 76703839 4035535 71901266 6% / To continue with our example, we can now move to the directory of buggy-driver in order to replace it with the original driver. Note that /tmp/mnt is prefixed to the path of where we'd "normally" find the driver: # cd /tmp/mnt/platform/sun4u/kernel/drv/sparcv9 # ls -l pci\* -rw-r--r-- 1 root root 288504 Dec 6 15:38 pcisch -rw-r--r-- 1 root root 288504 Dec 6 15:38 pcisch.aar -rwxr-xr-x 1 root sys 211616 Jun 8 2006 pcisch.orig # cp -p pcisch.orig pcisch We can now synchronize any in-memory filesystem data structures with those on disk... and then reboot. The system will then boot correctly... as expected: # sync;sync # reboot syncing file systems... done Sun Blade 1500, No Keyboard Copyright 1998-2004 Sun Microsystems, Inc. All rights reserved. OpenBoot 4.16.4, 1024 MB memory installed, Serial #xxxxxxxx. Ethernet address 0:3:ba:2f:c9:61, Host ID: yyyyyyyy. Rebooting with command: boot Boot device: /pci@1e,600000/ide@d/disk@0,0:a File and args: SunOS Release 5.10 Version Generic_118833-17 64-bit Copyright 1983-2005 Sun Microsystems, Inc. All rights reserved. Use is subject to license terms. Hostname: my-host NIS domain name is my-campus.Central.Sun.COM my-host console login: ...so that's how it's done! Of course, the easier way is to never write a buggy-driver... but.. then.. we all "have an eraser on the end of each of our pencils"... don't we ? :-) "...thank you... and good night..."

    Read the article

  • C++ strongly typed typedef

    - by Kian
    I've been trying to think of a way of declaring strongly typed typedefs, to catch a certain class of bugs in the compilation stage. It's often the case that I'll typedef an int into several types of ids, or a vector to position or velocity:

typedef int EntityID;
typedef int ModelID;
typedef Vector3 Position;
typedef Vector3 Velocity;

This can make the intent of code more clear, but after a long night of coding one might make silly mistakes like comparing different kinds of ids, or adding a position to a velocity perhaps:

EntityID eID;
ModelID mID;
if ( eID == mID ) // <- Compiler sees nothing wrong
{ /*bug*/ }

Position p;
Velocity v;
Position newP = p + v; // bug, meant p + v*s but compiler sees nothing wrong

Unfortunately, suggestions I've found for strongly typed typedefs include using boost, which at least for me isn't a possibility (I do have C++11 at least). So after a bit of thinking, I came upon this idea, and wanted to run it by someone. First, you declare the base type as a template. The template parameter isn't used for anything in the definition, however:

template < typename T >
class IDType
{
    unsigned int m_id;
public:
    IDType( unsigned int const& i_id ): m_id {i_id} {};
    friend bool operator==<T>( IDType<T> const& i_lhs, IDType<T> const& i_rhs );
};

Friend functions actually need to be forward declared before the class definition, which requires a forward declaration of the template class. We then define all the members for the base type, just remembering that it's a template class. Finally, when we want to use it, we typedef it as:

class EntityT;
typedef IDType<EntityT> EntityID;
class ModelT;
typedef IDType<ModelT> ModelID;

The types are now entirely separate. Functions that take an EntityID will throw a compiler error if you try to feed them a ModelID instead, for example. Aside from having to declare the base types as templates, with the issues that entails, it's also fairly compact. I was hoping someone had comments or critiques about this idea. One issue that came to mind while writing this, in the case of positions and velocities for example, is that I can't convert between types as freely as before. Where before, multiplying a vector by a scalar would give another vector, I could do:

typedef float Time;
typedef Vector3 Position;
typedef Vector3 Velocity;

Time t = 1.0f;
Position p = { 0.0f };
Velocity v = { 1.0f, 0.0f, 0.0f };
Position newP = p + v*t;

With my strongly typed typedef I'd have to tell the compiler that multiplying a Velocity by a Time results in a Position:

class TimeT;
typedef Float<TimeT> Time;
class PositionT;
typedef Vector3<PositionT> Position;
class VelocityT;
typedef Vector3<VelocityT> Velocity;

Time t = 1.0f;
Position p = { 0.0f };
Velocity v = { 1.0f, 0.0f, 0.0f };
Position newP = p + v*t; // Compiler error

To solve this, I think I'd have to specialize every conversion explicitly, which can be kind of a bother (see the sketch below). On the other hand, this limitation can help prevent other kinds of errors (say, multiplying a Velocity by a Distance, perhaps, which wouldn't make sense in this domain). So I'm torn, and wondering if people have any opinions on my original issue, or my approach to solving it.
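For what it's worth, the "specialize every conversion explicitly" step can be a single free operator per allowed combination. A compressed sketch under the question's naming, with Float<Tag> and Vector3<Tag> reduced to bare aggregates so it compiles standalone:

    #include <iostream>

    template <typename Tag> struct Float  { float v; };
    template <typename Tag> struct Vector3 {
        float x, y, z;
        Vector3 operator+(Vector3 const& o) const { return {x + o.x, y + o.y, z + o.z}; }
    };

    struct TimeT; struct PositionT; struct VelocityT;
    typedef Float<TimeT>       Time;
    typedef Vector3<PositionT> Position;
    typedef Vector3<VelocityT> Velocity;

    // The one cross-type conversion we bless: Velocity * Time -> Position.
    Position operator*(Velocity const& v, Time t) {
        return {v.x * t.v, v.y * t.v, v.z * t.v};
    }

    int main() {
        Time t{1.0f};
        Position p{0.0f, 0.0f, 0.0f};
        Velocity v{1.0f, 0.0f, 0.0f};
        Position newP = p + v * t;  // ok: the declared operator makes this legal
        // Position bad = p + v;    // error: no operator+(Position, Velocity)
        std::cout << newP.x << '\n';
    }

Anything not spelled out this way (Position + Velocity, or Velocity * Distance) stays a compile-time error, which is the point of the exercise.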

    Read the article

  • What's New in SGD 5.1?

    - by Fat Bloke
    Oracle announced the latest version of Secure Global Desktop (SGD) this week with 3 major themes: support for Android devices; support for desktop Chrome clients; and support for Oracle Unified Directory. I'll talk about the new features in a moment, but a bit of context first: Oracle SGD - what, how and why? Oracle Secure Global Desktop is Oracle's secure remote access product which allows users on almost any device to access almost any type of application which is hosted in the data center, from almost any location. And it does this by sitting on the edge of the datacenter, between the user and the applications. This is actually a really smart environment for an increasing number of use cases where:
- Users need mobility of location AND device (i.e. work from anywhere);
- IT needs to ensure security of applications and data (of course!);
- The application requires an end-user environment which can't be guaranteed, and IT may not own the client platform (e.g. BYOD, working from home, partners or contractors).
Oracle has a specific interest in this, of course. As the leading supplier of enterprise applications, many of Oracle's customers, and indeed Oracle itself, fit these criteria. So, as an IT guy rolling out an application to your employees, if one of your apps absolutely needs, say, IE10 with Java 6 update 32, how can you be sure that the user population has this, especially when they're using their own devices? In the SGD model you, the IT guy, can set up, say, a Windows Server running the exact environment required, and then use SGD to publish this app, without needing to worry any further about the device the end user is using. What's new? So back to SGD 5.1 and what is new there: Android devices Since we introduced our support for iPad tablets in SGD 5.0 we've had a big demand from customers to extend this to Android tablets too, and so we're pleased to announce that 5.1 supports Android 4.x tablets such as the Nexus 7 and 10, and the Galaxy Tab. Here's how it works, with screenshots from my Nexus 7: Simply point your browser to the SGD server URL and log in; the workspace is the list of apps that the admin has deemed OK for you to run. You click on an application to run it (here's Excel and Oracle E-Business Suite). There's an extended on-screen keyboard (extended because desktop apps need keys that don't appear on a tablet keyboard, such as Ctrl, the Windows key, etc.) and touch gestures can be mapped to desktop events (such as tap and hold to right-click). All in all, a pretty nice implementation for Android tablet users. Desktop Chrome browsers SGD has always been designed around using a browser to access your applications. But traditionally, this has involved using Java to deliver the SGD client component. With HTML5 and JavaScript engines becoming so powerful, we thought we'd see how well a pure web client could perform with desktop apps. And the answer was: surprisingly well. So with this release we now offer this additional way of working, which can be enabled by a simple bit of configuration. Here's a Linux desktop running in a tab in Chrome. And if you resize the browser window, the Linux desktop is resized by SGD too. Very cool! Oracle Unified Directory As I mentioned above, a lot of Oracle users already benefit from SGD. And a lot of Oracle customers use Oracle Unified Directory as their enterprise- and carrier-grade user directory.
So it makes a lot of sense that SGD now supports this LDAP directory, both for authentication and as a means to determine which users get which applications, e.g. publish the engineering app to the guys in the Development group, but give everyone E-Business Suite to let them do their expenses. Summary With new devices and faster 4G networking becoming more prevalent, the pressure for businesses to move to an increasingly mobile enterprise is stronger than ever. SGD is good for users, and even better for IT. By offering the user the ability to work from anywhere, and IT the control and security they need, everyone wins with SGD. To try this for yourself, download SGD 5.1 (look under Desktop Virtualization Products) from the Oracle Software Delivery Cloud or, if you're an existing customer, get it from My Oracle Support.  -FB

    Read the article

  • Plagued by multithreaded bugs

    - by koncurrency
    On my new team that I manage, the majority of our code is platform, TCP socket, and HTTP networking code. All C++. Most of it originated from other developers that have since left the team. The current developers on the team are very smart, but mostly junior in terms of experience. Our biggest problem: multi-threaded concurrency bugs. Most of our class libraries are written to be asynchronous through the use of some thread pool classes. Methods on the class libraries often enqueue long-running tasks onto the thread pool from one thread, and then the callback methods of that class get invoked on a different thread. As a result, we have a lot of edge-case bugs involving incorrect threading assumptions. This results in subtle bugs that go beyond just having critical sections and locks to guard against concurrency issues. What makes these problems even harder is that the attempted fixes are often incorrect. Some mistakes I've observed the team attempting (or within the legacy code itself) include the following. Common mistake #1 - Fixing a concurrency issue by just putting a lock around the shared data, but forgetting about what happens when methods don't get called in an expected order. Here's a very simple example:

void Foo::OnHttpRequestComplete(statuscode status)
{
    m_pBar->DoSomethingImportant(status);
}

void Foo::Shutdown()
{
    m_pBar->Cleanup();
    delete m_pBar;
    m_pBar = nullptr;
}

So now we have a bug in which Shutdown could get called while OnHttpRequestComplete is executing. A tester finds the bug, captures the crash dump, and assigns the bug to a developer. He in turn fixes the bug like this:

void Foo::OnHttpRequestComplete(statuscode status)
{
    AutoLock lock(m_cs);
    m_pBar->DoSomethingImportant(status);
}

void Foo::Shutdown()
{
    AutoLock lock(m_cs);
    m_pBar->Cleanup();
    delete m_pBar;
    m_pBar = nullptr;
}

The above fix looks good until you realize there's an even more subtle edge case: what happens if Shutdown gets called before OnHttpRequestComplete gets called back? The real-world examples my team has are even more complex, and the edge cases are even harder to spot during the code review process. Common mistake #2 - Fixing deadlock issues by blindly exiting the lock, waiting for the other thread to finish, then re-entering the lock - but without handling the case where the object just got updated by the other thread! Common mistake #3 - Even though the objects are reference counted, the shutdown sequence "releases" its pointer, but forgets to wait for the thread that is still running to release its instance. As such, components are shut down cleanly, then spurious or late callbacks are invoked on an object in a state not expecting any more calls. There are other edge cases, but the bottom line is this: multithreaded programming is just plain hard, even for smart people. As I catch these mistakes, I spend time discussing the errors with each developer, working out a more appropriate fix. But I suspect they are often confused about how to solve each issue because of the enormous amount of legacy code that the "right" fix will involve touching. We're going to be shipping soon, and I'm sure the patches we're applying will hold for the upcoming release. Afterwards, we're going to have some time to improve the code base and refactor where needed. We won't have time to just re-write everything, and the majority of the code isn't all that bad. But I'm looking to refactor code such that threading issues can be avoided altogether. One approach I am considering is this.
For each significant platform feature, have a dedicated single thread onto which all events and network callbacks get marshalled - similar to COM apartment threading in Windows, with the use of a message loop (see the sketch below). Long blocking operations could still get dispatched to a work pool thread, but the completion callback is invoked on the component's thread. Components could possibly even share the same thread. Then all the class libraries running inside the thread can be written under the assumption of a single-threaded world. Before I go down that path, I am also very interested in whether there are other standard techniques or design patterns for dealing with multithreaded issues. And I have to emphasize - something beyond a book that describes the basics of mutexes and semaphores. What do you think? I am also interested in any other approaches to take towards a refactoring process, including any of the following:
- Literature or papers on design patterns around threads. Something beyond an introduction to mutexes and semaphores. We don't need massive parallelism either, just ways to design an object model so as to handle asynchronous events from other threads correctly.
- Ways to diagram the threading of various components, so that it will be easy to study and evolve solutions for. (That is, a UML equivalent for discussing threads across objects and classes.)
- Educating your development team on the issues with multithreaded code.
What would you do?
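For illustration, here is a minimal sketch of that dedicated-thread idea: a serial dispatcher (a hypothetical class, not from the codebase in question) that marshals work onto one thread so the component code can assume a single-threaded world. A production version would need shutdown draining policies, exception handling, and a way to post completions back to other components.

    #include <condition_variable>
    #include <functional>
    #include <mutex>
    #include <queue>
    #include <thread>

    class SerialDispatcher {
    public:
        SerialDispatcher() : m_done(false), m_thread([this] { Run(); }) {}
        ~SerialDispatcher() {
            { std::lock_guard<std::mutex> lock(m_mutex); m_done = true; }
            m_cv.notify_one();
            m_thread.join();
        }
        // Callable from any thread; work runs on the dispatcher thread, in order.
        void Post(std::function<void()> work) {
            { std::lock_guard<std::mutex> lock(m_mutex); m_queue.push(std::move(work)); }
            m_cv.notify_one();
        }
    private:
        void Run() {
            for (;;) {
                std::function<void()> work;
                {
                    std::unique_lock<std::mutex> lock(m_mutex);
                    m_cv.wait(lock, [this] { return m_done || !m_queue.empty(); });
                    if (m_done && m_queue.empty()) return;
                    work = std::move(m_queue.front());
                    m_queue.pop();
                }
                work();  // executed outside the lock, one task at a time
            }
        }
        std::mutex m_mutex;
        std::condition_variable m_cv;
        std::queue<std::function<void()>> m_queue;
        bool m_done;
        std::thread m_thread;
    };

In the earlier example, the thread pool would Post the OnHttpRequestComplete callback to Foo's dispatcher rather than invoking it directly, so it can never race with Shutdown running on the same dispatcher.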

    Read the article

  • Error "exit signal Bus error (7)" How to continue after making a backtrace?

    - by Mikel
    I have a Centos Server in 1and1 with Apache, Magento, MagentoBooster and Xcache installed. The server usually (1-8 times per day) prints this error "exit signal Bus error (7)" and sometimes this causes Apache not to respond. I have made a backtrace with GDB, but I don't know how to continue. gdb /usr/sbin/httpd core.XXXX --batch --quiet -ex "thread apply all bt full" backtrace.log The backtrace: [New Thread 15312] [Thread debugging using libthread_db enabled] Core was generated by `/usr/sbin/httpd'. Program terminated with signal 7, Bus error. #0 0x00002abcf6c7324e in memcpy () from /lib64/libc.so.6 Thread 1 (Thread 0x2abcf8c72300 (LWP 15312)): #0 0x00002abcf6c7324e in memcpy () from /lib64/libc.so.6 No symbol table info available. #1 0x00002abd02e6b9c7 in ?? () from /usr/lib64/php/modules//php_ioncube_loader_lin_5.2_x86_64.so No symbol table info available. #2 0x00002abd02ed4d47 in _zval_dup () from /usr/lib64/php/modules//php_ioncube_loader_lin_5.2_x86_64.so No symbol table info available. #3 0x00002abd02ecdffb in ?? () from /usr/lib64/php/modules//php_ioncube_loader_lin_5.2_x86_64.so No symbol table info available. #4 0x00002abd02c32636 in xc_compile_file (h=0x7fffc3e7e4f0, type=2) at /opt/xcache-1.3.2-rc1/xcache.c:1060 __orig_bailout = 0x7fffc3e88f10 __bailout = {{__jmpbuf = {46991244125792, 3379122525071325456, 46991369192208, 140736480142576, 140736480142656, 46991244125792, 3379207471940512272, 3379122524988693332}, __mask_was_saved = 0, __saved_mask = {__val = {46991369228841, 46991369206800, 46991369195208, 46991382361728, 46991369196536, 46991369206984, 46991369210744, 0, 46991240130544, 140733193388033, 0, 140736480142296, 46991240361284, 46991369207528, 46991369232128, 3}}}} sandbox = {alloc = 0, filename = 0x2abd078dd0e0 "/var/www/vhosts/DOMAIN/httpdocs/var/ait_rewrite/67b58abff9e6bd7b400bb2fc1903bf2f.php", orig_included_files = {nTableSize = 256, nTableMask = 255, nNumOfElements = 191, nNextFreeElement = 0, pInternalPointer = 0x2abcf4da53d0, pListHead = 0x2abcf4da53d0, pListTail = 0x2abd07dfeb28, arBuckets = 0x2abd0896d690, pDestructor = 0, persistent = 0 '\000', nApplyCount = 0 '\000', bApplyProtection = 1 '\001'}, tmp_included_files = 0x2abd0069e830, orig_zend_constants = 0x2abd0d630b60, tmp_zend_constants = {nTableSize = 2048, nTableMask = 2047, nNumOfElements = 1559, nNextFreeElement = 0, pInternalPointer = 0x2abd08283760, pListHead = 0x2abd08283760, pListTail = 0x2abd08320810, arBuckets = 0x2abd08302aa0, pDestructor = 0x2abd02c34850 <xc_free_zend_constant>, persistent = 1 '\001', nApplyCount = 0 '\000', bApplyProtection = 1 '\001'}, orig_function_table = 0x2abd0d61f340, orig_class_table = 0x2abd0d61f2b0, orig_auto_globals = 0x2abd0d618910, tmp_function_table = {nTableSize = 2048, nTableMask = 2047, nNumOfElements = 1555, nNextFreeElement = 0, pInternalPointer = 0x2abd08320ce0, pListHead = 0x2abd08320ce0, pListTail = 0x2abd0933fe60, arBuckets = 0x2abd079c8500, pDestructor = 0x2abd0033e1d0 <zend_function_dtor>, persistent = 1 '\001', nApplyCount = 0 '\000', bApplyProtection = 0 '\000'}, tmp_class_table = { nTableSize = 16, nTableMask = 15, nNumOfElements = 0, nNextFreeElement = 0, pInternalPointer = 0x0, pListHead = 0x0, pListTail = 0x0, arBuckets = 0x2abd079dbf60, pDestructor = 0x2abd0033dcf0 <destroy_zend_class>, persistent = 1 '\001', nApplyCount = 0 '\000', bApplyProtection = 0 '\000'}, tmp_auto_globals = {nTableSize = 16, nTableMask = 15, nNumOfElements = 9, nNextFreeElement = 0, pInternalPointer = 0x2abd0933ffc0, pListHead = 0x2abd0933ffc0, pListTail = 
0x2abd093403e0, arBuckets = 0x2abd09340470, pDestructor = 0, persistent = 1 '\001', nApplyCount = 0 '\000', bApplyProtection = 0 '\000'}, tmp_internal_constant_tail = 0x2abd08320810, tmp_internal_function_tail = 0x2abd0933fe60, tmp_internal_class_tail = 0x0, orig_user_error_handler_error_reporting = 8191} op_array = <value optimized out> xce = {type = XC_TYPE_PHP, hvalue = 2460, next = 0x2abd0d939e60, cache = 0x2abd0d90b038, size = 10, refcount = 46991369191320, hits = 4, ctime = 46991362335072, atime = 8, dtime = 46991240673248, ttl = 46991369192096, name = {lval = 46991363920096, dval = 2.3216818564143281e-310, str = { val = 0x2abd078dd0e0 "/var/www/vhosts/DOMAIN/httpdocs/var/ait_rewrite/67b58abff9e6bd7b400bb2fc1903bf2f.php", len = 107}, ht = 0x2abd078dd0e0, obj = {handle = 126734560, handlers = 0x2abd0000006b}}, data = {php = 0x7fffc3e7e440, var = 0x7fffc3e7e440}, have_references = 0 '\000'} stored_xce = 0x0 php = {sourcesize = 8947, device = 64769, inode = 9907963, mtime = 1353055102, op_array = 0x2abd00344004, constinfo_cnt = 1, constinfos = 0x0, funcinfo_cnt = 132120232, funcinfos = 0x8, classinfo_cnt = 8, classinfos = 0x0, have_early_binding = 168 '\250', autoglobal_cnt = 10941, autoglobals = 0x8} cache = 0x2abd0d90b038 catched = <value optimized out> filename = <value optimized out> opened_path_buffer = "\000\000\000\000\000\000\000\000I\032\065\000\275*\000\000\300\347i\000\275*\000\000\001\000\000\000\000\000\000\000\377\377\377\377\000\000\000\000\000\000\000\000\003\000\000\000[\337\227\337,\002pr\n\000\000\000\000\000\000\000P\323\347\303\377\177\000\000\357\367\220\b\275*\000\000\243\002M\a\275*\000\000\005", '\000' <repeats 15 times>"\357, \367\220\b\275*\000\000\244\002M\a\275*\000\000Du0\000\275*\000\000\b\000\000\000\000\000\000\000\232b=\365\000\000\000\000\002\000\000\000\377\177\000\000\005\000\000\000\275*\000\000\220\322\347\303\377\177\000\000\000\020\000\000\000\000\000\000,\324\347\303\377\177\000\000`\321\347\303^", '\000' <repeats 27 times>, "\f\000\000 \001", '\000' <repeats 11 times>"\260, \322\347\303", '\000' <repeats 12 times>, "P\323\347\303\377\177\000\000x\225\332\364\004\000\000\000\001\000\000\000\031\000\000\000\300\331\336\a\275*\000\000\300\331\336\a\275*\000\000"... old_constinfo_cnt = 1559 old_funcinfo_cnt = 1555 old_classinfo_cnt = 0 #5 0x00002abd003290bf in compile_filename () from /etc/httpd/modules/libphp5.so No symbol table info available. #6 0x00002abd00398ded in ?? () from /etc/httpd/modules/libphp5.so No symbol table info available. #7 0x00002abd0036628c in execute () from /etc/httpd/modules/libphp5.so No symbol table info available. #8 0x00002abd0033b796 in zend_call_function () from /etc/httpd/modules/libphp5.so No symbol table info available. #9 0x00002abd0035b1e1 in zend_call_method () from /etc/httpd/modules/libphp5.so No symbol table info available. #10 0x00002abd00273bf4 in zif_spl_autoload_call () from /etc/httpd/modules/libphp5.so No symbol table info available. #11 0x00002abd0033b945 in zend_call_function () from /etc/httpd/modules/libphp5.so No symbol table info available. #12 0x00002abd0033c51e in zend_lookup_class_ex () from /etc/httpd/modules/libphp5.so No symbol table info available. #13 0x00002abd0033c728 in zend_fetch_class () from /etc/httpd/modules/libphp5.so No symbol table info available. #14 0x00002abd003a61ab in ?? () from /etc/httpd/modules/libphp5.so No symbol table info available. #15 0x00002abd0036628c in execute () from /etc/httpd/modules/libphp5.so No symbol table info available. #16 0x00002abd00366b91 in ?? 
() from /etc/httpd/modules/libphp5.so No symbol table info available. #17 0x00002abd0036628c in execute () from /etc/httpd/modules/libphp5.so No symbol table info available. #18 0x00002abd00366b91 in ?? () from /etc/httpd/modules/libphp5.so No symbol table info available. #19 0x00002abd0036628c in execute () from /etc/httpd/modules/libphp5.so No symbol table info available. #20 0x00002abd00366b91 in ?? () from /etc/httpd/modules/libphp5.so No symbol table info available. #21 0x00002abd0036628c in execute () from /etc/httpd/modules/libphp5.so No symbol table info available. #22 0x00002abd00366b91 in ?? () from /etc/httpd/modules/libphp5.so No symbol table info available. #23 0x00002abd0036628c in execute () from /etc/httpd/modules/libphp5.so No symbol table info available. #24 0x00002abd00366b91 in ?? () from /etc/httpd/modules/libphp5.so No symbol table info available. #25 0x00002abd0036628c in execute () from /etc/httpd/modules/libphp5.so No symbol table info available. #26 0x00002abd00366b91 in ?? () from /etc/httpd/modules/libphp5.so No symbol table info available. #27 0x00002abd0036628c in execute () from /etc/httpd/modules/libphp5.so No symbol table info available. #28 0x00002abd00366b91 in ?? () from /etc/httpd/modules/libphp5.so No symbol table info available. #29 0x00002abd0036628c in execute () from /etc/httpd/modules/libphp5.so No symbol table info available. #30 0x00002abd00366b91 in ?? () from /etc/httpd/modules/libphp5.so No symbol table info available. #31 0x00002abd0036628c in execute () from /etc/httpd/modules/libphp5.so No symbol table info available. #32 0x00002abd00366b91 in ?? () from /etc/httpd/modules/libphp5.so No symbol table info available. #33 0x00002abd0036628c in execute () from /etc/httpd/modules/libphp5.so No symbol table info available. #34 0x00002abd00366b91 in ?? () from /etc/httpd/modules/libphp5.so No symbol table info available. #35 0x00002abd0036628c in execute () from /etc/httpd/modules/libphp5.so No symbol table info available. #36 0x00002abd00366b91 in ?? () from /etc/httpd/modules/libphp5.so No symbol table info available. #37 0x00002abd0036628c in execute () from /etc/httpd/modules/libphp5.so No symbol table info available. #38 0x00002abd00346943 in zend_execute_scripts () from /etc/httpd/modules/libphp5.so No symbol table info available. #39 0x00002abd00306898 in php_execute_script () from /etc/httpd/modules/libphp5.so No symbol table info available. #40 0x00002abd003cb09d in ?? () from /etc/httpd/modules/libphp5.so No symbol table info available. #41 0x00002abcf4cfca0a in ap_run_handler () No symbol table info available. #42 0x00002abcf4cffe98 in ap_invoke_handler () No symbol table info available. #43 0x00002abcf4d0a74a in ap_internal_redirect () No symbol table info available. #44 0x00002abcfdb45bf0 in ap_make_dirstr_parent () from /etc/httpd/modules/mod_rewrite.so No symbol table info available. #45 0x00002abcf4cfca0a in ap_run_handler () No symbol table info available. #46 0x00002abcf4cffe98 in ap_invoke_handler () No symbol table info available. #47 0x00002abcf4d0a8f8 in ap_process_request () No symbol table info available. #48 0x00002abcf4d07b30 in ?? () No symbol table info available. #49 0x00002abcf4d03c92 in ap_run_process_connection () No symbol table info available. #50 0x00002abcf4d0e7a9 in ?? () No symbol table info available. #51 0x00002abcf4d0ea3a in ?? () No symbol table info available. #52 0x00002abcf4d0f29d in ap_mpm_run () No symbol table info available. #53 0x00002abcf4ce9e48 in main () No symbol table info available. 
Can anyone help me? ADITIONAL INFO php -v PHP 5.2.10 (cli) (built: Nov 13 2009 11:44:05) Copyright (c) 1997-2009 The PHP Group Zend Engine v2.2.0, Copyright (c) 1998-2009 Zend Technologies with XCache v1.3.2-rc1, Copyright (c) 2005-2011, by mOo with the ionCube PHP Loader v3.1.28, Copyright (c) 2002-2007, by ionCube Ltd. httpd -v Server version: Apache/2.2.3 Server built: May 4 2011 06:51:15 Apache modules: core prefork http_core mod_so mod_auth_basic mod_auth_digest mod_authn_file mod_authn_alias mod_authn_anon mod_authn_dbm mod_authn_default mod_authz_host mod_authz_user mod_authz_owner mod_authz_groupfile mod_authz_dbm mod_authz_default util_ldap mod_authnz_ldap mod_include mod_log_config mod_logio mod_env mod_ext_filter mod_mime_magic mod_expires mod_deflate mod_headers mod_usertrack mod_setenvif mod_mime mod_dav mod_status mod_autoindex mod_info mod_dav_fs mod_vhost_alias mod_negotiation mod_dir mod_actions mod_speling mod_userdir mod_alias mod_rewrite mod_proxy mod_proxy_balancer mod_proxy_ftp mod_proxy_http mod_proxy_connect mod_cache mod_suexec mod_disk_cache mod_file_cache mod_mem_cache mod_cgi mod_version mod_fcgid mod_perl mod_php5 mod_proxy_ajp mod_python mod_ssl Aditional modules: dbase ionCube Loader sysvsem sysvshm EDIT (November 18) I have disabled some suspicious modules and the error persist. The new backtrace: [New Thread 12403] [Thread debugging using libthread_db enabled] Core was generated by `/usr/sbin/httpd'. Program terminated with signal 7, Bus error. #0 0x00002b0c5754a24e in memcpy () from /lib64/libc.so.6 Thread 1 (Thread 0x2b0c59549300 (LWP 12403)): #0 0x00002b0c5754a24e in memcpy () from /lib64/libc.so.6 No symbol table info available. #1 0x00002b0c558519c7 in ?? () from /usr/lib64/php/modules//php_ioncube_loader_lin_5.2_x86_64.so No symbol table info available. #2 0x00002b0c558bad47 in _zval_dup () from /usr/lib64/php/modules//php_ioncube_loader_lin_5.2_x86_64.so No symbol table info available. #3 0x00002b0c558b3ffb in ?? () from /usr/lib64/php/modules//php_ioncube_loader_lin_5.2_x86_64.so No symbol table info available. #4 0x00002b0c60d650bf in compile_filename () from /etc/httpd/modules/libphp5.so No symbol table info available. #5 0x00002b0c60dd4ded in ?? () from /etc/httpd/modules/libphp5.so No symbol table info available. #6 0x00002b0c60da228c in execute () from /etc/httpd/modules/libphp5.so No symbol table info available. #7 0x00002b0c60d77796 in zend_call_function () from /etc/httpd/modules/libphp5.so No symbol table info available. #8 0x00002b0c60d971e1 in zend_call_method () from /etc/httpd/modules/libphp5.so No symbol table info available. #9 0x00002b0c60cafbf4 in zif_spl_autoload_call () from /etc/httpd/modules/libphp5.so No symbol table info available. #10 0x00002b0c60d77945 in zend_call_function () from /etc/httpd/modules/libphp5.so No symbol table info available. #11 0x00002b0c60d7851e in zend_lookup_class_ex () from /etc/httpd/modules/libphp5.so No symbol table info available. #12 0x00002b0c60d78728 in zend_fetch_class () from /etc/httpd/modules/libphp5.so No symbol table info available. #13 0x00002b0c60de21ab in ?? () from /etc/httpd/modules/libphp5.so No symbol table info available. #14 0x00002b0c60da228c in execute () from /etc/httpd/modules/libphp5.so No symbol table info available. #15 0x00002b0c60da2b91 in ?? () from /etc/httpd/modules/libphp5.so No symbol table info available. #16 0x00002b0c60da228c in execute () from /etc/httpd/modules/libphp5.so No symbol table info available. #17 0x00002b0c60da2b91 in ?? 
() from /etc/httpd/modules/libphp5.so No symbol table info available. #18 0x00002b0c60da228c in execute () from /etc/httpd/modules/libphp5.so No symbol table info available. #19 0x00002b0c60da2b91 in ?? () from /etc/httpd/modules/libphp5.so No symbol table info available. #20 0x00002b0c60da228c in execute () from /etc/httpd/modules/libphp5.so No symbol table info available. #21 0x00002b0c60da2b91 in ?? () from /etc/httpd/modules/libphp5.so No symbol table info available. #22 0x00002b0c60da228c in execute () from /etc/httpd/modules/libphp5.so No symbol table info available. #23 0x00002b0c60da2b91 in ?? () from /etc/httpd/modules/libphp5.so No symbol table info available. #24 0x00002b0c60da228c in execute () from /etc/httpd/modules/libphp5.so No symbol table info available. #25 0x00002b0c60da2b91 in ?? () from /etc/httpd/modules/libphp5.so No symbol table info available. #26 0x00002b0c60da228c in execute () from /etc/httpd/modules/libphp5.so No symbol table info available. #27 0x00002b0c60da2b91 in ?? () from /etc/httpd/modules/libphp5.so No symbol table info available. #28 0x00002b0c60da228c in execute () from /etc/httpd/modules/libphp5.so No symbol table info available. #29 0x00002b0c60da2b91 in ?? () from /etc/httpd/modules/libphp5.so No symbol table info available. #30 0x00002b0c60da228c in execute () from /etc/httpd/modules/libphp5.so No symbol table info available. #31 0x00002b0c60da2b91 in ?? () from /etc/httpd/modules/libphp5.so No symbol table info available. #32 0x00002b0c60da228c in execute () from /etc/httpd/modules/libphp5.so No symbol table info available. #33 0x00002b0c60da2b91 in ?? () from /etc/httpd/modules/libphp5.so No symbol table info available. #34 0x00002b0c60da228c in execute () from /etc/httpd/modules/libphp5.so No symbol table info available. #35 0x00002b0c60da2b91 in ?? () from /etc/httpd/modules/libphp5.so No symbol table info available. #36 0x00002b0c60da228c in execute () from /etc/httpd/modules/libphp5.so No symbol table info available. #37 0x00002b0c60da2b91 in ?? () from /etc/httpd/modules/libphp5.so No symbol table info available. #38 0x00002b0c60da228c in execute () from /etc/httpd/modules/libphp5.so No symbol table info available. #39 0x00002b0c60d82943 in zend_execute_scripts () from /etc/httpd/modules/libphp5.so No symbol table info available. #40 0x00002b0c60d42898 in php_execute_script () from /etc/httpd/modules/libphp5.so No symbol table info available. #41 0x00002b0c60e0709d in ?? () from /etc/httpd/modules/libphp5.so No symbol table info available. #42 0x00002b0c555d3a0a in ap_run_handler () No symbol table info available. #43 0x00002b0c555d6e98 in ap_invoke_handler () No symbol table info available. #44 0x00002b0c555e174a in ap_internal_redirect () No symbol table info available. #45 0x00002b0c5e41cbf0 in ap_make_dirstr_parent () from /etc/httpd/modules/mod_rewrite.so No symbol table info available. #46 0x00002b0c555d3a0a in ap_run_handler () No symbol table info available. #47 0x00002b0c555d6e98 in ap_invoke_handler () No symbol table info available. #48 0x00002b0c555e18f8 in ap_process_request () No symbol table info available. #49 0x00002b0c555deb30 in ?? () No symbol table info available. #50 0x00002b0c555dac92 in ap_run_process_connection () No symbol table info available. #51 0x00002b0c555e57a9 in ?? () No symbol table info available. #52 0x00002b0c555e5a3a in ?? () No symbol table info available. #53 0x00002b0c555e629d in ap_mpm_run () No symbol table info available. #54 0x00002b0c555c0e48 in main () No symbol table info available.

    Read the article

  • Segfaulting Java process

    - by zenmonkey
    I have a Java process that works on a large data set in memory. I've seen it crash with a SIGSEGV signal sometimes, so I was wondering what some potential causes and fixes could be.

    Causes:
    - JVM bug
    - Native library bug (e.g. pthreads, etc.)
    - JNI bug in user code

    Fixes:
    - Upgrade to a newer JVM

    In my particular case, this is the output from the log file (pruned):

# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00002aaaaacd1b94, pid=32116, tid=1086544208
#
# JRE version: 6.0_14-b08
# Java VM: Java HotSpot(TM) 64-Bit Server VM (14.0-b16 mixed mode linux-amd64)
# Problematic frame:
# C [libpthread.so.0+0xab94] pthread_cond_timedwait+0x154
#
# If you would like to submit a bug report, please visit:
# http://java.sun.com/webapps/bugreport/crash.jsp

--------------- T H R E A D ---------------

Current thread (0x00002aacaad41000): WatcherThread [stack: 0x0000000040b35000,0x0000000040c36000] [id=32141]

siginfo: si_signo=SIGSEGV, si_errno=0, si_code=1 (SEGV_MAPERR), si_addr=0x00002aabc40008c0

Registers:
RAX=0x0000000000000000, RBX=0x0000000000000000, RCX=0x0000000000000000, RDX=0x0000000000000002
RSP=0x0000000040c34cc0, RBP=0x0000000040c34d80, RSI=0x0000000000000001, RDI=0x00002aabc40008c0
R8 =0x00002aacaad42528, R9 =0x0000000000000000, R10=0x0000000040c34cd8, R11=0x0000000000000202
R12=0x0000000000000001, R13=0x0000000040c34d40, R14=0xffffffffffffff92, R15=0x00002aacaad42550
RIP=0x00002aaaaacd1b94, EFL=0x0000000000010246, CSGSFS=0x000000000000e033, ERR=0x0000000000000006
TRAPNO=0x000000000000000e

[top-of-stack and instruction byte dumps pruned]

Stack: [0x0000000040b35000,0x0000000040c36000], sp=0x0000000040c34cc0, free space=1023k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
C [libpthread.so.0+0xab94] pthread_cond_timedwait+0x154
V [libjvm.so+0x594a6d]
V [libjvm.so+0x67a9fb]
V [libjvm.so+0x596f9f]

--------------- P R O C E S S ---------------

Java Threads: ( => current thread )
  0x00002aacaad3f000 JavaThread "Low Memory Detector" daemon [_thread_blocked, id=32140, stack(0x0000000040a34000,0x0000000040b35000)]
  0x00002aacaad3c000 JavaThread "CompilerThread1" daemon [_thread_blocked, id=32139, stack(0x0000000040933000,0x0000000040a34000)]
  0x00002aacaad37800 JavaThread "CompilerThread0" daemon [_thread_blocked, id=32138, stack(0x0000000040832000,0x0000000040933000)]
  0x00002aacaad36800 JavaThread "Signal Dispatcher" daemon [_thread_blocked, id=32137, stack(0x0000000040731000,0x0000000040832000)]
  0x00002aacaab7d800 JavaThread "Finalizer" daemon [_thread_blocked, id=32136, stack(0x0000000040630000,0x0000000040731000)]
  0x00002aacaab7b800 JavaThread "Reference Handler" daemon [_thread_blocked, id=32135, stack(0x000000004052f000,0x0000000040630000)]
  0x0000000040115800 JavaThread "main" [_thread_blocked, id=32117, stack(0x000000004012b000,0x000000004022c000)]

Other Threads:
  0x00002aacaab75000 VMThread [stack: 0x000000004042e000,0x000000004052f000] [id=32134]
=>0x00002aacaad41000 WatcherThread [stack: 0x0000000040b35000,0x0000000040c36000] [id=32141]

VM state: at safepoint (normal execution)

VM Mutex/Monitor currently owned by a thread: ([mutex/lock_event])
[0x0000000040112e80] Threads_lock - owner thread: 0x00002aacaab75000
[0x0000000040113380] Heap_lock - owner thread: 0x0000000040115800

Heap
 PSYoungGen total 1854528K, used 1029248K [0x00002aac025a0000, 0x00002aaca8340000, 0x00002aaca9040000)
  eden space 1029248K, 100% used [0x00002aac025a0000,0x00002aac412c0000,0x00002aac412c0000)
  from space 825280K, 0% used [0x00002aac412c0000,0x00002aac412c0000,0x00002aac738b0000)
  to space 812800K, 0% used [0x00002aac76980000,0x00002aac76980000,0x00002aaca8340000)
 PSOldGen total 4423680K, used 4423651K [0x00002aaab5040000, 0x00002aabc3040000, 0x00002aac025a0000)
  object space 4423680K, 99% used [0x00002aaab5040000,0x00002aabc3038fe8,0x00002aabc3040000)
 PSPermGen total 21248K, used 5848K [0x00002aaaafc40000, 0x00002aaab1100000, 0x00002aaab5040000)
  object space 21248K, 27% used [0x00002aaaafc40000,0x00002aaab01f61f0,0x00002aaab1100000)

[dynamic library / memory map listing pruned]

VM Arguments:
jvm_args: -Xmx8000M
java_command: com.scorers.ModelImplementingPageScorer -t data/data/golds/adult.all.json -b 18 -s data/models/pagetext.binary.adult.april6.all.model -m com.models.MultiClassUpdateableModel -p 30 --goldsilver -v --cat adult --fakeinput -e /mnt/tmp/xyz.15647.pageobjects.txt -o
Launcher Type: SUN_STANDARD

Environment Variables:
JAVA_HOME=/usr/java/jdk1.6.0_14
PATH=/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/home/jatten/bin
LD_LIBRARY_PATH=/usr/java/jdk1.6.0_14/jre/lib/amd64/server:/usr/java/jdk1.6.0_14/jre/lib/amd64:/usr/java/jdk1.6.0_14/jre/../lib/amd64
SHELL=/bin/bash

[signal handler table pruned]

--------------- S Y S T E M ---------------

OS: Fedora release 8 (Werewolf)
uname: Linux 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:34:28 EST 2008 x86_64
libc: glibc 2.7 NPTL 2.7
rlimit: STACK 10240k, CORE 0k, NPROC 61504, NOFILE 1024, AS infinity
load average: 2.83 2.73 2.78
CPU: total 2 (4 cores per cpu, 1 threads per core) family 6 model 23 stepping 10, cmov, cx8, fxsr, mmx, sse, sse2, sse3, ssse3, sse4.1
Memory: 4k page, physical 7872040k (14540k free), swap 0k (0k free)
vm_info: Java HotSpot(TM) 64-Bit Server VM (14.0-b16) for linux-amd64 JRE (1.6.0_14-b08), built on May 21 2009 01:11:11 by "java_re" with gcc 3.2.2 (SuSE Linux)
[error occurred during error reporting (printing date and time), id 0xb]
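
    For context on how close to its limits the process runs: the heap section shows eden 100% used and the old generation 99% used, with -Xmx8000M on a machine reporting 7872040k physical memory and 0k swap. Below is a tiny sketch of the kind of in-process heap logging I could add to correlate future crashes with memory pressure (the class name and interval are mine, not part of the real job):

import java.util.concurrent.TimeUnit;

// Periodically prints heap usage so a native crash can be correlated
// with memory pressure in the output that precedes it.
public final class HeapHeadroomLogger {
    public static void main(String[] args) throws InterruptedException {
        Runtime rt = Runtime.getRuntime();
        while (true) {
            long max = rt.maxMemory();                      // the -Xmx ceiling
            long used = rt.totalMemory() - rt.freeMemory(); // heap currently in use
            System.out.printf("heap used %d MB of %d MB (%.0f%%)%n",
                    used >> 20, max >> 20, 100.0 * used / max);
            TimeUnit.SECONDS.sleep(5);
        }
    }
}

    Run on a daemon thread inside the job, something like this makes it easy to see whether the crashes cluster when the old generation fills up.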

    Read the article

  • How can I resolve Hibernate 3's ConstraintViolationException when updating a Persistent Entity's Col

    - by Tim Visher
    I'm trying to discover why two nearly identical class sets are behaving differently from Hibernate 3's perspective. I'm fairly new to Hibernate in general, and I'm hoping I'm missing something fairly obvious about the mappings or timing issues or something along those lines, but I spent the whole day yesterday staring at the two sets, and any differences that would lead to one being persistable and the other not completely escaped me. I apologize in advance for the length of this question, but it all hinges on some pretty specific implementation details. I have the following class mapped with Annotations and managed by Hibernate 3.? (if the specific version turns out to be pertinent, I'll figure out what it is). The Java version is 1.6.

...
@Embeddable
public class JobStateChange implements Comparable<JobStateChange> {

    @Temporal(TemporalType.TIMESTAMP)
    @Column(nullable = false)
    private Date date;

    @Enumerated(EnumType.STRING)
    @Column(nullable = false, length = JobState.FIELD_LENGTH)
    private JobState state;

    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "acting_user_id", nullable = false)
    private User actingUser;

    public JobStateChange() {
    }

    @Override
    public int compareTo(final JobStateChange o) {
        return this.date.compareTo(o.date);
    }

    @Override
    public boolean equals(final Object obj) {
        if (this == obj) {
            return true;
        } else if (!(obj instanceof JobStateChange)) {
            return false;
        }
        JobStateChange candidate = (JobStateChange) obj;
        return this.state == candidate.state
                && this.actingUser.equals(candidate.getUser())
                && this.date.equals(candidate.getDate());
    }

    @Override
    public int hashCode() {
        return this.state.hashCode() + this.actingUser.hashCode() + this.date.hashCode();
    }
}

    It is mapped as a Hibernate CollectionOfElements in the class Job as follows:

...
@Entity
@Table(
    name = "job",
    uniqueConstraints = {
        @UniqueConstraint(
            columnNames = {
                "agency",                 // Job Name
                "payment_type",           // Job Name
                "payment_file",           // Job Name
                "date_of_payment",
                "payment_control_number",
                "truck_number" })
    })
public class Job implements Serializable {

    private static final long serialVersionUID = -1131729422634638834L;
    ...

    @org.hibernate.annotations.CollectionOfElements
    @JoinTable(name = "job_state", joinColumns = @JoinColumn(name = "job_id"))
    @Sort(type = SortType.NATURAL)
    private final SortedSet<JobStateChange> stateChanges = new TreeSet<JobStateChange>();
    ...

    public void advanceState(
            final User actor,
            final Date date) {
        JobState nextState;
        LOGGER.debug("Current state of {} is {}.", this, this.getCurrentState());
        if (null == this.currentState) {
            nextState = JobState.BEGINNING;
        } else {
            if (!this.isAdvanceable()) {
                throw new IllegalAdvancementException(this.currentState.illegalAdvancementStateMessage);
            }
            if (this.currentState.isDivergent()) {
                nextState = this.currentState.getNextState(this);
            } else {
                nextState = this.currentState.getNextState();
            }
        }
        JobStateChange stateChange = new JobStateChange(nextState, actor, date);
        this.setCurrentState(stateChange.getState());
        this.stateChanges.add(stateChange);
        LOGGER.debug("Advanced {} to {}", this, this.getCurrentState());
    }

    private void setCurrentState(final JobState jobState) {
        this.currentState = jobState;
    }

    boolean isAdvanceable() {
        return this.getCurrentState().isAdvanceable(this);
    }
    ...

    @Override
    public boolean equals(final Object obj) {
        if (obj == this) {
            return true;
        } else if (!(obj instanceof Job)) {
            return false;
        }
        Job otherJob = (Job) obj;
        return this.getName().equals(otherJob.getName())
                && this.getDateOfPayment().equals(otherJob.getDateOfPayment())
                && this.getPaymentControlNumber().equals(otherJob.getPaymentControlNumber())
                && this.getTruckNumber().equals(otherJob.getTruckNumber());
    }

    @Override
    public int hashCode() {
        return this.getName().hashCode()
                + this.getDateOfPayment().hashCode()
                + this.getPaymentControlNumber().hashCode()
                + this.getTruckNumber().hashCode();
    }
    ...
}

    The purpose of JobStateChange is to record when the Job moves through a series of state changes that are outlined in JobState, an enum whose values know about advancement and decrement rules. The interface used to advance Jobs through a series of states is to call Job.advanceState() with a Date and a User. If the Job is advanceable according to rules coded in the enum, then a new StateChange is added to the SortedSet and everyone's happy. If not, an IllegalAdvancementException is thrown. The DDL this generates is as follows:

...
drop table job;
drop table job_state;
...
create table job (
    id bigint generated by default as identity,
    current_state varchar(25),
    date_of_payment date not null,
    beginningCheckNumber varchar(8) not null,
    item_count integer,
    agency varchar(10) not null,
    payment_file varchar(25) not null,
    payment_type varchar(25) not null,
    endingCheckNumber varchar(8) not null,
    payment_control_number varchar(4) not null,
    truck_number varchar(255) not null,
    wrapping_system_type varchar(15) not null,
    printer_id bigint,
    primary key (id),
    unique (agency, payment_type, payment_file, date_of_payment, payment_control_number, truck_number)
);
create table job_state (
    job_id bigint not null,
    acting_user_id bigint not null,
    date timestamp not null,
    state varchar(25) not null,
    primary key (job_id, acting_user_id, date, state)
);
...
alter table job add constraint FK19BBD12FB9D70 foreign key (printer_id) references printer;
alter table job_state add constraint FK57C2418FED1F0D21 foreign key (acting_user_id) references app_user;
alter table job_state add constraint FK57C2418FABE090B3 foreign key (job_id) references job;
...

    The database is seeded with the following data prior to running tests:

...
insert into job (id, agency, payment_type, payment_file, payment_control_number, date_of_payment, beginningCheckNumber, endingCheckNumber, item_count, current_state, printer_id, wrapping_system_type, truck_number)
values (-3, 'RRB', 'Monthly', 'Monthly', '4501', '1998-12-01 08:31:16', '00000001', '00040000', 40000, 'UNASSIGNED', null, 'KERN', '02');
insert into job_state (job_id, acting_user_id, date, state)
values (-3, -1, '1998-11-30 08:31:17', 'UNASSIGNED');
...

    After the database schema is automatically generated and rebuilt by the Hibernate tool, the following test runs fine up until the call to Session.flush():

...
@ContextConfiguration(locations = { "/applicationContext-data.xml", "/applicationContext-service.xml" })
public class JobDaoIntegrationTest extends AbstractTransactionalJUnit4SpringContextTests {

    @Autowired private JobDao jobDao;
    @Autowired private SessionFactory sessionFactory;
    @Autowired private UserService userService;
    @Autowired private PrinterService printerService;
    ...

    @Test
    public void saveJob_JobAdvancedToAssigned_AllExpectedStateChanges() {
        // Get an unassigned Job
        Job job = this.jobDao.getJob(-3L);
        assertEquals(JobState.UNASSIGNED, job.getCurrentState());
        Date advancedToUnassigned = new GregorianCalendar(1998, 10, 30, 8, 31, 17).getTime();
        assertEquals(advancedToUnassigned, job.getStateChange(JobState.UNASSIGNED).getDate());

        // Satisfy advancement constraints and advance
        job.setPrinter(this.printerService.getPrinter(-1L));
        Date advancedToAssigned = new Date();
        job.advanceState(
                this.userService.getUserByUsername("admin"),
                advancedToAssigned);
        assertEquals(JobState.ASSIGNED, job.getCurrentState());
        assertEquals(advancedToUnassigned, job.getStateChange(JobState.UNASSIGNED).getDate());
        assertEquals(advancedToAssigned, job.getStateChange(JobState.ASSIGNED).getDate());

        // Persist to DB
        this.sessionFactory.getCurrentSession().flush();
        ...
    }
    ...
}

    The error thrown is SQLCODE=-803, SQLSTATE=23505:

could not insert collection rows: [jaci.model.job.Job.stateChanges#-3]
org.hibernate.exception.ConstraintViolationException: could not insert collection rows: [jaci.model.job.Job.stateChanges#-3]
    at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:94)
    at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66)
    at org.hibernate.persister.collection.AbstractCollectionPersister.insertRows(AbstractCollectionPersister.java:1416)
    at org.hibernate.action.CollectionUpdateAction.execute(CollectionUpdateAction.java:86)
    at org.hibernate.engine.ActionQueue.execute(ActionQueue.java:279)
    at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:263)
    at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:170)
    at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321)
    at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:50)
    at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1027)
    at jaci.dao.JobDaoIntegrationTest.saveJob_JobAdvancedToAssigned_AllExpectedStateChanges(JobDaoIntegrationTest.java:98)
    at org.springframework.test.context.junit4.SpringTestMethod.invoke(SpringTestMethod.java:160)
    at org.springframework.test.context.junit4.SpringMethodRoadie.runTestMethod(SpringMethodRoadie.java:233)
    at org.springframework.test.context.junit4.SpringMethodRoadie$RunBeforesThenTestThenAfters.run(SpringMethodRoadie.java:333)
    at org.springframework.test.context.junit4.SpringMethodRoadie.runWithRepetitions(SpringMethodRoadie.java:217)
    at org.springframework.test.context.junit4.SpringMethodRoadie.runTest(SpringMethodRoadie.java:197)
    at org.springframework.test.context.junit4.SpringMethodRoadie.run(SpringMethodRoadie.java:143)
    at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.invokeTestMethod(SpringJUnit4ClassRunner.java:160)
    at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.run(SpringJUnit4ClassRunner.java:97)
Caused by: com.ibm.db2.jcc.b.lm: DB2 SQL Error: SQLCODE=-803, SQLSTATE=23505, SQLERRMC=1;ACI_APP.JOB_STATE, DRIVER=3.50.152
    at com.ibm.db2.jcc.b.wc.a(wc.java:575)
    at com.ibm.db2.jcc.b.wc.a(wc.java:57)
    at com.ibm.db2.jcc.b.wc.a(wc.java:126)
    at com.ibm.db2.jcc.b.tk.b(tk.java:1593)
    at com.ibm.db2.jcc.b.tk.c(tk.java:1576)
    at com.ibm.db2.jcc.t4.db.k(db.java:353)
    at com.ibm.db2.jcc.t4.db.a(db.java:59)
    at com.ibm.db2.jcc.t4.t.a(t.java:50)
    at com.ibm.db2.jcc.t4.tb.b(tb.java:200)
    at com.ibm.db2.jcc.b.uk.Gb(uk.java:2355)
    at com.ibm.db2.jcc.b.uk.e(uk.java:3129)
    at com.ibm.db2.jcc.b.uk.zb(uk.java:568)
    at com.ibm.db2.jcc.b.uk.executeUpdate(uk.java:551)
    at org.hibernate.jdbc.NonBatchingBatcher.addToBatch(NonBatchingBatcher.java:46)
    at org.hibernate.persister.collection.AbstractCollectionPersister.insertRows(AbstractCollectionPersister.java:1389)

    Therein lies my problem… A nearly identical class set (in fact, so identical that I've been chomping at the bit to make it a single class that serves both business entities) runs absolutely fine. It is identical except for name: instead of Job it's Web, instead of JobStateChange it's WebStateChange, and instead of JobState it's WebState. Both Job's and Web's SortedSets of StateChanges are mapped as a Hibernate CollectionOfElements. Both are @Embeddable. Both are SortType.NATURAL. Both are backed by an enumeration with some advancement rules in it. And yet when a nearly identical test is run for Web, no issue is discovered and the data flushes fine. For the sake of brevity I won't include all of the Web classes here, but I will include the test, and if anyone wants to see the actual sources, I'll include them (just leave a comment).

    The data seed:

insert into web (id, stock_type, pallet, pallet_id, date_received, first_icn, last_icn, shipment_id, current_state)
values (-1, 'PF', '0011', 'A', '2008-12-31 08:30:02', '000000001', '000080000', -1, 'UNSTAGED');
insert into web_state (web_id, date, state, acting_user_id)
values (-1, '2008-12-31 08:30:03', 'UNSTAGED', -1);

    The test:

...
@ContextConfiguration(locations = { "/applicationContext-data.xml", "/applicationContext-service.xml" })
public class WebDaoIntegrationTest extends AbstractTransactionalJUnit4SpringContextTests {

    @Autowired private WebDao webDao;
    @Autowired private UserService userService;
    @Autowired private SessionFactory sessionFactory;
    ...

    @Test
    public void saveWeb_WebAdvancedToNewState_AllExpectedStateChanges() {
        Web web = this.webDao.getWeb(-1L);
        Date advancedToUnstaged = new GregorianCalendar(2008, 11, 31, 8, 30, 3).getTime();
        assertEquals(WebState.UNSTAGED, web.getCurrentState());
        assertEquals(advancedToUnstaged, web.getState(WebState.UNSTAGED).getDate());

        Date advancedToStaged = new Date();
        web.advanceState(
                this.userService.getUserByUsername("admin"),
                advancedToStaged);
        this.sessionFactory.getCurrentSession().flush();

        web = this.webDao.getWeb(web.getId());
        assertEquals(
                "Web should have moved to STAGED State.",
                WebState.STAGED,
                web.getCurrentState());
        assertEquals(advancedToUnstaged, web.getState(WebState.UNSTAGED).getDate());
        assertEquals(advancedToStaged, web.getState(WebState.STAGED).getDate());
        assertNotNull(web.getState(WebState.UNSTAGED));
        assertNotNull(web.getState(WebState.STAGED));
    }
    ...
}

    As you can see, I assert that the Web was reconstituted the way I expect, advance it, flush it to the DB, re-get it, and verify that the states are as I expect. Everything works perfectly. Not so with Job. A possibly pertinent detail: the reconstitution code works fine if I cease to map JobStateChange.date as a TIMESTAMP and instead map it as a DATE, and ensure that all of the StateChanges occur on different dates. The problem is that this particular business entity can go through many state changes in a single day, so it needs to be sorted by timestamp rather than by date; if I don't do this, I can't sort the StateChanges correctly. That being said, WebStateChange.date is also mapped as a TIMESTAMP, so I again remain absolutely befuddled as to where this error is arising from.
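    To make the precision concern concrete, here is a tiny standalone sketch (my own illustration, not code from either class set) of how two java.util.Date values that are distinct in memory can collapse to the same value once truncated to whole seconds, which is one way a composite key containing a timestamp could yield a duplicate-key error:

import java.util.Date;

public class TimestampPrecisionDemo {
    public static void main(String[] args) {
        // Two dates in the same second, differing only in milliseconds.
        Date a = new Date(1230000000123L);
        Date b = new Date(1230000000456L);

        // Distinct as far as the in-memory SortedSet is concerned:
        System.out.println(a.equals(b));                              // false
        // Identical once truncated to second precision:
        System.out.println(a.getTime() / 1000 == b.getTime() / 1000); // true
    }
}

    Whether any such truncation actually happens depends on the DB2 TIMESTAMP precision and the driver, so treat this only as one hypothesis to rule out.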
    I tried to give all of the technical details of the implementation fairly thoroughly, but since this question is so implementation-specific, if I missed anything just let me know in the comments and I'll include it. Thanks so much for your help! UPDATE: Since it turns out to be important to the solution of my problem, I have to include the pertinent bits of the WebStateChange class as well.

...
@Embeddable
public class WebStateChange implements Comparable<WebStateChange> {

    @Temporal(TemporalType.TIMESTAMP)
    @Column(nullable = false)
    private Date date;

    @Enumerated(EnumType.STRING)
    @Column(nullable = false, length = WebState.FIELD_LENGTH)
    private WebState state;

    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "acting_user_id", nullable = false)
    private User actingUser;
    ...

    WebStateChange(
            final WebState state,
            final User actingUser,
            final Date date) {
        ExceptionUtils.illegalNullArgs(state, actingUser, date);
        this.state = state;
        this.actingUser = actingUser;
        this.date = new Date(date.getTime());
    }

    @Override
    public int compareTo(final WebStateChange otherStateChange) {
        return this.date.compareTo(otherStateChange.date);
    }

    @Override
    public boolean equals(final Object candidate) {
        if (this == candidate) {
            return true;
        } else if (!(candidate instanceof WebStateChange)) {
            return false;
        }
        WebStateChange candidateWebState = (WebStateChange) candidate;
        return this.getState() == candidateWebState.getState()
                && this.getUser().equals(candidateWebState.getUser())
                && this.getDate().equals(candidateWebState.getDate());
    }

    @Override
    public int hashCode() {
        return this.getState().hashCode() + this.getUser().hashCode() + this.getDate().hashCode();
    }
    ...
}
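
    For reference, the identity contract that both JobStateChange and WebStateChange implement boils down to the triple (state, actingUser, date). A condensed, self-contained sketch of that contract (my own distillation; it assumes Java 7+ for java.util.Objects, which the Java 1.6 classes above do not use):

import java.util.Date;
import java.util.Objects;

final class StateChangeKey implements Comparable<StateChangeKey> {

    private final String state; // stand-in for the JobState/WebState enum
    private final long userId;  // stand-in for the User entity
    private final Date date;

    StateChangeKey(String state, long userId, Date date) {
        this.state = state;
        this.userId = userId;
        this.date = new Date(date.getTime()); // defensive copy, as WebStateChange does
    }

    @Override
    public int compareTo(StateChangeKey other) {
        return this.date.compareTo(other.date); // ordering by date alone, like both classes
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof StateChangeKey)) return false;
        StateChangeKey k = (StateChangeKey) o;
        return userId == k.userId && state.equals(k.state) && date.equals(k.date);
    }

    @Override
    public int hashCode() {
        return Objects.hash(state, userId, date);
    }
}

    One consequence of this shape worth knowing: a TreeSet deduplicates by compareTo, not equals, so because compareTo orders by date alone, two changes with the same timestamp but different states would be treated as duplicates by the SortedSet even though equals distinguishes them.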

    Read the article

  • OpenGL 3.x Assimp trouble implementing phong shading (normals?)

    - by Defcronyke
    I'm having trouble getting Phong shading to look right. I'm pretty sure there's something wrong with either my OpenGL calls or the way I'm loading my normals, but I guess it could be something else, since 3D graphics and Assimp are both still very new to me. When trying to load .obj/.mtl files, the problems I'm seeing are:

    1. The models seem to be lit too intensely (less Phong-style and more completely washed out, too bright).
    2. Faces that are lit seem to be lit equally all over (with the exception of a specular highlight that shows only when the light source position is moved to be practically right on top of the model).

    Because of problems 1 and 2, spheres look very wrong (picture of sphere), and things with larger faces look (less noticeably) wrong too (picture of cube). I could be wrong, but to me this doesn't look like proper Phong shading. Here's the code that I think might be relevant (I can post more if necessary):

file: assimpRenderer.cpp

#include "assimpRenderer.hpp"

namespace def {

assimpRenderer::assimpRenderer(std::string modelFilename, float modelScale)
{
    initSFML();
    initOpenGL();
    if (assImport(modelFilename)) // if modelFile loaded successfully
    {
        initScene();
        mainLoop(modelScale);
        shutdownScene();
    }
    shutdownOpenGL();
    shutdownSFML();
}

assimpRenderer::~assimpRenderer()
{
}

void assimpRenderer::initSFML()
{
    windowWidth = 800;
    windowHeight = 600;
    settings.majorVersion = 3;
    settings.minorVersion = 3;
    app = NULL;
    shader = NULL;
    app = new sf::Window(sf::VideoMode(windowWidth, windowHeight, 32), "OpenGL 3.x Window", sf::Style::Default, settings);
    app->setFramerateLimit(240);
    app->setActive();
    return;
}

void assimpRenderer::shutdownSFML()
{
    delete app;
    return;
}

void assimpRenderer::initOpenGL()
{
    GLenum err = glewInit();
    if (GLEW_OK != err)
    {
        /* Problem: glewInit failed, something is seriously wrong. */
        std::cerr << "Error: " << glewGetErrorString(err) << std::endl;
    }

    // check the OpenGL context version that's currently in use
    int glVersion[2] = {-1, -1};
    glGetIntegerv(GL_MAJOR_VERSION, &glVersion[0]); // get the OpenGL Major version
    glGetIntegerv(GL_MINOR_VERSION, &glVersion[1]); // get the OpenGL Minor version
    std::cout << "Using OpenGL Version: " << glVersion[0] << "." << glVersion[1] << std::endl;
    return;
}

void assimpRenderer::shutdownOpenGL()
{
    return;
}

void assimpRenderer::initScene()
{
    // allocate heap space for VAOs, VBOs, and IBOs
    vaoID = new GLuint[scene->mNumMeshes];
    vboID = new GLuint[scene->mNumMeshes*2];
    iboID = new GLuint[scene->mNumMeshes];

    glClearColor(0.4f, 0.6f, 0.9f, 0.0f);
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LEQUAL);
    glEnable(GL_CULL_FACE);

    shader = new Shader("shader.vert", "shader.frag");
    projectionMatrix = glm::perspective(60.0f, (float)windowWidth / (float)windowHeight, 0.1f, 100.0f);
    rot = 0.0f;
    rotSpeed = 50.0f;
    faceIndex = 0;
    colorArrayA = NULL;
    colorArrayD = NULL;
    colorArrayS = NULL;
    normalArray = NULL;
    genVAOs();
    return;
}

void assimpRenderer::shutdownScene()
{
    delete [] iboID;
    delete [] vboID;
    delete [] vaoID;
    delete shader;
}

void assimpRenderer::renderScene(float modelScale)
{
    sf::Time elapsedTime = clock.getElapsedTime();
    clock.restart();
    if (rot > 360.0f)
        rot = 0.0f;
    rot += rotSpeed * elapsedTime.asSeconds();

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

    viewMatrix = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, -3.0f, -10.0f)); // move back a bit
    modelMatrix = glm::scale(glm::mat4(1.0f), glm::vec3(modelScale)); // scale model
    modelMatrix = glm::rotate(modelMatrix, rot, glm::vec3(0, 1, 0));
    //modelMatrix = glm::rotate(modelMatrix, 25.0f, glm::vec3(0, 1, 0));

    glm::vec3 lightPosition( 0.0f, -100.0f, 0.0f );
    float lightPositionArray[3];
    lightPositionArray[0] = lightPosition[0];
    lightPositionArray[1] = lightPosition[1];
    lightPositionArray[2] = lightPosition[2];

    shader->bind();

    int projectionMatrixLocation = glGetUniformLocation(shader->id(), "projectionMatrix");
    int viewMatrixLocation = glGetUniformLocation(shader->id(), "viewMatrix");
    int modelMatrixLocation = glGetUniformLocation(shader->id(), "modelMatrix");
    int ambientLocation = glGetUniformLocation(shader->id(), "ambientColor");
    int diffuseLocation = glGetUniformLocation(shader->id(), "diffuseColor");
    int specularLocation = glGetUniformLocation(shader->id(), "specularColor");
    int lightPositionLocation = glGetUniformLocation(shader->id(), "lightPosition");
    int normalMatrixLocation = glGetUniformLocation(shader->id(), "normalMatrix");

    glUniformMatrix4fv(projectionMatrixLocation, 1, GL_FALSE, &projectionMatrix[0][0]);
    glUniformMatrix4fv(viewMatrixLocation, 1, GL_FALSE, &viewMatrix[0][0]);
    glUniformMatrix4fv(modelMatrixLocation, 1, GL_FALSE, &modelMatrix[0][0]);
    glUniform3fv(lightPositionLocation, 1, lightPositionArray);

    for (unsigned int i = 0; i < scene->mNumMeshes; i++)
    {
        colorArrayA = new float[3];
        colorArrayD = new float[3];
        colorArrayS = new float[3];
        material = scene->mMaterials[scene->mNumMaterials-1];

        normalArray = new float[scene->mMeshes[i]->mNumVertices * 3];
        unsigned int normalIndex = 0;
        for (unsigned int j = 0; j < scene->mMeshes[i]->mNumVertices * 3; j+=3, normalIndex++)
        {
            normalArray[j] = scene->mMeshes[i]->mNormals[normalIndex].x;   // x
            normalArray[j+1] = scene->mMeshes[i]->mNormals[normalIndex].y; // y
            normalArray[j+2] = scene->mMeshes[i]->mNormals[normalIndex].z; // z
        }
        normalIndex = 0;
        glUniformMatrix3fv(normalMatrixLocation, 1, GL_FALSE, normalArray);

        aiColor3D ambient(0.0f, 0.0f, 0.0f);
        material->Get(AI_MATKEY_COLOR_AMBIENT, ambient);
        aiColor3D diffuse(0.0f, 0.0f, 0.0f);
        material->Get(AI_MATKEY_COLOR_DIFFUSE, diffuse);
        aiColor3D specular(0.0f, 0.0f, 0.0f);
        material->Get(AI_MATKEY_COLOR_SPECULAR, specular);

        colorArrayA[0] = ambient.r;  colorArrayA[1] = ambient.g;  colorArrayA[2] = ambient.b;
        colorArrayD[0] = diffuse.r;  colorArrayD[1] = diffuse.g;  colorArrayD[2] = diffuse.b;
        colorArrayS[0] = specular.r; colorArrayS[1] = specular.g; colorArrayS[2] = specular.b;

        // bind color for each mesh
        glUniform3fv(ambientLocation, 1, colorArrayA);
        glUniform3fv(diffuseLocation, 1, colorArrayD);
        glUniform3fv(specularLocation, 1, colorArrayS);

        // render all meshes
        glBindVertexArray(vaoID[i]); // bind our VAO
        glDrawElements(GL_TRIANGLES, scene->mMeshes[i]->mNumFaces*3, GL_UNSIGNED_INT, 0);
        glBindVertexArray(0); // unbind our VAO

        delete [] normalArray;
        delete [] colorArrayA;
        delete [] colorArrayD;
        delete [] colorArrayS;
    }

    shader->unbind();
    app->display();
    return;
}

void assimpRenderer::handleEvents()
{
    sf::Event event;
    while (app->pollEvent(event))
    {
        if (event.type == sf::Event::Closed)
        {
            app->close();
        }
        if ((event.type == sf::Event::KeyPressed) && (event.key.code == sf::Keyboard::Escape))
        {
            app->close();
        }
        if (event.type == sf::Event::Resized)
        {
            glViewport(0, 0, event.size.width, event.size.height);
        }
    }
    return;
}

void assimpRenderer::mainLoop(float modelScale)
{
    while (app->isOpen())
    {
        renderScene(modelScale);
        handleEvents();
    }
}

bool assimpRenderer::assImport(const std::string& pFile)
{
    // read the file with some example postprocessing
    scene = importer.ReadFile(pFile,
        aiProcess_CalcTangentSpace |
        aiProcess_Triangulate |
        aiProcess_JoinIdenticalVertices |
        aiProcess_SortByPType);

    // if the import failed, report it
    if (!scene)
    {
        std::cerr << "Error: " << importer.GetErrorString() << std::endl;
        return false;
    }
    return true;
}

void assimpRenderer::genVAOs()
{
    int vboIndex = 0;
    for (unsigned int i = 0; i < scene->mNumMeshes; i++, vboIndex+=2)
    {
        mesh = scene->mMeshes[i];
        indexArray = new unsigned int[mesh->mNumFaces * sizeof(unsigned int) * 3];

        // convert assimp faces format to array
        faceIndex = 0;
        for (unsigned int t = 0; t < mesh->mNumFaces; ++t)
        {
            const struct aiFace* face = &mesh->mFaces[t];
            std::memcpy(&indexArray[faceIndex], face->mIndices, sizeof(float) * 3);
            faceIndex += 3;
        }

        // generate VAO
        glGenVertexArrays(1, &vaoID[i]);
        glBindVertexArray(vaoID[i]);

        // generate IBO for faces
        glGenBuffers(1, &iboID[i]);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, iboID[i]);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(GLuint) * mesh->mNumFaces * 3, indexArray, GL_STATIC_DRAW);

        // generate VBO for vertices
        if (mesh->HasPositions())
        {
            glGenBuffers(1, &vboID[vboIndex]);
            glBindBuffer(GL_ARRAY_BUFFER, vboID[vboIndex]);
            glBufferData(GL_ARRAY_BUFFER, mesh->mNumVertices * sizeof(GLfloat) * 3, mesh->mVertices, GL_STATIC_DRAW);
            glEnableVertexAttribArray((GLuint)0);
            glVertexAttribPointer((GLuint)0, 3, GL_FLOAT, GL_FALSE, 0, 0);
        }

        // generate VBO for normals
        if (mesh->HasNormals())
        {
            normalArray = new float[scene->mMeshes[i]->mNumVertices * 3];
            unsigned int normalIndex = 0;
            for (unsigned int j = 0; j < scene->mMeshes[i]->mNumVertices * 3; j+=3, normalIndex++)
            {
                normalArray[j] = scene->mMeshes[i]->mNormals[normalIndex].x;   // x
                normalArray[j+1] = scene->mMeshes[i]->mNormals[normalIndex].y; // y
                normalArray[j+2] = scene->mMeshes[i]->mNormals[normalIndex].z; // z
            }
            normalIndex = 0;
            glGenBuffers(1, &vboID[vboIndex+1]);
            glBindBuffer(GL_ARRAY_BUFFER, vboID[vboIndex+1]);
            glBufferData(GL_ARRAY_BUFFER, mesh->mNumVertices * sizeof(GLfloat) * 3, normalArray, GL_STATIC_DRAW);
            glEnableVertexAttribArray((GLuint)1);
            glVertexAttribPointer((GLuint)1, 3, GL_FLOAT, GL_FALSE, 0, 0);
            delete [] normalArray;
        }

        // tex coord stuff goes here

        // unbind buffers
        glBindVertexArray(0);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
        delete [] indexArray;
    }
    vboIndex = 0;
    return;
}

} // namespace def

file: shader.vert

#version 150 core

in vec3 in_Position;
in vec3 in_Normal;

uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform mat4 modelMatrix;
uniform vec3 lightPosition;
uniform mat3 normalMatrix;

smooth out vec3 vVaryingNormal;
smooth out vec3 vVaryingLightDir;

void main()
{
    // derive MVP and MV matrices
    mat4 modelViewProjectionMatrix = projectionMatrix * viewMatrix * modelMatrix;
    mat4 modelViewMatrix = viewMatrix * modelMatrix;

    // get surface normal in eye coordinates
    vVaryingNormal = normalMatrix * in_Normal;

    // get vertex position in eye coordinates
    vec4 vPosition4 = modelViewMatrix * vec4(in_Position, 1.0);
    vec3 vPosition3 = vPosition4.xyz / vPosition4.w;

    // get vector to light source
    vVaryingLightDir = normalize(lightPosition - vPosition3);

    // Set the position of the current vertex
    gl_Position = modelViewProjectionMatrix * vec4(in_Position, 1.0);
}

file: shader.frag

#version 150 core

out vec4 out_Color;

uniform vec3 ambientColor;
uniform vec3 diffuseColor;
uniform vec3 specularColor;

smooth in vec3 vVaryingNormal;
smooth in vec3 vVaryingLightDir;

void main()
{
    // dot product gives us diffuse intensity
    float diff = max(0.0, dot(normalize(vVaryingNormal), normalize(vVaryingLightDir)));

    // multiply intensity by diffuse color, force alpha to 1.0
    out_Color = vec4(diff * diffuseColor, 1.0);

    // add in ambient light
    out_Color += vec4(ambientColor, 1.0);

    // specular light
    vec3 vReflection = normalize(reflect(-normalize(vVaryingLightDir), normalize(vVaryingNormal)));
    float spec = max(0.0, dot(normalize(vVaryingNormal), vReflection));
    if (diff != 0)
    {
        float fSpec = pow(spec, 128.0);
        // Set the output color of our current pixel
        out_Color.rgb += vec3(fSpec, fSpec, fSpec);
    }
}

    I know it's a lot to look through, but I'm putting most of the code up so as not to assume where the problem is. Thanks in advance to anyone who has some time to help me pinpoint the problem(s)! I've been trying to sort it out for two days now and I'm not getting anywhere on my own.

    Read the article

  • my asp.net mvc 2.0 application fails with error "No parameterless constructor defined for this objec

    - by loviji
    Hello, I'm new to ASP.NET MVC 2. I'm trying to list all data from one table (a MS SQL Server table), using Entity Framework as the ORM. Here is what I've written so far:

    Model:

private uqsEntities _uqsEntity;

public permissionRepository(uqsEntities uqsEntity)
{
    _uqsEntity = uqsEntity;
}

public IEnumerable<userPermissions> getAllData()
{
    return _uqsEntity.userPermissions;
}

    Controller:

private DataManager _dataManager;

public ManagePermissionsController(DataManager datamanager)
{
    _dataManager = datamanager;
}

public ActionResult Index()
{
    return RedirectToAction("List");
}

[AcceptVerbs(HttpVerbs.Get)]
public ActionResult List()
{
    return List(null);
}

[AcceptVerbs(HttpVerbs.Post)]
public ActionResult List(int? userID)
{
    return View(_dataManager.Permission.getAllData().ToList());
}

    Route:

routes.MapRoute(
    "ManagePerm",                                              // Route name
    "{controller}/{action}/{id}",                              // URL with parameters
    new { controller = "ManagePermissions", action = "Index" },// Parameter defaults
    new string[] { "uqs.Controllers" }                         // Namespaces
);

    The view was generated automatically by Visual Studio (right-click inside the action). When I run the app, it fails with "No parameterless constructor defined for this object."

    Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

    Exception Details: System.MissingMethodException: No parameterless constructor defined for this object.

    Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

    Stack Trace:

[MissingMethodException: No parameterless constructor defined for this object.]
   System.RuntimeTypeHandle.CreateInstance(RuntimeType type, Boolean publicOnly, Boolean noCheck, Boolean& canBeCached, RuntimeMethodHandle& ctor, Boolean& bNeedSecurityCheck) +0
   System.RuntimeType.CreateInstanceSlow(Boolean publicOnly, Boolean fillCache) +86
   System.RuntimeType.CreateInstanceImpl(Boolean publicOnly, Boolean skipVisibilityChecks, Boolean fillCache) +230
   System.Activator.CreateInstance(Type type, Boolean nonPublic) +67
   System.Web.Mvc.DefaultControllerFactory.GetControllerInstance(RequestContext requestContext, Type controllerType) +80

[InvalidOperationException: An error occurred when trying to create a controller of type 'uqs.Controllers.ManagePermissionsController'. Make sure that the controller has a parameterless public constructor.]
   System.Web.Mvc.DefaultControllerFactory.GetControllerInstance(RequestContext requestContext, Type controllerType) +190
   System.Web.Mvc.DefaultControllerFactory.CreateController(RequestContext requestContext, String controllerName) +68
   System.Web.Mvc.MvcHandler.ProcessRequestInit(HttpContextBase httpContext, IController& controller, IControllerFactory& factory) +118
   System.Web.Mvc.MvcHandler.BeginProcessRequest(HttpContextBase httpContext, AsyncCallback callback, Object state) +46
   System.Web.Mvc.MvcHandler.BeginProcessRequest(HttpContext httpContext, AsyncCallback callback, Object state) +63
   System.Web.Mvc.MvcHandler.System.Web.IHttpAsyncHandler.BeginProcessRequest(HttpContext context, AsyncCallback cb, Object extraData) +13
   System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +8679186
   System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +155

    Please, can somebody help me track down the problem?
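
    For anyone puzzled by the mechanism rather than the framework: a reflection-based controller factory can only call a constructor it knows the arguments for, so a constructor like ManagePermissionsController(DataManager) is invisible to the default factory, which looks for a parameterless one. Here is the same failure mode sketched in Java, purely for illustration (the class names are hypothetical; in ASP.NET MVC the usual fix is to register a controller factory that knows about your DI container):

public class NoArgCtorDemo {

    static class Controller {
        // Only a parameterized constructor exists, so the compiler
        // does not generate a default (parameterless) one.
        Controller(String dependency) { }
    }

    public static void main(String[] args) {
        try {
            // Naive reflection-based instantiation, as a default factory would do it:
            Object c = Controller.class.getDeclaredConstructor().newInstance();
            System.out.println(c);
        } catch (ReflectiveOperationException e) {
            // NoSuchMethodException: no () constructor exists, which is the
            // Java analogue of "No parameterless constructor defined for this object."
            System.out.println(e);
        }
    }
}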

    Read the article

< Previous Page | 949 950 951 952 953 954 955 956 957 958 959 960  | Next Page >