Search Results

Search found 3936 results on 158 pages for 'sun java6 jdk'.

Page 107/158 | < Previous Page | 103 104 105 106 107 108 109 110 111 112 113 114  | Next Page >

  • java.lang.ClassCastException: $Proxy99 cannot be cast

    - by svaret
    Hi, I am using JBoss 4.2.2 and Java 6. The deployed EAR's name is apa.ear. In a servlet I have the following code line: placeBid = (PlaceBid) context.lookup("apa/" + PlaceBid.class.getSimpleName() + "/remote"); I have a generated jboss-app.xml like this: <jboss-app> <loader-repository>apa:app=ejb3</loader-repository> </jboss-app> When trying to get the PlaceBid via the context I get this exception: java.lang.ClassCastException: $Proxy99 cannot be cast to se.nextit.actionbazaar.buslogic.PlaceBid The PlaceBid interface looks like this: @Remote public interface PlaceBid { Long addBid(String userId, Long itemId, Double bidPrice); } When I run the example that comes with EJB3 in Action it works. The EJB3 in Action sample code builds with Ant; I want to use Maven, so I have rearranged the code a bit. However, I don't understand what I am doing wrong here. I have some thoughts about the jboss-app.xml file; I am not sure how its content should look. Grateful for any help. Best wishes Lasse
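    A ClassCastException where a proxy "cannot be cast" to an interface with exactly the expected name usually means that interface has been loaded by two different classloaders, for example once by the EAR's scoped loader repository (the apa:app=ejb3 entry above) and once from the WAR or server classpath. A minimal diagnostic sketch, reusing the same lookup string as above, that prints which loaders are involved:

        Object ref = context.lookup("apa/" + PlaceBid.class.getSimpleName() + "/remote");

        // The cast only works when both sides were loaded by the same classloader.
        for (Class<?> iface : ref.getClass().getInterfaces()) {
            System.out.println(iface.getName() + " loaded by " + iface.getClassLoader());
        }
        System.out.println(PlaceBid.class.getName() + " loaded by " + PlaceBid.class.getClassLoader());
        // If the two PlaceBid entries show different loaders, the interface jar is
        // packaged twice (e.g. in both the WAR's WEB-INF/lib and the EJB jar).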

    Read the article

  • jruby rubygems update breaks jgem

    - by brad
    Has anyone seen this? No jgem command works at all, though jruby -S gem list does work. I'm using JRuby 1.3.1 and the Sun Java 6 JRE. root@test:/usr/local: jgem --version 1.3.3 root@test:/usr/local: jgem update --system JRuby limited openssl loaded. gem install jruby-openssl for full support. http://wiki.jruby.org/wiki/JRuby_Builtin_OpenSSL Updating RubyGems Updating rubygems-update Successfully installed rubygems-update-1.3.6 /usr/local/jruby/lib/ruby/site_ruby/1.8/rubygems/commands/update_command.rb:103:Warning: Gem::SourceIndex#search support for String patterns is deprecated Updating RubyGems to 1.3.6 Installing RubyGems 1.3.6 RubyGems 1.3.6 installed root@test:/usr/local: jgem list /usr/local/jruby/bin/jgem: line 8: require: command not found /usr/local/jruby/bin/jgem: line 9: require: command not found /usr/local/jruby/bin/jgem: line 10: require: command not found /usr/local/jruby/bin/jgem: line 12: required_version: command not found /usr/local/jruby/bin/jgem: line 14: unless: command not found /usr/local/jruby/bin/jgem: line 15: abort: command not found /usr/local/jruby/bin/jgem: line 16: end: command not found /usr/local/jruby/bin/jgem: line 18: args: command not found /usr/local/jruby/bin/jgem: line 20: begin: command not found /usr/local/jruby/bin/jgem: line 21: Gem::GemRunner.new.run: command not found /usr/local/jruby/bin/jgem: line 22: rescue: command not found /usr/local/jruby/bin/jgem: line 23: exit: e.exit_code: numeric argument required

    Read the article

  • Java reading xml element without prefix but within the scope of a namespace

    - by wsxedc
    Functionally, the two blocks should be the same: <soapenv:Body> <ns1:login xmlns:ns1="urn:soap.sof.com"> <userInfo> <username>superuser</username> <password>qapass</password> </userInfo> </ns1:login> </soapenv:Body> ----------------------- <soapenv:Body> <ns1:login xmlns:ns1="urn:soap.sof.com"> <ns1:userInfo> <ns1:username>superuser</ns1:username> <ns1:password>qapass</ns1:password> </ns1:userInfo> </ns1:login> </soapenv:Body> However, when I read them using Axis2 (and I have tested it with Java 6 as well), I am having a problem. MessageFactory factory = MessageFactory.newInstance(); SOAPMessage soapMsg = factory.createMessage(new MimeHeaders(), SimpleTest.class.getResourceAsStream("LoginSoap.xml")); SOAPBody body = soapMsg.getSOAPBody(); NodeList nodeList = body.getElementsByTagNameNS("urn:soap.sof.com", "login"); System.out.println("Try to get login element" + nodeList.getLength()); // I can get the login element Node item = nodeList.item(0); NodeList elementsByTagNameNS = ((Element)item).getElementsByTagNameNS("urn:soap.sof.com", "username"); System.out.println("try to get username element " + elementsByTagNameNS.getLength()); If I replace the second getElementsByTagNameNS with ((Element)item).getElementsByTagName("username");, I am able to get the username element. Doesn't username have the ns1 namespace even though it doesn't have the prefix? Am I supposed to keep track of the namespace scope to read an element? Wouldn't it become nasty if my XML elements are many levels deep? Is there a workaround where I can read the element in the ns1 namespace without knowing whether a prefix is defined?
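    One detail worth checking: xmlns:ns1 only binds the prefix, it is not a default namespace declaration, so the unprefixed userInfo/username/password elements are in no namespace at all, which is why the namespace-qualified lookup finds nothing. A small sketch, reusing the item variable from the code above, of the two lookups that do match:

        // "*" is the DOM wildcard: matches username in any namespace, or in none
        NodeList anyNamespace = ((Element) item).getElementsByTagNameNS("*", "username");

        // Matches the element as it really is here: unprefixed and in no namespace
        NodeList noNamespace = ((Element) item).getElementsByTagName("username");

        System.out.println("wildcard matches: " + anyNamespace.getLength());
        System.out.println("no-namespace matches: " + noNamespace.getLength());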

    Read the article

  • Axis Fault - axis (401)Unauthorized

    - by jani
    Hi all, I am trying to create a simple Axis web service. I am using Axis 1.2.1, JDK 6, WebLogic. Everything seems to be fine except invoking the web service. When I try to invoke the service it gives me an 'Unauthorized' error. Any ideas about what I am doing wrong? Thanks in advance AxisFault faultCode: {http://xml.apache.org/axis/}HTTP faultSubcode: faultString: (401)Unauthorized faultActor: faultNode: faultDetail: {}:return code: 401 {http://xml.apache.org/axis/}HttpErrorCode:401 (401)Unauthorized at org.apache.axis.transport.http.HTTPSender.readFromSocket(HTTPSender.java:744) at org.apache.axis.transport.http.HTTPSender.invoke(HTTPSender.java:144) at org.apache.axis.strategies.InvocationStrategy.visit(InvocationStrategy.java:32) at org.apache.axis.SimpleChain.doVisiting(SimpleChain.java:118) at org.apache.axis.SimpleChain.invoke(SimpleChain.java:83) at org.apache.axis.client.AxisClient.invoke(AxisClient.java:165) at org.apache.axis.client.Call.invokeEngine(Call.java:2765) at org.apache.axis.client.Call.invoke(Call.java:2748) at org.apache.axis.client.Call.invoke(Call.java:2424) at org.apache.axis.client.Call.invoke(Call.java:2347) at org.apache.axis.client.Call.invoke(Call.java:1804)
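    A 401 at the HTTP transport level usually just means the endpoint is protected (for example by a security constraint in WebLogic) and the client never sent credentials. A hedged sketch, assuming HTTP Basic authentication and with made-up endpoint, operation and credential values, of supplying them on an Axis 1.x call:

        import java.net.URL;
        import org.apache.axis.client.Call;
        import org.apache.axis.client.Service;

        public class AuthenticatedClient {
            public static void main(String[] args) throws Exception {
                Service service = new Service();
                Call call = (Call) service.createCall();
                call.setTargetEndpointAddress(new URL("http://localhost:7001/MyService")); // hypothetical endpoint
                call.setOperationName("myOperation");                                      // hypothetical operation
                call.setUsername("wsuser");      // whatever user the server's security realm expects
                call.setPassword("wspassword");
                Object result = call.invoke(new Object[] {});
                System.out.println(result);
            }
        }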

    Read the article

  • Error adding certificate to cacerts. Unknown key spec

    - by Alvaro Villanueva
    I am using JDK 1.6 on Windows. I have a .der file (DER-encoded X.509 certificate) that I would like to add to my cacerts file, so I tried the following: keytool -import -keystore "C:\Program Files\Java\jdk1.6.0_27\jre\lib\security\cacerts" -trustcacerts -alias openldap -file "C:\cacert.der" I got the following error: java.security.cert.CertificateParsingException: java.io.IOException: subject key, java.security.spec.InvalidKeySpecException: Unknown key spec At first, I thought it was a problem with the .der certificate, but then doing the following I got exactly the same error: keytool -list -keystore "C:\Program Files\Java\jdk1.6.0_27\jre\lib\security\cacerts" Any ideas why this problem is appearing? I have not found anything on the Web. Thanks in advance.
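    Since keytool -list fails on the untouched cacerts as well, the keystore or the JDK's security provider configuration looks more suspicious than the certificate. A quick way to confirm the .der file itself parses cleanly is to load it directly; a sketch, using the same path as above:

        import java.io.FileInputStream;
        import java.security.cert.CertificateFactory;
        import java.security.cert.X509Certificate;

        public class DerCheck {
            public static void main(String[] args) throws Exception {
                FileInputStream in = new FileInputStream("C:\\cacert.der");
                try {
                    CertificateFactory cf = CertificateFactory.getInstance("X.509");
                    X509Certificate cert = (X509Certificate) cf.generateCertificate(in);
                    // If this prints a subject, the DER file is fine and the problem
                    // lies in the cacerts keystore or the JDK's security providers.
                    System.out.println(cert.getSubjectDN());
                } finally {
                    in.close();
                }
            }
        }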

    Read the article

  • Persistence provider caller does not implement the EJB3 spec

    - by Joshua
    WARN [Ejb3Configuration] Persistence provider caller does not implement the EJB3 spec correctly. PersistenceUnitInfo.getNewTempClassLoader() is null. How do you get rid of the above warning? 10:42:08,032 INFO [PersistenceUnitDeployment] Starting persistence unit persistence.unit:unitName=k12-ear.ear/k12-ejb-1.0.0.jar#k12 10:42:08,371 INFO [Version] Hibernate Annotations 3.4.0.GA 10:42:08,442 INFO [Environment] Hibernate 3.3.1.GA 10:42:08,450 INFO [Environment] hibernate.properties not found 10:42:08,486 INFO [Environment] Bytecode provider name : javassist 10:42:08,492 INFO [Environment] using JDK 1.4 java.sql.Timestamp handling 10:42:08,754 INFO [Version] Hibernate Commons Annotations 3.1.0.GA 10:42:08,989 INFO [Version] Hibernate EntityManager 3.4.0.GA 10:42:09,211 INFO [Ejb3Configuration] Processing PersistenceUnitInfo [ name: k12 ...] 10:42:09,458 WARN [Ejb3Configuration] Persistence provider caller does not implement the EJB3 spec correctly. PersistenceUnitInfo.getNewTempClassLoader() is null. 10:42:09,620 WARN [Ejb3Configuration] Defining hibernate.transaction.flush_before_completion=true ignored in HEM 10:42:09,745 DEBUG [AnnotationConfiguration] Execute first pass mapping processing

    Read the article

  • Google App Engine with Java - Error running javac.exe compiler

    - by dta
    On Windows XP. I just downloaded and unzipped the Google App Engine Java SDK to C:\Program Files\appengine-java-sdk. I have the JDK installed in C:\Program Files\Java\jdk1.6.0_20. I ran the sample application with appengine-java-sdk\bin\dev_appserver.cmd appengine-java-sdk\demos\guestbook\war Then I visited localhost:8080 to find: HTTP ERROR 500 Problem accessing /. Reason: Error running javac.exe compiler Caused by: Error running javac.exe compiler at org.apache.tools.ant.taskdefs.compilers.DefaultCompilerAdapter.executeExternalCompile(DefaultCompilerAdapter.java:473) How do I fix it? My JAVA_HOME points to C:\Program Files\Java\jdk1.6.0_20. I also tried changing my appcfg.cmd to: @"C:\Program Files\Java\jdk1.6.0_20\bin\java" -cp "%~dp0..\lib\appengine-tools-api.jar" com.google.appengine.tools.admin.AppCfg %* That didn't work either.
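    The dev server shells out to javac.exe from whatever Java installation launched it, so if the java on the PATH resolves to a plain JRE the JSP compile step can fail exactly like this. A small, hypothetical check of what the running VM really is (class name invented for illustration):

        import java.io.File;

        public class WhichJava {
            public static void main(String[] args) {
                String javaHome = System.getProperty("java.home");
                System.out.println("java.home = " + javaHome);
                // For a JDK, java.home usually points at its embedded jre directory,
                // so javac.exe would sit one level up under <jdk>\bin.
                File javac = new File(new File(javaHome).getParentFile(), "bin" + File.separator + "javac.exe");
                System.out.println("javac.exe found: " + javac.exists());
            }
        }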

    Read the article

  • java.lang.ClassCastException: org.postgresql.jdbc4.Jdbc4Connection cannot be cast to org.postgresql.jdbc4.Jdbc4Connection

    - by ???????? ??????
    I want to get a PGConnection from a PostgreSQL connection in JBoss AS7 (data source postgresql-9.0-801.jdbc4.jar). I got a cast exception when I used (WrappedConnection)connection, so now I use reflection (JDK 1.7): private static PGConnection getPGConnection(Connection connection) throws SQLException { if(connection instanceof PGConnection) { return (PGConnection)connection; } try { Method method = connection.getClass().getMethod("getUnderlyingConnection", (Class[]) null); Object jdbc4Conn = method.invoke(connection, (Object[]) null); return (PGConnection) jdbc4Conn; } catch ... and I catch the exception java.lang.ClassCastException: org.postgresql.jdbc4.Jdbc4Connection cannot be cast to org.postgresql.jdbc4.Jdbc4Connection It is the same class!!! How could that be?
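    Two classes with the same name loaded by different classloaders are different classes to the JVM, which is what typically happens when the PostgreSQL driver is visible both as the AS7 data-source module and inside the deployment; removing the duplicate jar is the usual cure. As an alternative to casting across loaders, here is a hedged sketch (it assumes the pooled connection delegates the JDBC 4 Wrapper API, which most containers do) using unwrap:

        import java.sql.Connection;
        import java.sql.SQLException;
        import org.postgresql.PGConnection;

        public final class PgUnwrap {
            private PgUnwrap() {}

            public static PGConnection toPGConnection(Connection connection) throws SQLException {
                // java.sql.Wrapper is part of JDBC 4 (Java 6+); it lets the wrapper hand back
                // the vendor connection without a hard cast in application code.
                if (connection.isWrapperFor(PGConnection.class)) {
                    return connection.unwrap(PGConnection.class);
                }
                throw new SQLException("Connection does not wrap a PGConnection");
            }
        }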

    Read the article

  • It won't create a Java VM (JNI)

    - by Michael Bruckmeier
    My simple command line app: int _tmain(int argc, _TCHAR* argv[]) { JavaVM *jvm; JNIEnv *env; JavaVMInitArgs vm_args; JavaVMOption options[1]; options[0].optionString = "-Djava.class.path=."; // class path handed to the VM vm_args.version = JNI_VERSION_1_6; // JNI version. This indicates version 1.6 vm_args.nOptions = 1; vm_args.options = options; vm_args.ignoreUnrecognized = 0; jint ret = JNI_CreateJavaVM(&jvm, (void**)&env, &vm_args); return 0; } gives me: Error occurred during initialization of VM Unable to load native library: Can't find dependent libraries The breakpoint at "return 0" is never reached. jvm.dll resides in the same directory as my command line app. I don't get what's wrong. Any ideas? Thanks in advance

    Read the article

  • Chock-full of Identity Customers at Oracle OpenWorld

    - by Tanu Sood
    Oracle OpenWorld (OOW) 2012 kicks off this coming Sunday. Oracle OpenWorld is known to bring in Oracle customers, organizations big and small, from all over the world. And Identity Management is no exception. If you are looking to catch up with Oracle Identity Management customers, hear first-hand about their implementation experiences and discuss industry trends, business drivers, solutions and more at OOW, here are some sessions we recommend you attend: Monday, October 1, 2012 CON9405: Trends in Identity Management 10:45 a.m. – 11:45 a.m., Moscone West 3003 Subject matter experts from Kaiser Permanente and SuperValu share the stage with Amit Jasuja, Senior Vice President, Oracle Identity Management and Security, to discuss how the latest advances in Identity Management are helping customers address emerging requirements for securely enabling cloud, social and mobile environments. CON9492: Simplifying your Identity Management Implementation 3:15 p.m. – 4:15 p.m., Moscone West 3008 Implementation experts from British Telecom, Kaiser Permanente and UPMC participate in a panel to discuss best practices, key strategies and lessons learned based on their own experiences. Attendees will hear first-hand what they can do to streamline and simplify their identity management implementation framework for a quick return-on-investment and maximum efficiency. CON9444: Modernized and Complete Access Management 4:45 p.m. – 5:45 p.m., Moscone West 3008 We have come a long way from the days of web single sign-on addressing the core business requirements. Today, as technology and business evolve, organizations are seeking new capabilities like federation, token services, fine-grained authorizations, web fraud prevention and strong authentication. This session will explore the emerging requirements for access management, what a complete solution is like, complemented with real-world customer case studies from ETS, Kaiser Permanente and TURKCELL and product demonstrations. Tuesday, October 2, 2012 CON9437: Mobile Access Management 10:15 a.m. – 11:15 a.m., Moscone West 3022 With more than 5 billion mobile devices on the planet and an increasing number of users using their own devices to access corporate data and applications, securely extending identity management to mobile devices has become a hot topic. This session will feature Identity Management evangelists from companies like Intuit, NetApp and Toyota to discuss how to extend your existing identity management infrastructure and policies to securely and seamlessly enable mobile user access. CON9491: Enhancing the End-User Experience with Oracle Identity Governance applications 11:45 a.m. – 12:45 p.m., Moscone West 3008 As organizations seek to encourage more and more user self service, business users are now primary end users for identity management installations.  Join experts from Visa and Oracle as they explore how Oracle Identity Governance solutions deliver complete identity administration and governance solutions with support for emerging requirements like cloud identities and mobile devices. CON9447: Enabling Access for Hundreds of Millions of Users 1:15 p.m. – 2:15 p.m., Moscone West 3008 Dealing with scale problems? Looking to address identity management requirements with a million or so users in mind? Then take note of Cisco's implementation. Join this session to hear first-hand how Cisco tackled identity management and scaled their implementation to bolster security and enforce compliance. 
CON9465: Next Generation Directory – Oracle Unified Directory 5:00 p.m. – 6:00 p.m., Moscone West 3008 Get the 360-degree perspective from a solution provider, implementation services partner and the customer in this session to learn how the latest Oracle Unified Directory solutions can help you build a directory infrastructure that is optimized to support cloud, mobile and social networking and yet deliver on scale and performance. Wednesday, October 3, 2012 CON9494: Sun2Oracle: Identity Management Platform Transformation 11:45 a.m. – 12:45 p.m., Moscone West 3008 Sun customers are actively defining strategies for how they will modernize their identity deployments. Learn how customers like Avea and SuperValu are leveraging their Sun investment, evaluating areas of expansion/improvement and building momentum. CON9631: Entitlement-centric Access to SOA and Cloud Services 11:45 a.m. – 12:45 p.m., Marriott Marquis, Salon 7 How do you enforce that a junior trader can submit 10 trades/day, with a total value of $5M, if market volatility is low? How can you hide sensitive patient information from clerical workers but make it visible to specialists as long as consent has been given or there is an emergency? How do you externalize such entitlements to allow dynamic changes without having to touch the application code? In this session, Uberether and HerbaLife take the stage with Oracle to demonstrate how you can enforce such entitlements on a service not just within your intranet but also right at the perimeter. CON3957 - Delivering Secure Wi-Fi on the Tube as an Olympics Legacy from London 2012 11:45 a.m. – 12:45 p.m., Moscone West 3003 In this session, Virgin Media, the U.K.’s first combined provider of broadband, TV, mobile, and home phone services, shares how it is providing free secure Wi-Fi services to the London Underground, using Oracle Virtual Directory and Oracle Entitlements Server, leveraging back-end legacy systems that were never designed to be externalized. As an Olympics 2012 legacy, the Oracle architecture will form a platform to be consumed by other Virgin Media services such as video on demand. CON9493: Identity Management and the Cloud 1:15 p.m. – 2:15 p.m., Moscone West 3008 Security is the number one barrier to cloud service adoption.  Not so for industry-leading companies like SaskTel, ConAgra Foods and UPMC. This session will explore how these organizations are using Oracle Identity with cloud services and how some are offering identity management as a cloud service. CON9624: Real-Time External Authorization for Middleware, Applications, and Databases 3:30 p.m. – 4:30 p.m., Moscone West 3008 As organizations seek to grant access to broader and more diverse user populations, the importance of centrally defined and applied authorization policies becomes critical, both to identify who has access to what and to improve the end user experience.  This session will explore how customers are using attribute and role-based access to achieve these goals. CON9625: Taking control of WebCenter Security 5:00 p.m. – 6:00 p.m., Moscone West 3008 Many organizations are extending WebCenter in a business-to-business scenario requiring secure identification and authorization of business partners and their users. Leveraging LADWP’s use case, this session will focus on how customers are leveraging, securing and providing access control to Oracle WebCenter portal and mobile solutions. 
Thursday, October 4, 2012 CON9662: Securing Oracle Applications with the Oracle Enterprise Identity Management Platform 2:15 p.m. – 3:15 p.m., Moscone West 3008 Oracle Enterprise Identity Management solutions are designed to secure access and simplify compliance for Oracle Applications.  Whether you are an EBS customer looking to upgrade from Oracle Single Sign-on or a Fusion Applications customer seeking to leverage the Identity instance as an enterprise security platform, this session with Qualcomm and Oracle will help you understand how to get the most out of your investment. And here’s the complete listing of all the Identity Management sessions at Oracle OpenWorld.

    Read the article

  • Heap Dump Root Classes

    - by Adnan Memon
    We have a production system going into an infinite loop of full GCs, and memory drops from 8 gigs to about 1 MB in just 2 minutes. After taking a heap dump, it tells me there is an array of java.lang.Object ([Ljava.lang.Object) with millions of java.lang.String objects holding the same String, taking 99% of the heap. But it doesn't tell me which class is referencing this array so that I can fix it in the code. I took the heap dump using the jmap tool on JDK 6 and used JProfiler, NetBeans, SAP Memory Analyzer and IBM Memory Analyzer, but none of those tell me what is causing this huge array of objects ... like what class references it or contains it. Do I have to take a different dump with a different config in order to get that info? ... Or is there anything else that can help me find the culprit class causing this? It will help a lot.

    Read the article

  • Exception while running Quartz Scheduler program

    - by Sunny Mate
    Hi, I am getting the following exception while running my Quartz Scheduler program. Below is the exception trace: Mar 26, 2010 2:54:24 PM org.quartz.core.QuartzScheduler start INFO: Scheduler DefaultQuartzScheduler_$_NON_CLUSTERED started. Exception in thread "main" java.lang.IllegalArgumentException: Job class must implement the Job interface. at org.quartz.JobDetail.setJobClass(JobDetail.java:291) at org.quartz.JobDetail.(JobDetail.java:138) at com.Quarrtz.RanchSchedule.main(RanchSchedule.java:18) I have included Quartz-1.7.2.jar and Quartz-all-1.7.2.jar in my classpath along with commons-logging 1.1.jar, and I am on JDK 6. This is an example I have copied and pasted from JavaRanch: http://www.javaranch.com/journal/200711/combining_spring_and_quartz.html (the first example on that page). Any help please, thanks in advance. Sunny Mate
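    That IllegalArgumentException is thrown by JobDetail.setJobClass when the class handed to it does not implement org.quartz.Job, so the jars are probably fine and the job class itself is the thing to check. A minimal sketch of what Quartz 1.x expects (class name and message are made up):

        import org.quartz.Job;
        import org.quartz.JobExecutionContext;
        import org.quartz.JobExecutionException;

        // The class passed to new JobDetail(...) must implement org.quartz.Job
        public class RanchJob implements Job {
            public void execute(JobExecutionContext context) throws JobExecutionException {
                System.out.println("RanchJob fired at " + context.getFireTime());
            }
        }

    It would then be registered with something like new JobDetail("ranchJob", Scheduler.DEFAULT_GROUP, RanchJob.class) before being handed to the scheduler.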

    Read the article

  • What causes a JRE 6 JVM code cache leak?

    - by Arturo Knight
    Since switching to JRE 6, my server's code cache usage (non-heap) keeps growing indefinitely. My application creates a lot of classes at runtime, BUT these classes are successfully unloaded during the GC process. I can see these classes getting unloaded in the GC logs, and the permGen usage also stays constant. I specifically make sure in my code that these classes are orphaned once I am finished with them, so they correctly get garbage collected from permGen. The code cache, however, keeps growing. I only became aware of the code cache after switching to JRE 6. So I guess my questions are: Does GC include the code cache? What, specifically, could cause a code cache memory leak? Is there a bug in JDK 6 in this area?
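    The code cache is a non-heap area holding JIT-compiled native code rather than classes, so unloading the generated classes from permGen does not by itself shrink it. One way to watch it from inside the application is through the standard memory pool MXBeans; a small sketch:

        import java.lang.management.ManagementFactory;
        import java.lang.management.MemoryPoolMXBean;

        public class CodeCacheWatcher {
            public static void main(String[] args) {
                for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                    // HotSpot exposes the JIT area as a non-heap pool named "Code Cache"
                    if (pool.getName().contains("Code Cache")) {
                        System.out.println(pool.getName() + ": " + pool.getUsage());
                    }
                }
            }
        }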

    Read the article

  • Oracle OpenWorld 2012 run-up: mark your calendars

    - by Eric Bezille
    Now less than a month before Oracle's major event, held as every year in San Francisco at the end of September and beginning of October, speculation is running high about the announcements that will be unveiled there... Without lifting the veil, I invite you to look at the topics of the keynotes that will be given by Larry Ellison, Mark Hurd, Thomas Kurian (head of software development) and John Fowler (head of systems development) to get a foretaste. Oracle strategy and roadmaps Of course, beyond the plenary sessions that will give you a precise view of the strategy, and for those who will be on site, I urge you not to miss the deep-dive sessions taking place during the week. Here are a few selected highlights: "Accelerate your Business with the Oracle Hardware Advantage" with John Fowler, Monday, October 1, 3:15pm-4:15pm "Why Oracle Softwares Runs Best on Oracle Hardware", with Bradley Carlile, head of benchmarks, Monday, October 1, 12:15pm-13:15pm "Engineered Systems - from Vision to Game-changing Results", with Robert Shimp, Monday, October 1, 1:45pm-2:45pm "Database and Application Consolidation on SPARC Supercluster", with Hugo Rivero, a manager in the hardware and software integration teams, Monday, October 1, 4:45pm-5:45pm "Oracle’s SPARC Server Strategy Update", with Masood Heydari, head of SPARC server development, Tuesday, October 2, 10:15am - 11:15am "Oracle Solaris 11 Strategy, Engineering Insights, and Roadmap", with Markus Flier, head of Solaris development, Wednesday, October 3, 10:15am - 11:15am "Oracle Virtualization Strategy and Roadmap", with Wim Coekaerts, head of Oracle VM and Oracle Linux development, Monday, October 1, 12:15pm-1:15pm "Big Data: The Big Story", with Jean-Pierre Dijcks, head of Big Data product development, Monday, October 1, 3:15pm-4:15pm "Scaling with the Cloud: Strategies for Storage in Cloud Deployments", with Christine Rogers, Principal Product Manager, and Chris Wood, Senior Product Specialist, Storage, Monday, October 1, 10:45am-11:45am Customer experiences and testimonials While Oracle OpenWorld is a chance to talk directly with Oracle's development teams, it is also a chance to exchange with customers and experts who have implemented our technologies and to benefit from their experience, for example: "Oracle Optimized Solution for Siebel CRM at ACCOR", with testimonials from Eric Wyttynck, IT Director Multichannel & CRM, and Pascal Massenet, VP Loyalty & CRM systems, on the business as well as the project and IT benefits, Wednesday, October 3, 1:15pm-2:15pm "Tips from AT&T: Oracle E-Business Suite, Oracle Database, and SPARC Enterprise", with feedback from Oracle experts, Tuesday, October 2, 11:45am-12:45pm "Creating a Maximum Availability Architecture with SPARC SuperCluster", with the testimony of Carte Wright, Database Engineer at CKI, Wednesday, October 3, 11:45am-12:45pm "Multitenancy: Everybody Talks It, Oracle Walks It with Pillar Axiom Storage", with the testimony of Stephen Schleiger, Manager Systems Engineering at Navis, Monday, October 1, 1:45pm-2:45pm "Oracle Exadata for Database Consolidation: Best Practices", with feedback from the Oracle experts who took part in an implementation for a major banking customer, Monday, October 1, 4:45pm-5:45pm "Oracle Exadata Customer 
Panel: Packaged Applications with Oracle Exadata", moderated by Tim Shetler, VP Product Management, Tuesday, October 2, 1:15pm-2:15pm "Big Data: Improving Nearline Data Throughput with the StorageTek SL8500 Modular Library System", with the testimony of CSC's CTO, Alan Powers, Thursday, October 4, 12:45pm-1:45pm "Building an IaaS Platform with SPARC, Oracle Solaris 11, and Oracle VM Server for SPARC", with the testimony of Syed Qadri, Lead DBA, and Michael Arnold, System Architect, of US Cellular, Tuesday, October 2, 10:15am-11:15am "Transform Data Center TCO with Oracle Optimized Servers: A Customer Panel", with testimonials from AT&T and Liberty Global, among others, Tuesday, October 2, 11:45am-12:45pm "Data Warehouse and Big Data Customers’ View of the Future", with The Nielsen Company US, Turkcell, GE Retail Finance, Allianz Managed Operations and Services SE, Monday, October 1, 4:45pm-5:45pm "Extreme Storage Scale and Efficiency: Lessons from a 100,000-Person Organization", the story of Oracle's internal IT on the transformation and migration of our entire storage infrastructure, Tuesday, October 2, 1:15pm-2:15pm Exchanges with the user groups and the Oracle development teams If you plan to arrive early enough, you can also meet the user groups starting Sunday, or the Oracle development teams every evening, on topics such as: "To Exalogic or Not to Exalogic: An Architectural Journey", with Todd Sheetz - Manager of DBA and Enterprise Architecture, Veolia Environmental Services, Sunday, September 30, 2:30pm-3:30pm "Oracle Exalytics and Oracle TimesTen for Exalytics Best Practices", with Mark Rittman, of Rittman Mead Consulting Ltd, Sunday, September 30, 10:30am-11:30am "Introduction of Oracle Exadata at Telenet: Bringing BI to Warp Speed", with Rudy Verlinden & Eric Bartholomeus - IT infrastructure managers at Telenet, Sunday, September 30, 1:15pm-2:00pm "The Perfect Marriage: Sun ZFS Storage Appliance with Oracle Exadata", with Melanie Polston, Director, Data Management, at Novation, and Charles Kim, Managing Director of Viscosity, Sunday, September 30, 9:00am-10am "Oracle’s Big Data Solutions: NoSQL, Connectors, R, and Appliance Technologies", with Jean-Pierre Dijcks and the Oracle development teams, Monday, October 1, 6:15pm-7:00pm Test and evaluate the solutions And finally, you can even try out the technologies at the Oracle DemoGrounds (1133 Moscone South for the Oracle systems, OS and virtualization area) and in the hands-on labs, such as: "Deploying an IaaS Environment with Oracle VM", Tuesday, October 2, 10:15am-11:15am "Virtualize and Deploy Oracle Applications in Minutes with Oracle VM: Hands-on Lab", Tuesday, October 2, 11:45am-12:45pm (it is strongly recommended to have taken the previous hands-on lab before doing this one). 
"x86 Enterprise Cloud Infrastructure with Oracle VM 3.x and Sun ZFS Storage Appliance", Wednesday, October 3, 5:00pm-6:00pm "StorageTek Tape Analytics: Managing Tape Has Never Been So Simple", Wednesday, October 3, 1:15pm-2:15pm "Oracle’s Pillar Axiom 600 Storage System: Power and Ease", Monday, October 1, 12:15pm-1:15pm "Enterprise Cloud Infrastructure for SPARC with Oracle Enterprise Manager Ops Center 12c", Monday, October 1, 1:45pm-2:45pm "Managing Storage in the Cloud", Tuesday, October 2, 5:00pm-6:00pm "Learn How to Write MapReduce on Oracle’s Big Data Platform", Monday, October 1, 12:15pm-1:15pm "Oracle Big Data Analytics and R", Tuesday, October 2, 1:15pm-2:15pm "Reduce Risk with Oracle Solaris Access Control to Restrain Users and Isolate Applications", Monday, October 1, 10:45am-11:45am "Managing Your Data with Built-In Oracle Solaris ZFS Data Services in Release 11", Monday, October 1, 4:45pm-5:45pm "Virtualizing Your Oracle Solaris 11 Environment", Tuesday, October 2, 1:15pm-2:15pm "Large-Scale Installation and Deployment of Oracle Solaris 11", Wednesday, October 3, 3:30pm-4:30pm In conclusion, a very rich week ahead, which will let you cover all the topics at the heart of your concerns, from strategy to implementation... It is a week that needs preparing, to tailor your agenda to your needs, across the more than 2,000 sessions of which I have given you only an excerpt, and all of which you can find online.

    Read the article

  • SublimeJava won't react at all on Mac OS X 10.7

    - by David Merz
    Today I tried to install and run the SublimeJava plugin for Sublime Text 2. Here is basically what I've done: cloned the git repository https://github.com/quarnster/SublimeJava.git into ~/Library/Application Support/Sublime Text 2/Packages and created a project file to test the plugin. { "folders": [ { // The class files are in the same directory "path": "~/src/path_to_project/" } ], "settings": [ { "sublimejava_classpath": [ "~/src/path_to_project/", "/System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Libraries/" ], "sublimejava_enabled":true } ] } Now, whenever I type something that should trigger the code completion, nothing happens. I hope you guys can sort me out here, many thanks in advance!

    Read the article

  • Container Options in AWS Elastic Beanstalk

    - by Sangram Anand
    We have deployed a Java web application in Elastic Beanstalk with a minimum instance count of 1 and a maximum instance count of 2 for autoscaling. The custom AMI we are using is c1.medium with Sun JDK 6. The environment status changed to yellow and then red. After checking the log file from the snapshot logs we found an exception - Caused by: java.lang.OutOfMemoryError: Java heap space. We assume this could be one of the possible reasons for the environment failure. The settings that we have configured in the Environment Container options are: Initial JVM Heap Size (MB) - 256M Maximum JVM Heap Size (MB) - 512m The maximum heap size the Java virtual machine will ever consume, specified on the JVM launch command line using -Xmx. Maximum JVM Permanent Generation Size (MB) - 512m Should I increase the heap size beyond 512m, or is it fine?
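    One quick sanity check is whether the configured -Xmx actually reached the JVM on the instance; a c1.medium has roughly 1.7 GB of RAM, so 512 MB of heap plus 512 MB of permgen plus the container itself is already fairly tight. A hedged snippet that could be logged from any servlet or startup listener to confirm what the VM really got:

        public class HeapReport {
            // Reports the limits the JVM actually started with, to confirm the
            // Elastic Beanstalk container options were applied as expected.
            public static String report() {
                Runtime rt = Runtime.getRuntime();
                long mb = 1024L * 1024L;
                return "max=" + (rt.maxMemory() / mb) + "MB"
                     + ", total=" + (rt.totalMemory() / mb) + "MB"
                     + ", free=" + (rt.freeMemory() / mb) + "MB";
            }

            public static void main(String[] args) {
                System.out.println(report());
            }
        }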

    Read the article

  • Integrating Coherence & Java EE 6 Applications using ActiveCache

    - by Ricardo Ferreira
    OK, so you are a developer and are starting a new Java EE 6 application using the most wonderful features of the Java EE platform like Enterprise JavaBeans, JavaServer Faces, CDI, JPA and other cool technologies. And your architecture needs to hold pieces of data in distributed caches to improve the application's performance, scalability and reliability? If this is the scenario you are currently facing, maybe you should look closely at the solutions provided by Oracle WebLogic Server. Oracle has integrated WebLogic Server and its champion data caching technology, Oracle Coherence. This seamless integration between these two products provides a comprehensive environment to develop applications without the complexity of extra Java code to manage the cache as a dependency, since Oracle provides a DI ("Dependency Injection") mechanism for Coherence, the same DI mechanism available in standard Java EE applications. This feature is called ActiveCache. In this article, I will show you how to configure ActiveCache in WebLogic and in your Java EE application. Configuring WebLogic to manage Coherence Before you start changing your application to use Coherence, you need to configure your Coherence distributed cache. The good news is, you can manage all this stuff without writing a single line of XML or even Java. This configuration can be done entirely in the WebLogic administration console. The first thing to do is the setup of a Coherence cluster. A Coherence cluster is a set of Coherence JVMs configured to form one single view of the cache. This means that you can insert or remove members of the cluster without the client application (the application that generates or consumes data from the cache) knowing about the changes. This concept allows your solution to scale out without changing the application server JVMs. You can grow your application only in the data grid layer. To start the configuration, you need to configure a machine that points to the server on which you want to execute the Coherence JVMs. WebLogic Server allows you to do this very easily using the Administration Console. In this example, I will call the machine "coherence-server". Remember that in order for the machine concept to work, you need to ensure that the NodeManager is running on the target server that the machine points to. The NodeManager executable can be found in <WLS_HOME>/server/bin/startNodeManager.sh. The next thing to do is to configure a Coherence cluster. In the WebLogic administration console, go to Environment > Coherence Clusters and click "New". Call this Coherence cluster "my-coherence-cluster". Click next. Specify a valid cluster address and port. The Coherence members will communicate with each other through this address and port. Our Coherence cluster is now configured. Now it is time to configure the Coherence members and add them to this cluster. In the WebLogic administration console, go to Environment > Coherence Servers and click "New". In the field "Name", enter "coh-server-1". In the field "Machine", associate this Coherence server with the machine "coherence-server". In the field "Cluster", associate this Coherence server with the cluster named "my-coherence-cluster". Click "Finish". Start the Coherence server using the "Control" tab of the WebLogic administration console. This will instruct WebLogic to start a new Coherence JVM on the target machine that should join the pre-defined Coherence cluster. 
Configuring your Java EE Application to Access Coherence Now let's get to the fun part of the configuration. The first thing to do is to inform your Java EE application which Coherence cluster to join. Oracle has updated the WebLogic Server deployment descriptors, so you will not have to change your code or the containers' deployment descriptors like application.xml, ejb-jar.xml or web.xml. In this example, I will show you how to enable DI ("Dependency Injection") of a Coherence cache in a Servlet 3.0 component. In the WEB-INF/weblogic.xml deployment descriptor, put the following metadata information: <?xml version="1.0" encoding="UTF-8"?> <wls:weblogic-web-app xmlns:wls="http://xmlns.oracle.com/weblogic/weblogic-web-app" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd http://xmlns.oracle.com/weblogic/weblogic-web-app http://xmlns.oracle.com/weblogic/weblogic-web-app/1.4/weblogic-web-app.xsd"> <wls:context-root>myWebApp</wls:context-root> <wls:coherence-cluster-ref> <wls:coherence-cluster-name>my-coherence-cluster</wls:coherence-cluster-name> </wls:coherence-cluster-ref> </wls:weblogic-web-app> As you can see, using the "coherence-cluster-name" tag, we are informing our Java EE application that it should join "my-coherence-cluster" when it loads in the web container. Without this information, the application will not be able to access the predefined Coherence cluster. It will form its own Coherence cluster without any members. So never forget to put this information in. Now put the coherence.jar and active-cache-1.0.jar dependencies in your WEB-INF/lib application classpath. You need to deploy these dependencies so ActiveCache can automatically take care of the Coherence cluster join phase. These dependencies can be found in the following locations: - <WLS_HOME>/common/deployable-libraries/active-cache-1.0.jar - <COHERENCE_HOME>/lib/coherence.jar Finally, you need to write the code that accesses the Coherence cache in your servlet. In the following example, we have a Servlet 3.0 component that accesses a Coherence cache named "transactions" and prints to the browser output the content (the ammount property) of one specific transaction. package com.oracle.coherence.demo.activecache; import java.io.IOException; import javax.annotation.Resource; import javax.servlet.ServletException; import javax.servlet.annotation.WebServlet; import javax.servlet.http.HttpServlet; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; import com.tangosol.net.NamedCache; @WebServlet("/demo/specificTransaction") public class TransactionServletExample extends HttpServlet { @Resource(mappedName = "transactions") NamedCache transactions; protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { int transId = Integer.parseInt(request.getParameter("transId")); Transaction transaction = (Transaction) transactions.get(transId); response.getWriter().println("<center>" + transaction.getAmmount() + "</center>"); } } That's it! No more configuration is necessary and you are all set to start putting data into and getting data from Coherence. As you can see in the example code, the Coherence cache is treated as a normal dependency in the Java EE container. The magic happens behind the scenes when ActiveCache allows your application to join the defined Coherence cluster. 
The most interesting thing about this approach is that no matter which type of Coherence cache you are using (Distributed, Partitioned, Replicated, WAN-Remote), for the client application it is just a simple attribute member of the com.tangosol.net.NamedCache type. And it's all managed by the Java EE container as a dependency. This means that if you inject the same dependency (the Coherence cache named "transactions") in another Java EE component (a JSF managed bean, a stateless EJB), the cache will be the same. Cool, isn't it? Thanks to the CDI technology, we can extend the same support to components that are not Java EE standards, like simple POJOs. This means that you are not forced to use only Servlets, EJBs or JSF in order to inject Coherence caches. You can do the same for regular POJOs created by you and managed by lightweight containers like Spring or Seam.
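    As an illustration of that last point, here is a hedged sketch (bean name invented, reusing the Transaction type from the servlet example) of the same "transactions" cache injected into a stateless session bean; both components end up talking to the very same Coherence cache:

        import javax.annotation.Resource;
        import javax.ejb.Stateless;
        import com.tangosol.net.NamedCache;

        @Stateless
        public class TransactionStore {
            // Same mappedName as in the servlet above, so it resolves to the same cache
            @Resource(mappedName = "transactions")
            private NamedCache transactions;

            public void save(int transId, Transaction transaction) {
                transactions.put(transId, transaction);
            }

            public Transaction find(int transId) {
                return (Transaction) transactions.get(transId);
            }
        }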

    Read the article

  • Joda-Time: DateTime, DateMidnight and LocalDate usage

    - by fraido
    The Joda-Time library includes different datetime classes: DateTime - immutable replacement for the JDK Calendar; DateMidnight - immutable class representing a date where the time is forced to midnight; LocalDateTime - immutable class representing a local date and time (no time zone). I'm wondering how you are using these classes in your layered applications. I see advantages in having almost all the interfaces use LocalDateTime (at the service layer at least) so that my application doesn't have to manage time zones and can safely assume times are always in UTC. My app could then use DateTime to manage time zones at the very beginning of the execution flow. I'm also wondering in which scenarios DateMidnight can be useful.
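    A short sketch of the pattern described above: keep service-layer values as zone-free LocalDateTime and only attach a zone at the edges, while DateMidnight is handy when you need "whole day" semantics such as a billing date (the zone names here are just examples):

        import org.joda.time.DateMidnight;
        import org.joda.time.DateTime;
        import org.joda.time.DateTimeZone;
        import org.joda.time.LocalDateTime;

        public class JodaUsageSketch {
            public static void main(String[] args) {
                // At the boundary: capture "now" in UTC, then strip the zone for the service layer
                DateTime utcNow = new DateTime(DateTimeZone.UTC);
                LocalDateTime serviceValue = utcNow.toLocalDateTime();

                // Back at the edge: re-attach whatever zone the presentation layer needs
                DateTime forUser = serviceValue.toDateTime(DateTimeZone.forID("Europe/Rome"));

                // DateMidnight: a date with the time pinned to 00:00 in a given zone
                DateMidnight startOfToday = new DateMidnight(DateTimeZone.UTC);

                System.out.println(serviceValue + " / " + forUser + " / " + startOfToday);
            }
        }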

    Read the article

  • BlackBerry Eclipse plugin and emulator

    - by gmcalab
    So I installed the following installation packages to develop BlackBerry apps using the included emulators. I first installed them on a MacBook Pro, virtualizing Windows 7 x86 with VMware. Everything worked fine; I created a quick HelloWorld app and it compiled and fully ran in the emulator. I did no other configuration. So I went to install this on my desktop PC with Windows 7 x64. I installed the exact same items. When I choose to run with the BlackBerry emulator, nothing happens. Any ideas? Here's the file list: BlackBerry_JDE_PluginFull_1.0.0.67 (this includes Eclipse) jdk-6u18-windows-i586

    Read the article

  • setProperty must be overridden by all subclasses of SOAPMessage

    - by Pablo
    I'm trying to deploy some web services in a WAR application on JBoss 5.1.0. I have created the source files from an existing WSDL using the JAX-WS tool wsgen. This created the service files and @XmlType annotated classes that act as request and response wrappers. These classes worked well on JBoss 4.2.3, but when moving to JBoss 5.1.0, I get this exception: java.lang.UnsupportedOperationException: setProperty must be overridden by all subclasses of SOAPMessage My configuration: Windows XP SP3 (but getting the same on Vista, as well as on Linux) Sun JDK 1.6.0_17 JBoss 5.1.0 GA for JDK 6 Thanks in advance!

    Read the article

  • Java2D OpenGL Hardware Acceleration Doesn't Work

    - by Aaron
    It doesn't work with OpenGL with even the simplest of programs. Here is what I am doing.. java -Dsun.java2d.opengl=True -jar Java2Demo.jar (Java2Demo.jar is usually included with the JDK..) The text output is: OpenGL pipeline enabled for default config on screen 0 When I don't pass in the above VM argument things work fine (but slowly). When I do pass in the above argument nothing shows up... If I move the window around it captures whatever image it was on top of and jumbles it into nonsense. I'm running Windows XP Pro SP3 (Microsoft Windows XP [Version 5.1.2600]) (under Parallels on OS X 10.5.8) I used "Geeks3D GPU Caps Viewer" to tell me I have Open GL version: 2.0 NVIDIA-1.5.48 I have tried this with two version of the JVM. First: java version "1.6.0_13" Java(TM) SE Runtime Environment (build 1.6.0_13-b03) Java HotSpot(TM) Client VM (build 11.3-b02, mixed mode) and second: java version "1.6.0_20" Java(TM) SE Runtime Environment (build 1.6.0_20-b02) Java HotSpot(TM) Client VM (build 16.3-b01, mixed mode, sharing)

    Read the article

  • Packaging Swing apps with integrated JavaFX content

    - by igor
    JavaFX provides a lot of interesting capabilities for developing rich client applications in Java, but what if you are working on an existing Swing application and you want to take advantage of these new features? Maybe you want to use one or two controls like the LineChart or a MediaView. Maybe you want to embed a large Scene Graph as an initial step in porting your application to FX. A hybrid Swing/FX application might just be the answer. Developing a hybrid Swing + JavaFX application is not terribly difficult, but until recently the deployment of hybrid applications has not been as simple as for a "pure" JavaFX application. The existing tools focused on packaging FX applications, or Swing applications - they did not account for hybrid applications. But with JavaFX 2.2 the tools include support for this hybrid application use case. Solution In JavaFX 2.2 we extended the packaging ant tasks to greatly simplify deploying hybrid applications. You now use the same deployment approach as you would for pure JavaFX applications. Just bundle your main application jar with the fx:jar ant task and then generate html/jnlp files using fx:deploy. The only difference is setting the toolkit attribute for the fx:application tag as shown below: <fx:application id="swingFXApp" mainClass="${main.class}" toolkit="swing"/> The value of ${main.class} in the example above is your application class which has a main method. It does not need to extend the JavaFX Application class. The resulting package provides support for the same set of execution modes as a package for a JavaFX application, although the packages which are created are not identical to the packages created for a pure FX application. You will see two JNLP files generated in the case of a hybrid application - one for use from a Swing applet and another for the webstart launch. Note that these improvements do not alter the set of features available to Swing applications. The packaging tools just make it easier to use the advanced features of JavaFX in your Swing application. The same limits still apply, for example a Swing application cannot use JavaFX Preloaders and code changes are necessary to support HTML splash screens. Why should I use the JavaFX ant tasks for packaging my Swing application? While using the FX packaging tool for a Swing application may seem like a mismatch at face value, there are some really good reasons to use this approach. The primary justification for our packaging tools is to simplify the creation of your application artifacts, and to reduce manual errors. Plus, no one should have to write JNLP by hand. Some specific benefits include: Your application jar will include a launcher program. This improves your standalone launch by: checking for the JavaFX runtime guiding the user through any necessary installations setting the system proxy for Java The ant tasks will generate JNLP and HTML files for your Swing app: avoids learning unnecessary details about JNLP, and eliminates the error-prone hand editing of JNLP files simplifies using advanced features like embedding JNLP and signing jars as BLOBs to improve launch performance. You can also embed the signing certificate details to improve the user's experience allows the use of web page templates to inject the generated code directly into your actual web page instead of being forced to copy/paste the generated code snippets. What about native packaging? Absolutely! The very same ant task can generate a native bundle for a Swing application with JavaFX content. 
Try running one of these sample native bundles for the "SwingInterop" FX example: exe and dmg. I also used another feature on these examples: a click-through license agreement for .exe installers and OS X DMG drag installers. Small Caveat This packaging procedure is optimized around using the JavaFX packaging tools for your entire Swing application. If you are trying to embed JavaFX content into an existing project (with an existing build/packaging process) then you may need to experiment in order to find the best way to integrate the JavaFX packaging steps into your existing build procedure. As long as you can use ant in your build process this should be a workable approach. In some cases the solution could be less than ideal. For example, you need to use fx:jar to package your main jar file in order to produce a double-clickable jar or a native bundle. The jar will be created from scratch, but you may already be creating the main jar file with a custom manifest. This may lead to some redundant steps in your build process. Hopefully the benefits will outweigh the problems. This is an area of ongoing development for the team, and we will continue to refine and improve both the tools and the process. Please share your experiences and suggestions with us. You can comment here on the blog or file issues to JIRA. Sample code Here is the full ant code used to package SwingInterop. You can grab the latest JavaFX samples and try it yourself: <target name="-post-jar"> <taskdef resource="com/sun/javafx/tools/ant/antlib.xml" uri="javafx:com.sun.javafx.tools.ant" classpath="${javafx.tools.ant.jar}"/> <!-- Mark application as Swing-based --> <fx:application id="swingFXApp" mainClass="${main.class}" toolkit="swing"/> <!-- Create doubleclickable jar file with embedded launcher --> <fx:jar destfile="${dist.jar}"> <fileset dir="${build.classes.dir}"/> <fx:application refid="swingFXApp" name="SwingInterop"/> <manifest> <attribute name="Implementation-Vendor" value="${application.vendor}"/> <attribute name="Implementation-Title" value="${application.title}"/> <attribute name="Implementation-Version" value="1.0"/> </manifest> </fx:jar> <!-- sign application jar. Use new self signed certificate --> <delete file="${build.dir}/test.keystore"/> <genkey alias="TestAlias" storepass="xyz123" keystore="${build.dir}/test.keystore" dname="CN=Samples, OU=JavaFX Dev, O=Oracle, C=US"/> <fx:signjar keystore="${build.dir}/test.keystore" alias="TestAlias" storepass="xyz123"> <fileset file="${dist.jar}"/> </fx:signjar> <!-- generate JNLPs, HTML and native bundles --> <fx:deploy width="960" height="720" includeDT="true" nativeBundles="all" outdir="${basedir}/${dist.dir}" embedJNLP="true" outfile="${application.title}"> <fx:application refId="swingFXApp"/> <fx:resources> <fx:fileset dir="${basedir}/${dist.dir}" includes="SwingInterop.jar"/> </fx:resources> <fx:permissions/> <info title="Sample app: ${application.title}" vendor="${application.vendor}"/> </fx:deploy> </target>

    Read the article

  • Running Solaris 11 as a control domain on a T2000

    - by jsavit
    There is increased adoption of Oracle Solaris 11, and many customers are deploying it on systems that previously ran Solaris 10. That includes older T1-processor based systems like T1000 and T2000. Even though they are old (from 2005) and don't have the performance of current SPARC servers, they are still functional, stable servers that customers continue to operate. One reason to install Solaris 11 on them is that older machines are attractive for testing OS upgrades before updating current, production systems. Normally this does not present a challenge, because Solaris 11 runs on any T-series or M-series SPARC server. One scenario adds a complication: running Solaris 11 in a control domain on a T1000 or T2000 hosting logical domains. Solaris 11 pre-installed Oracle VM Server for SPARC incompatible with T1 Unlike Solaris 10, Solaris 11 comes with Oracle VM Server for SPARC preinstalled. The ldomsmanager package contains the logical domains manager for Oracle VM Server for SPARC 2.2, which requires a SPARC T2, T2+, T3, or T4 server. It does not work with T1-processor systems, which are only supported by LDoms Manager 1.2 and earlier. The following screenshot shows what happens (bold font) if you try to use Oracle VM Server for SPARC 2.x commands in a Solaris 11 control domain. The commands were issued in a control domain on a T2000 that previously ran Solaris 10. We also display the version of the logical domains manager installed in Solaris 11: root@t2000 psrinfo -vp The physical processor has 4 virtual processors (0-3) UltraSPARC-T1 (chipid 0, clock 1200 MHz) # prtconf|grep T SUNW,Sun-Fire-T200 # ldm -V Failed to connect to logical domain manager: Connection refused # pkg info ldomsmanager Name: system/ldoms/ldomsmanager Summary: Logical Domains Manager Description: LDoms Manager - Virtualization for SPARC T-Series Category: System/Virtualization State: Installed Publisher: solaris Version: 2.2.0.0 Build Release: 5.11 Branch: 0.175.0.8.0.3.0 Packaging Date: May 25, 2012 10:20:48 PM Size: 2.86 MB FMRI: pkg://solaris/system/ldoms/[email protected],5.11-0.175.0.8.0.3.0:20120525T222048Z The 2.2 version of the logical domains manager will have to be removed, and 1.2 installed, in order to use this as a control domain. Preparing to change - create a new boot environment Before doing anything else, let's create a new boot environment: # beadm list BE Active Mountpoint Space Policy Created -- ------ ---------- ----- ------ ------- solaris NR / 2.14G static 2012-09-25 10:32 # beadm create solaris-1 # beadm activate solaris-1 # beadm list BE Active Mountpoint Space Policy Created -- ------ ---------- ----- ------ ------- solaris N / 4.82M static 2012-09-25 10:32 solaris-1 R - 2.14G static 2012-09-29 11:40 # init 0 Normally an init 6 to reboot would have been sufficient, but in the next step I reset the system anyway in order to put the system in factory default mode for a "clean" domain configuration. Preparing to change - reset to factory default There was a leftover domain configuration on the T2000, so I reset it to the factory install state. Since the ldm command isn't working yet, it can't be done from the control domain, so I did it by logging on to the service processor: $ ssh -X admin@t2000-sc Copyright (c) 2010, Oracle and/or its affiliates. All rights reserved. 
Oracle Advanced Lights Out Manager CMT v1.7.9 Please login: admin Please Enter password: ******** sc> showhost Sun-Fire-T2000 System Firmware 6.7.10 2010/07/14 16:35 Host flash versions: OBP 4.30.4.b 2010/07/09 13:48 Hypervisor 1.7.3.c 2010/07/09 15:14 POST 4.30.4.b 2010/07/09 14:24 sc> bootmode config="factory-default" sc> poweroff Are you sure you want to power off the system [y/n]? y SC Alert: SC Request to Power Off Host. SC Alert: Host system has shut down. sc> poweron SC Alert: Host System has Reset At this point I rebooted into the new Solaris 11 boot environment, and Solaris commands showed it was running on the factory default configuration of a single domain owning all 32 CPUs and 32GB of RAM (that's what it looked like in 2005.) # psrinfo -vp The physical processor has 8 cores and 32 virtual processors (0-31) The core has 4 virtual processors (0-3) The core has 4 virtual processors (4-7) The core has 4 virtual processors (8-11) The core has 4 virtual processors (12-15) The core has 4 virtual processors (16-19) The core has 4 virtual processors (20-23) The core has 4 virtual processors (24-27) The core has 4 virtual processors (28-31) UltraSPARC-T1 (chipid 0, clock 1200 MHz) # prtconf|grep Mem Memory size: 32640 Megabytes Note that the older processor has 4 virtual CPUs per core, while current processors have 8 per core. Remove ldomsmanager 2.2 and install the 1.2 version The Solaris 11 pkg command is now used to remove the 2.2 version that shipped with Solaris 11: # pkg uninstall ldomsmanager Packages to remove: 1 Create boot environment: No Create backup boot environment: No Services to change: 2 PHASE ACTIONS Removal Phase 130/130 PHASE ITEMS Package State Update Phase 1/1 Package Cache Update Phase 1/1 Image State Update Phase 2/2 Finally, LDoms 1.2 installed via its install script, the same way it was done years ago: # unzip LDoms-1_2-Integration-10.zip # cd LDoms-1_2-Integration-10/Install/ # ./install-ldm Welcome to the LDoms installer. You are about to install the Logical Domains Manager package that will enable you to create, destroy and control other domains on your system. Given the capabilities of the LDoms domain manager, you can now change the security configuration of this Solaris instance using the Solaris Security Toolkit. ... ... normal install messages omitted ... The Solaris Security Toolkit applies to Solaris 10, and cannot be used in Solaris 11 (in which several things hardened by the Toolkit are already hardened by default), so answer b in the choice below: You are about to install the Logical Domains Manager package that will enable you to create, destroy and control other domains on your system. Given the capabilities of the LDoms domain manager, you can now change the security configuration of this Solaris instance using the Solaris Security Toolkit. Select a security profile from this list: a) Hardened Solaris configuration for LDoms (recommended) b) Standard Solaris configuration c) Your custom-defined Solaris security configuration profile Enter a, b, or c [a]: b ... other install messages omitted for brevity... After install I ensure that the necessary services are enabled, and verify the version of the installed LDoms Manager: # svcs ldmd STATE STIME FMRI online 22:00:36 svc:/ldoms/ldmd:default # svcs vntsd STATE STIME FMRI disabled Aug_19 svc:/ldoms/vntsd:default # ldm -V Logical Domain Manager (v 1.2-debug) Hypervisor control protocol v 1.3 Using Hypervisor MD v 1.1 System PROM: Hypervisor v. 1.7.3. 
@(#)Hypervisor 1.7.3.c 2010/07/09 15:14\015 OpenBoot v. 4.30.4. @(#)OBP 4.30.4.b 2010/07/09 13:48 Set up control domain and domain services At this point we have a functioning LDoms 1.2 environment that can be configured in the usual fashion. One difference is that LDoms 1.2 went into 'delayed configuration' mode (as expected) during initial configuration, before rebooting the control domain. Another minor difference with a Solaris 11 control domain is that you define virtual switches using the 'vanity name' of the network interface, rather than the hardware driver name as in Solaris 10. # ldm list ------------------------------------------------------------------------------ Notice: the LDom Manager is running in configuration mode. Configuration and resource information is displayed for the configuration under construction; not the current active configuration. The configuration being constructed will only take effect after it is downloaded to the system controller and the host is reset. ------------------------------------------------------------------------------ NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME primary active -n-c-- SP 32 32640M 3.2% 4d 2h 50m # ldm add-vdiskserver primary-vds0 primary # ldm add-vconscon port-range=5000-5100 primary-vcc0 primary # ldm add-vswitch net-dev=net0 primary-vsw0 primary # ldm set-mau 2 primary # ldm set-vcpu 8 primary # ldm set-memory 4g primary # ldm add-config initial # ldm list-spconfig factory-default initial [current] That's it, really. After reboot, we are ready to install guest domains. Summary - new wine in old bottles This example shows that (new) Solaris 11 can be installed on (old) T2000 servers and used as a control domain. The main activity is to remove the preinstalled Oracle VM Server for SPARC 2.2 and install Logical Domains 1.2 - the last version of LDoms to support T1-processor systems. I tested Solaris 10 and Solaris 11 guest domains running on this server and they worked without any surprises. This is a viable way to get further into Solaris 11 adoption, even on older T-series equipment.

    Read the article

  • Techniques for modeling a dynamic dataflow with Java concurrency API

    - by Maian
    Is there an elegant way to model a dynamic dataflow in Java? By dataflow, I mean there are various types of tasks, and these tasks can be "connected" arbitrarily, such that when a task finishes, successor tasks are executed in parallel using the finished task's output as input, or when multiple tasks finish, their output is aggregated in a successor task (see flow-based programming). By dynamic, I mean that the type and number of successor tasks when a task finishes depend on the output of that finished task, so for example, task A may spawn task B if it has a certain output, but may spawn task C if it has a different output. Another way of putting it is that each task (or set of tasks) is responsible for determining what the next tasks are. Sample dataflow for rendering a webpage: I have as task types: file downloader, HTML/CSS renderer, HTML parser/DOM builder, image renderer, JavaScript parser, JavaScript interpreter. File downloader task for HTML file HTML parser/DOM builder task File downloader task for each embedded file/link If image, image renderer If external JavaScript, JavaScript parser JavaScript interpreter Otherwise, just store in some var/field in HTML parser task JavaScript parser for each embedded script JavaScript interpreter Wait for above tasks to finish, then HTML/CSS renderer (obviously not optimal or perfectly correct, but this is simple) I'm not saying the solution needs to be some comprehensive framework (in fact, the closer to the JDK API, the better), and I absolutely don't want something as heavyweight as, say, Spring Web Flow or some declarative markup or other DSL. To be more specific, I'm trying to think of a good way to model this in Java with Callables, Executors, ExecutorCompletionServices, and perhaps various synchronizer classes (like Semaphore or CountDownLatch). There are a couple of use cases and requirements: Don't make any assumptions on what executor(s) the tasks will run on. In fact, to simplify, just assume there's only one executor. It can be a fixed thread pool executor, so a naive implementation can result in deadlocks (e.g. imagine a task that submits another task and then blocks until that subtask is finished, and now imagine several of these tasks using up all the threads). To simplify, assume that the data is not streamed between tasks (task output -> succeeding task input) - the finishing task and succeeding task won't exist together, so the input data to the succeeding task will not be changed by the preceding task (since it's already done). There are only a couple of operations that the dataflow "engine" should be able to handle: A mechanism where a task can queue more tasks A mechanism whereby a successor task is not queued until all the required input tasks are finished A mechanism whereby the main thread (or other threads not managed by the executor) blocks until the flow is finished A mechanism whereby the main thread (or other threads not managed by the executor) blocks until certain tasks have finished Since the dataflow is dynamic (depends on input/state of the task), the activation of these mechanisms should occur within the task code, e.g. the code in a Callable is itself responsible for queueing more Callables. The dataflow "internals" should not be exposed to the tasks (Callables) themselves - only the operations listed above should be available to the task. Note that the type of the data is not necessarily the same for all tasks, e.g. a file download task may accept a File as input but will output a String. 
If a task throws an uncaught exception (indicating some fatal error requiring all dataflow processing to stop), it must propagate up to the thread that initiated the dataflow as quickly as possible and cancel all tasks (or something fancier like a fatal error handler). Tasks should be launched as soon as possible. This, along with the previous requirement, should preclude simple Future polling + Thread.sleep(). As a bonus, I would like the dataflow engine itself to perform some action (like logging) every time a task is finished, or when no task has finished in X time since the last task finished. Something like: ExecutorCompletionService<T> ecs; while (hasTasks()) { Future<T> future = ecs.poll(1 minute); some_action_like_logging(); if (future != null) { future.get() ... } ... } Are there straightforward ways to do all this with the Java concurrency API? Or, if it's going to be complex no matter what with what's available in the JDK, is there a lightweight library that satisfies the requirements? I already have a partial solution that fits my particular use case (it cheats in a way, since I'm using two executors, and just so you know, it's not related at all to the web browser example I gave above), but I'd like to see a more general-purpose and elegant solution.
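    For what it's worth, here is a minimal sketch of the core mechanism only (it is not a framework, uses a Phaser so it needs JDK 7, and deliberately ignores fan-in joins, typed outputs and fatal-error propagation): tasks get a handle they can use to queue successors, and an outstanding-task count lets the initiating thread block until the whole flow has drained:

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Phaser;

        public class TinyDataflow {
            /** A task that may queue successor tasks through the engine handle. */
            public interface FlowTask {
                void run(TinyDataflow engine) throws Exception;
            }

            private final ExecutorService executor;
            // One "party" per in-flight task plus one for the awaiting caller.
            private final Phaser outstanding = new Phaser(1);

            public TinyDataflow(ExecutorService executor) {
                this.executor = executor;
            }

            /** Queue a task; callable from the initiating thread or from inside another task. */
            public void submit(final FlowTask task) {
                outstanding.register();
                executor.execute(new Runnable() {
                    public void run() {
                        try {
                            task.run(TinyDataflow.this);
                        } catch (Exception e) {
                            e.printStackTrace(); // a real engine would cancel the whole flow here
                        } finally {
                            outstanding.arriveAndDeregister();
                        }
                    }
                });
            }

            /** Block until every queued task, including dynamically added ones, has finished. */
            public void awaitCompletion() {
                outstanding.arriveAndAwaitAdvance();
            }

            public static void main(String[] args) {
                ExecutorService pool = Executors.newFixedThreadPool(4);
                TinyDataflow flow = new TinyDataflow(pool);
                flow.submit(new FlowTask() {
                    public void run(TinyDataflow engine) {
                        System.out.println("download html");
                        engine.submit(new FlowTask() {        // dynamic successor
                            public void run(TinyDataflow e) { System.out.println("parse html"); }
                        });
                    }
                });
                flow.awaitCompletion();
                pool.shutdown();
            }
        }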

    Read the article

  • Problems with installing jcc and pylucene

    - by Christian
    I'm trying to install PyLucene on Windows XP. I installed the JDK in C:\Programme\Java\jdk1.6.0_18. I also installed Visual Studio C++ Express to have a C++ compiler. As a first step I'm trying to integrate JCC into Python 2.6 with the command: C:\Python26\python.exe setup.py build This gives me the following result: C:\Installfiles\pylucene-3.0.1-1\jcc>C:\Python26\python.exe setup.py build Traceback (most recent call last): File "setup.py", line 332, in <module> main('--debug' in sys.argv) File "setup.py", line 289, in main raise type(e), "%s: %s" %(e, args) WindowsError: [Error 2] Das System kann die angegebene Datei nicht finden [the system cannot find the file specified]: ['javac.exe', '-d', 'jcc/classes', 'java/org/apache/jcc/PythonVM.java', 'java/org/apache/jcc/PythonException.java'] Other information: In the system settings I set: User variables: CLASSPATH C:\Programme\Java\jdk1.6.0_18\bin\javac.exe System variables: Path %SystemRoot%\system32;%SystemRoot%;%SystemRoot%\System32\Wbem; C:\Programme\Java\jdk1.6.0_18\bin Where does the error come from and what do I have to do to overcome it?

    Read the article
