Search Results

Search found 7291 results on 292 pages for 'runtime exec'.


  • Splitting an MP4 file

    - by Asaf Chertkoff
    what is the fastest and least resource-consuming method for splitting an MP4 file? @Alex: it didn't work, I don't know why. See the output here:

        asafche@asafche-laptop:~$ ffmpeg -vcodec copy -ss 0 -t 00:10:00 -i /home/asafche/Videos/myVideos/MAH00124.MP4 /home/asafche/Videos/myVideos/eh.mp4
        FFmpeg version SVN-r0.5.1-4:0.5.1-1ubuntu1.1, Copyright (c) 2000-2009 Fabrice Bellard, et al.
          configuration: --extra-version=4:0.5.1-1ubuntu1.1 --prefix=/usr --enable-avfilter --enable-avfilter-lavf --enable-vdpau --enable-bzlib --enable-libgsm --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-pthreads --enable-zlib --disable-stripping --disable-vhook --enable-runtime-cpudetect --enable-gpl --enable-postproc --enable-swscale --enable-x11grab --enable-libdc1394 --enable-shared --disable-static
          libavutil   49.15.0 / 49.15.0
          libavcodec  52.20.1 / 52.20.1
          libavformat 52.31.0 / 52.31.0
          libavdevice 52.1.0  / 52.1.0
          libavfilter 0.4.0   / 0.4.0
          libswscale  0.7.1   / 0.7.1
          libpostproc 51.2.0  / 51.2.0
          built on Mar 31 2011 18:53:20, gcc: 4.4.3
        Seems stream 0 codec frame rate differs from container frame rate: 119.88 (120000/1001) -> 59.94 (60000/1001)
        Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/home/asafche/Videos/myVideos/MAH00124.MP4':
          Duration: 00:15:35.96, start: 0.000000, bitrate: 5664 kb/s
            Stream #0.0(und): Video: h264, yuv420p, 1280x720, 59.94 tbr, 59.94 tbn, 119.88 tbc
            Stream #0.1(und): Audio: aac, 48000 Hz, stereo, s16
        Output #0, mp4, to '/home/asafche/Videos/myVideos/eh.mp4':
            Stream #0.0(und): Video: libx264, yuv420p, 1280x720, q=2-31, 90k tbn, 59.94 tbc
            Stream #0.1(und): Audio: 0x0000, 48000 Hz, stereo, s16, 64 kb/s
        Stream mapping:
          Stream #0.0 -> #0.0
          Stream #0.1 -> #0.1
        Unsupported codec for output stream #0.1

    it says something about different frame rate...
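    A hint for anyone hitting the same wall: the fatal line is the last one, "Unsupported codec for output stream #0.1", and stream #0.1 is the audio. In this ffmpeg build, options apply to the file name that follows them, so "-vcodec copy" placed before "-i" never reached the output (note the output video stream says libx264, i.e. a re-encode), and no usable audio encoder was found at all. Moving the copy flags after "-i" and copying the audio too should avoid any re-encoding; the frame-rate message is only a warning. Since this page is all about runtime exec, here is a minimal, hedged Java sketch that shells out to the corrected command (paths are the ones from the question; flag placement assumed from that ffmpeg version's conventions):

        import java.io.IOException;

        public class Mp4Splitter {
            public static void main(String[] args) throws IOException, InterruptedException {
                // Split out the first ten minutes without re-encoding either stream.
                ProcessBuilder pb = new ProcessBuilder(
                        "ffmpeg",
                        "-i", "/home/asafche/Videos/myVideos/MAH00124.MP4",
                        "-ss", "0",          // start offset
                        "-t", "00:10:00",    // segment duration
                        "-acodec", "copy",   // copy the AAC audio instead of re-encoding it
                        "-vcodec", "copy",   // copy the H.264 video as-is
                        "/home/asafche/Videos/myVideos/eh.mp4");
                pb.inheritIO();              // surface ffmpeg's console output
                int exitCode = pb.start().waitFor();
                System.out.println("ffmpeg exited with code " + exitCode);
            }
        }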

    Read the article

  • TomEE Integration in NetBeans Next

    - by Geertjan
    At JavaOne 2013, there was a lot of buzz around the TomEE server, e.g., many Tweets, a nice party, and a new TomEE consulting company. For those tracking TomEE developments, it is interesting to note that recently the NetBeans IDE development builds have had added to them... TomEE support. Note: The TomEE support described here is not in NetBeans IDE 7.4, but in development builds for the next release of NetBeans IDE. For example, with NetBeans IDE development builds you're able to:

    - register TomEE as a server in the Services window (TomEE has several distributions, e.g., one can use the "with JAX-RS" one)
    - create a Java EE 6 web project (e.g., Maven based) against this server
    - create JPA entities from a database
    - create JAX-RS classes from JPA entities
    - create JSF pages from JPA entities
    - let the IDE create a new data source for TomEE and deploy it to the server
    - let the IDE figure out the components that are already packaged in TomEE, and the fact that (unlike with regular Tomcat) it does not need to package any components such as a JSF implementation, persistence provider, or JAX-RS runtime, so the resulting WAR file is very small
    - use "deploy on save" with TomEE, so your development cycle is very fast

    Adam Bien blogged about how he set up TomEE some time ago, here. The official support in NetBeans IDE will be much more tightly integrated, simplifying the steps Adam describes. For example, the IDE does step 2 from Adam's blog for you, i.e., it sets up the TomEE deployment roles. Moreover, it knows about all the technologies included in TomEE so that it can optimize the packaging; it knows about TomEE's persistence setup; it can work with TomEE data sources, etc. Below you see a Maven-based Java EE 6 PrimeFaces application (all entities and JSF pages generated from a database) deployed to TomEE in NetBeans IDE. And here's the management console for configuring and fine-tuning TomEE in NetBeans IDE. When I tried out the NetBeans IDE development build and TomEE, to see how everything fits together, I was surprised at how fast TomEE started up. Not sure what they did to it, but it seems like a server on steroids. And setting it up in NetBeans IDE was trivial. Add the simple setup of TomEE in NetBeans IDE to the many benefits that the widely praised out-of-the-box NetBeans Maven tools make possible, together with the fact that not one single plugin had to be installed to get everything described here up and running... and you have a really powerful combination of dev tools, all for free.

    Read the article

  • Windows8, JavaScript and HTML5 - A good thing?

    - by Albers
    Most of us have seen the Windows 8 news regarding support for native HTML5/JavaScript applications. The press has pushed this as a potential threat to the .NET developer community because JavaScript and HTML5 were called "our new developer platform". The press release refers to "Web-connected and Web-powered apps built using HTML5 and JavaScript that have access to the full power of the PC." Microsoft has also been hush-hush on details related to these comments. Before we buy the hype and start worrying about a world where we drop our Visual Studio licenses and buy DreamWeaver, let's think about how Windows 8 HTML/JavaScript applications would be implemented. The HTML5 spec offers support for offline applications, but this won't offer the OS-integrated experience the press release refers to. MS has to be planning a way to extend access beyond the traditional JavaScript feature set. Microsoft has a similar option today: HTML Applications, or HTAs. They come close to the required features, but HTAs need ActiveX or Java integration to provide the promised OS-level access. I'm guessing that Microsoft's future OS strategy isn't built on developers cranking out ActiveX controls or Java applets. So where is Microsoft headed? One possibility is that MS builds a new JavaScript framework from the ground up outside their current APIs. Another idea would be for Microsoft to add support for JavaScript as a first-class .NET language using the Dynamic Language Runtime. A solution based on the DLR could be integrated into an HTA-like model to provide the promised access, along with the full range of features in the .NET Framework. Security comes included in the Framework. And the work necessary to support this integration would tie in nicely with the effort MS has recently made providing better JavaScript and HTML5 support in Visual Studio 2010. As a bonus, a full-fledged JavaScript DLR implementation would allow single-language web solutions across client and server (think node.js) and would appeal to developers who are familiar with JavaScript but have less experience with the Microsoft tech stack. We will all get a better picture after the Build conference in September, but in the meantime we know that Microsoft has a reputation for providing strong developer support. We might want to reserve our harshest judgement and consider that the press release could hint at new opportunities for .NET development.

    Read the article

  • NHibernate and Stored Procedures in C#

    - by Jess Nickson
    I was recently trying and failing to set up NHibernate (v1.2) in an ASP.NET project. The aim was to execute a stored procedure and return the results, but it took several iterations for me to end up with a working solution. In this post I am simply trying to put the required code in one place, in the hope that the snippets may be useful in guiding someone else through the same process. As it is kind've the first time I have had to play with NHibernate, there is a good chance that this solution is sub-optimal and, as such, I am open to suggestions on how it could be improved! There are four code snippets that I required: the stored procedure that I wanted to execute, the C# class representation of the results of the procedure, the XML mapping file that allows NHibernate to map from C# to the procedure and back again, and the C# code used to run the stored procedure.

    The Stored Procedure. The procedure was designed to take a UserId and, from this, go and grab some profile data for that user. Simple, right? We just need to do a join first, because the user's site ID (the one we have access to) is not the same as the user's forum ID.

        CREATE PROCEDURE [dbo].[GetForumProfileDetails]
        (
            @userId INT
        )
        AS
        BEGIN
            SELECT Users.UserID,
                   forumUsers.Twitter,
                   forumUsers.Facebook,
                   forumUsers.GooglePlus,
                   forumUsers.LinkedIn,
                   forumUsers.PublicEmailAddress
            FROM Users
            INNER JOIN Forum_Users forumUsers
                ON forumUsers.UserSiteID = Users.UserID
            WHERE Users.UserID = @userId
        END

    I'd like to make a shout out to Format SQL for its help with, well, formatting the above SQL!

    The C# Class. This is just the class representation of the results we expect to get from the stored procedure. NHibernate requires a virtual property for each column of data, and these properties must be called the same as the column headers. You will also need to ensure that there is a public or protected parameterless constructor.

        public class ForumProfile : IForumProfile
        {
            public virtual int UserID { get; set; }
            public virtual string Twitter { get; set; }
            public virtual string Facebook { get; set; }
            public virtual string GooglePlus { get; set; }
            public virtual string LinkedIn { get; set; }
            public virtual string PublicEmailAddress { get; set; }

            public ForumProfile() { }
        }

    The NHibernate Mapping File. This is the XML I wrote in order to make NHibernate a) aware of the stored procedure, and b) aware of the expected results of the procedure.

        <?xml version="1.0" encoding="utf-8" ?>
        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"
                           namespace="[namespace]" assembly="[assembly]">
          <sql-query name="GetForumProfileDetails">
            <return-scalar column="UserID" type="Int32"/>
            <return-scalar column="Twitter" type="String"/>
            <return-scalar column="Facebook" type="String"/>
            <return-scalar column="GooglePlus" type="String"/>
            <return-scalar column="LinkedIn" type="String"/>
            <return-scalar column="PublicEmailAddress" type="String"/>
            exec GetForumProfileDetails :UserID
          </sql-query>
        </hibernate-mapping>

    Calling the Stored Procedure. Finally, to bring it all together, the C# code that I used in order to execute the stored procedure!

        public IForumProfile GetForumUserProfile(IUser user)
        {
            return NHibernateHelper
                .GetCurrentSession()
                .GetNamedQuery("GetForumProfileDetails")
                .SetInt32("UserID", user.UserID)
                .SetResultTransformer(
                    Transformers.AliasToBean(typeof (ForumProfile)))
                .UniqueResult<ForumProfile>();
        }

    There are a number of 'Set' methods (i.e. SetInt32) that allow you to specify values for any parameters in the procedure. The AliasToBean method is then required to map the returned scalars (as specified in the XML) to the correct C# class.

    Read the article

  • ODI 11g - Cleaning control characters and User Functions

    - by David Allan
    In ODI, user functions have a poor name really; they should be called user expressions - a way of wrapping common expressions that you may wish to reuse many times. That they work across many different technologies is an added bonus. To illustrate, look at the problem of how to remove control characters from text. Users ask these types of questions across all technologies - Microsoft SQL Server, Oracle, DB2 - and have for many years: how do I clean a string, how do I tokenize a string, and so on. After some searching around you will find a few ways of doing this; in Oracle there is a convenient way of using the TRANSLATE and REPLACE functions. So you can convert some text using the following SQL:

        replace(
          translate('This is my string'||chr(9)||' which has a control character',
                    chr(3)||chr(4)||chr(5)||chr(9),
                    chr(3)),
          chr(3), '')

    If you had many columns to perform this kind of transformation on, the natural solution in the Oracle database would be to code this as a PL/SQL function, since you don't want the code splattered everywhere. But then someone telling you that there is another control character that needs to be added equals a maintenance headache, and coding it as a PL/SQL function will incur a context switch between SQL and PL/SQL which could prove costly. In ODI, user functions let you capture this expression text and reference it many times across your mappings. This will protect the expression from being copy-pasted by developers and make maintenance much simpler - change the expression definition in one place. Firstly, define a name and a syntax for the user function; I am calling it UF_STRIP_BAD_CHARACTERS and it has one parameter, an input string. We can then define an implementation for each technology we will use it on; I will define Oracle's using the inputString parameter and the TRANSLATE and REPLACE functions with whatever control characters I want to replace. I can then use this inside mapping expressions in ODI; below I am cleaning the ENAME column - a fabricated example but you get the gist. Note that when I use the user function, the function name remains in the text of the mapping; the actual expression is not substituted until I generate the scenario. If you generate the scenario and export it, you can have a peek at the code that is processed in the runtime - below you can see a snippet of my exported scenario. That's all for now; hopefully a useful snippet of info.

    Read the article

  • StreamInsight 1.0 Released

    - by Roman Schindlauer
    One piece in the set of products offered in SQL Server 2008 R2 that has generated a lot of buzz and interest during its CTP phase is StreamInsight, Microsoft's platform for Complex Event Processing. Microsoft's information platform vision provides enterprises with a "complete approach" to managing information assets, enabling all businesses to gain strategic value from information from the desktop to the datacenter to the cloud, and StreamInsight V1 is one essential piece in this spectrum. After more than a year of blood, sweat, tears, and insane amounts of coffee we are proud to release the first version of our Complex Event Processing Framework. Those of you who have been following our Community Technology Previews (CTPs) throughout last year have already had the chance to familiarize yourselves with the product. Early feedback was not only incredibly positive, but also very constructive and strongly influenced the final feature set. Four notable increments over our last public CTP are:

    - Count windows
    - Non-occurrence detection (Anti-Join)
    - Dynamic query composition at runtime
    - Synchronize time across input streams

    Additionally, many smaller issues and bugs were addressed. A few APIs slightly changed with respect to the November CTP, but porting your application to RTM should not require a lot of effort. Here are the (English) bits - choosing the evaluation license during setup lets you play with this version right away. Before you install, make sure to uninstall any previous CTP version:

    - StreamInsight X86
    - StreamInsight X64

    Within a few days, we will update our product page and add download links and instructions there as well. The StreamInsight documentation is provided through a help file as part of the installation as well as through Books Online on MSDN. We also invite you to visit the StreamInsight Blog and the StreamInsight Forum, which is a great place to discuss questions and issues with the community and the development team. Regards, Roman

    Read the article

  • Library to fake intermittent failures according to tester-defined policy?

    - by crosstalk
    I'm looking for a library that I can use to help mock a program component that works only intermittently - usually it works fine, but sometimes it fails. For example, suppose I need to read data from a file, and my program has to avoid crashing or hanging when a read fails due to a disk head crash. I'd like to model that by having a mock data reader function that returns mock data 90% of the time, but hangs or returns garbage otherwise. Or, if I'm stress-testing my full program, I could turn on debugging code in my real data reader module to make it return real data 90% of the time and hang otherwise. Now, obviously, in this particular example I could just code up my mock manually to test against a random() routine. However, I was looking for a system that allows implementing any failure policy I want, including:

    - Fail randomly 10% of the time
    - Succeed 10 times, fail 4 times, repeat
    - Fail semi-randomly, such that one failure tends to be followed by a burst of more failures
    - Any policy the tester wants to define

    Furthermore, I'd like to be able to change the failure policy at runtime, using either code internal to the program under test, or external knobs or switches (though the latter can be implemented with the former). In pig-Java, I'd envision a FailureFaker interface like so:

        interface FailureFaker {
            /** Return true if and only if the mocked operation succeeded.
                Implementors should override this method with versions
                consistent with their failure policy. */
            public boolean attempt();
        }

    And each failure policy would be a class implementing FailureFaker; for example there would be a PatternFailureFaker that would succeed N times, then fail M times, then repeat, and an AlwaysFailFailureFaker that I'd use temporarily when I need to simulate, say, someone removing the external hard drive my data was on. The policy could then be used (and changed) in my mock object code like so:

        class MyMockComponent {
            FailureFaker faker;

            public void doSomething() {
                if (faker.attempt()) {
                    // ...
                } else {
                    throw new RuntimeException();
                }
            }

            void setFailurePolicy(FailureFaker policy) {
                this.faker = policy;
            }
        }

    Now, this seems like something that would be part of a mocking library, so I wouldn't be surprised if it's been done before. (In fact, I got the idea from Steve Maguire's Writing Solid Code, where he discusses this exact idea on pages 228-231, saying that such facilities were common in Microsoft code of that early-90's era.) However, I'm only familiar with EasyMock and jMockit for Java, and neither AFAIK has this function, or something similar with different syntax. Hence, the question: Do such libraries as I've described above exist? If they do, where have you found them useful? If you haven't found them useful, why not?
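    For concreteness, here is a minimal sketch of the PatternFailureFaker described above (hypothetical code in the same pig-Java style, not from any existing mocking library):

        /** Succeeds N times, then fails M times, then repeats. */
        class PatternFailureFaker implements FailureFaker {
            private final int successes; // N
            private final int failures;  // M
            private int position = 0;    // where we are in the N+M cycle

            PatternFailureFaker(int successes, int failures) {
                this.successes = successes;
                this.failures = failures;
            }

            @Override
            public boolean attempt() {
                boolean result = position < successes; // succeed for the first N calls
                position = (position + 1) % (successes + failures); // then fail M, wrap
                return result;
            }
        }

    Swapping in a different policy via setFailurePolicy would then change the behavior at runtime, as the question envisions.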

    Read the article

  • How to integrate Java ME SDK 3.2 with NetBeans

    - by SungmoonCho
    Many people like to use the Java ME SDK with IDEs. We provided instructions on how to integrate the SDK with NetBeans through the download page, and also through the release notes; however, let me explain it here once again with some screenshots.

    1. Download the Java ME SDK and the NetBeans plugin from here.
    2. Install the Java ME SDK first. You will have the emulator and the runtime on your machine. Also, please unarchive the NetBeans plugin somewhere.
    3. Launch NetBeans.
    4. Go to "Tools" - "Plugins".
    5. Check out the "Installed" tab and check "Show details". If you see a previous version of the Java ME SDK Tools already installed, check those entries to uninstall them.
    6. Go to the "Settings" tab.
    7. Click "Add", and provide the location of the NetBeans plugin. In my case, it is "file:/C:/Users/sungcho/Downloads/nb-me-sdk-plugins-uc/updates.xml". Don't forget to add "updates.xml" at the end.
    8. Click "Okay".
    9. Click the "Available Plugins" tab.
    10. If you scroll down, you will see three Java ME SDK Tools plugins. Check the "Java ME SDK Tools" plugin, and also check the others as you desire.
    11. Follow the instructions and install them.
    12. Restart NetBeans.
    13. That is it. Done. Now you will see Oracle Java ME SDK 3.2 in your Java Platform list.

    Read the article

  • Referencing a picture in another DLL in Silverlight and Windows Phone 7

    - by Laurent Bugnion
    This one has burned me a few times, so here is how it works for future reference. Usually, when I add an Image control into a Silverlight application, and the picture it shows is local (as opposed to loaded from the web), I set the picture's Build Action to Content, and Copy to Output Directory to Copy if Newer. What the compiler does then is copy the picture to the bin\Debug folder, and then pack it into the XAP file. In XAML, the syntax to refer to this local picture is:

        <Image Source="/Images/mypicture.jpg" Width="100" Height="100" />

    And in C#:

        return new BitmapImage(new Uri(
            "/Images/mypicture.jpg", UriKind.Relative));

    One of the features of Silverlight is to allow referencing content (pictures, resource dictionaries, sound files, movies etc.) located in a DLL directly. This is very handy because just by using the right syntax in the URI, you can do this in XAML directly, for example with:

        <Image Source="/MyApplication;component/Images/mypicture.jpg" Width="100" Height="100" />

    In C#, this becomes:

        return new BitmapImage(new Uri(
            "/MyApplication;component/Images/mypicture.jpg", UriKind.Relative));

    Side note: this kind of URI is called a pack URI, and they have been around since the early days of WPF. There is a good tutorial about pack URIs on MSDN. Even though it refers to WPF, it also applies to Silverlight.

    Side note 2: with the Build Action set to Content, you can rename the XAP file to ZIP, extract all the files, change the picture (but keep the same name), re-zip the whole thing and rename it again to XAP. This is not possible if the picture is embedded in an assembly!

    So what's the catch? Well, the catch is that this does not work if you set the Build Action to Content. It's actually pretty simple to explain: the pack URI above tells the Silverlight runtime to look within an assembly named MyApplication for a file named mypicture.jpg in the Images folder. If the file is included as Content, however, it is not in the assembly. Silverlight does not find it, and silently returns nothing. The image is not displayed.

    And the fix? The fix, for class libraries, is to set the Build Action to Resource. With this, the picture gets packed into the DLL itself. Of course, this will increase the size of the DLL, and any change to the picture will require recompiling the class library, which is not ideal. But in the cases where you want to distribute pictures (icons etc.) together with a plug-in assembly, well, this is a good way to have everything in the same place.

    Happy coding, Laurent

    Read the article

  • Rolling Along: PASS Board Year 2, Q2

    - by Denise McInerney
    Eighteen months into my time as a PASS Director, I'm especially proud of what the Virtual Chapters have accomplished and want to share that progress with you. I'm also pleased that the organization has invested more resources to support the VCs. In this quarter I got to attend two conferences and meet more members of the SQL community.

    Virtual Chapters. In the first six months of 2013, VCs have hosted more than 50 webinars, offering free technical education to over 6200 attendees. This is a great benefit to PASS members; thanks to the VC leaders, volunteers and speakers who contribute their time to produce these events. The Performance VC held their "Summer Performance Palooza", an event featuring eight back-to-back sessions. Links to the session recordings can be found on the VC's web site. The new webinar platform, GoToWebinar, has been rolled out to all the VCs. This is a more stable, scalable platform and represents an important investment in the future of the VCs. A few new VCs are in the planning stages, including one focused on security and one for Russian speakers. Visit the Virtual Chapter home page to sign up for the chapters that interest you. Each Virtual Chapter is offering a discount code for PASS Summit 2013; be sure to ask your VC leader for the code to save $200 on Summit registration.

    24 Hours of PASS. The next 24HOP will be on July 31. This Summit Preview edition will feature 24 consecutive webcasts presented by experts who will be speaking at Summit in October. Registration for this free event is open now, and we will be using the GoToWebinar platform for 24HOP as well.

    Business Analytics Conference. April marked the first PASS Business Analytics Conference in Chicago. This introduced PASS to another segment of data professionals: the analysts and data scientists who work with the world's growing collection of data. Overall the inaugural event was a success and gave us a glimpse into this increasingly important space. After Chicago the Board had several serious discussions about the lessons learned from this event and what we should do next. We agreed to apply those lessons and continue to invest in this event; there will be a PASS Business Analytics Conference in 2014. I'm very pleased the next event will be in San Jose, CA, the heart of Silicon Valley, a place where a great deal of investment and innovation in data analytics is taking place.

    Global SQL Community. Over the last couple of years PASS has been taking steps to become more relevant to SQL communities in different parts of the world. In May I had the opportunity to attend SQL Bits XI in Nottingham, England. It was enlightening to meet and talk with SQL professionals from around the U.K. as well as many other European countries. The many SQL Bits volunteers put on a great event and were gracious hosts.

    Budgets. The Board passed the FY14 budget at the end of June. The budget process can be challenging and requires the Board to make some difficult choices about where to allocate resources. Overall I'm satisfied with the decisions we made and think we are investing in the right activities and programs.

    Next Up. The Board is meeting July 18-19 in Kansas City. We will be holding the Executive Committee election for the Exec Co that will take office in 2014. We will also be discussing plans for the next BA conference as well as the next steps for our Global Growth initiative. Applications for the upcoming Board of Directors election open on July 24. If you are considering running for the Board, you can visit the PASS elections site to learn more about the election process, and I encourage anyone considering running to reach out to current and past Board members to learn what the role entails. Plans for the next PASS Summit are in full swing. We are working on some fun new ideas to introduce attendees to the many ways to become involved in the SQL community.

    Read the article

  • Telesharp – An Application Repository for .NET applications

    - by cibrax
    A year ago, we released SO-Aware as our first product at Tellago Studios. SO-Aware represented a new way to manage web services and all the related artifacts, like configuration, tests or monitoring data, in the Microsoft stack. It was based on the idea of using a lightweight SOA governance approach with a central repository exposed through RESTful services. At that point, we thought the same idea could be extended to enterprise applications in general by providing a generic repository for many of the runtime or design-time artifacts generated during development, like configuration, application description or topology (a high-level view of the components that make up a system), logging information or binaries. It took us several months to give form to that idea and implement it as a product, but it is finally here and I am very proud to announce the release today under the name of "TeleSharp". In a nutshell, TeleSharp provides the following features:

    1. Configure your application topology in a central repository. Application topology in this context means that you can decompose your application and describe it in terms of components and how they interact with each other. For example, you can say that the CRM system is made up of a couple of WCF services and an ASP.NET MVC front end.
    2. Centralize configuration for your applications and components. You can import existing .NET configuration sections into the repository and associate them to the different components. In addition, environment overrides are supported for the configuration sections. We provide tooling and extensions in Visual Studio for managing all the configuration, and a set of PowerShell commands for automating the configuration deployment.
    3. Browse all the assemblies and types remotely on your application servers in a web browser, using an interface similar to any of the existing .NET reflection tools. This way you can easily determine whether the server is running the correct version of your applications.
    4. Centralize logging and exception management into the repository. You get different reports and a pivot-viewer experience for browsing all the logging information generated by your applications. In addition, TeleSharp provides different providers for pushing the logging information to the central repository using well-known frameworks like ELMAH, Log4Net, EntLib or even Windows ETW.

    The central repository itself is implemented as a set of OData services that any application can easily consume using regular Http. You can read more details in this introductory post. If you think this product can be a good fit for your organization, you can request a trial version on our Tellago Studios website.

    Read the article

  • Solution for developers wanting to run a standalone WLS 10.3.6 server against JDev 11.1.1.6.0

    - by Chris Muir
    In my previous post I discussed how to install the 11.1.1.6.0 ADF Runtimes into a standalone WLS 10.3.6 server by using the ADF Runtime installer, not the JDeveloper installer. Yet there's still a problem for developers here, because JDeveloper 11.1.1.6.0 comes coupled with a WLS 10.3.5 server. What if you want to develop, deploy and test with a 10.3.6 server? Have we lost the ability to integrate the IDE and the WLS server, where we can run and stop the server, deploy our apps automatically to the server and more? JDeveloper actually solved this issue some time back, but not many people will have recognized the feature for what it does, as it wasn't needed until now. Via the Application Server Navigator you can create two types of connections: one to a remote "standalone WLS" and another to an "integrated WLS". It's this second option that is useful, because what we can do is install a local standalone WLS 10.3.6 server on our developer PC, then create a separate "integrated WLS" connection to the standalone server. Then, by accessing your application's properties through the Application menu -> Application Properties -> Run -> Bind to Integration Application Server option, we can choose the newly created WLS server connection to work with our application. In this way JDeveloper will now treat the new server as if it were the integrated WLS: it will start when we run and deploy our applications, terminate it on request, and so on. Of course, don't forget you still need to install the ADF Runtimes for the server to be able to work with ADF applications. Note there is bug 13917844 lurking in the Application Server Navigator for at least JDev 11.1.1.6.0 and earlier. If you right-click the new connection and select "Start Server Instance", it will often start one of the other existing connections instead (typically the original IntegratedWebLogicServer connection). If you want to manually start the server, you can bypass this by using the Run menu -> Start Server Instance option, which works correctly.

    Read the article

  • PowerShell: Read Excel to Create Inserts

    - by BuckWoody
    I'm writing a series of articles on how to migrate "departmental" data into SQL Server. I also hold workshops on the entire process - from discovering that the data exists, to the modeling process, and then how to design the Extract, Transform and Load (ETL) process. Finally, I write about (and teach) a few methods of actually moving the data. One of those options is to use PowerShell. There are a lot of ways even within that choice, but the one I show is to read two columns from the spreadsheet and output statements that would insert the data using a stored procedure. Of course, you could re-write this as INSERT statements, output to a text file for bcp, or even use a database connection in the script to move the data directly from Excel into SQL Server. This snippet won't run on your system, of course - it assumes a Microsoft Office Excel 2007 spreadsheet located at c:\temp called VendorList.xlsx. It looks for a tab in that spreadsheet called Vendors. The statement that does the writing just uses one column: Vendor Code. Here's the breakdown of what I'm doing: In the first block, I connect to Microsoft Office Excel. That connection string is specific to Excel 2007, so if you need a different version you'll need to look that up. In the second block I set up a selection from the entire spreadsheet based on that tab. Note that if you're only after certain data you shouldn't get the whole spreadsheet - that's just good practice. In the next block I create the text I want, inserting the Vendor Code field as I go. Finally I close the connection. Enjoy!

        $ExcelConnection = New-Object -com "ADODB.Connection"
        $ExcelFile = "c:\temp\VendorList.xlsx"
        $ExcelConnection.Open("Provider=Microsoft.ACE.OLEDB.12.0;`
            Data Source=$ExcelFile;Extended Properties=Excel 12.0;")
        $strQuery = "Select * from [Vendors$]"
        $ExcelRecordSet = $ExcelConnection.Execute($strQuery)
        do {
            Write-Host "EXEC sp_InsertVendors '" $ExcelRecordSet.Fields.Item("Vendor Code").Value "'"
            $ExcelRecordSet.MoveNext()
        } Until ($ExcelRecordSet.EOF)
        $ExcelConnection.Close()

    Script disclaimer, for people who need to be told this sort of thing: never trust any script, including those that you find here, until you understand exactly what it does and how it will act on your systems. Always check the script on a test system or virtual machine, not a production system. All scripts on this site are performed by a professional stunt driver on a closed course. Your mileage may vary. Void where prohibited. Offer good for a limited time only. Keep out of reach of small children. Do not operate heavy machinery while using this script. If you experience blurry vision, indigestion or diarrhea during the operation of this script, see a physician immediately.

    Read the article

  • Play or Lift: which one is more explicit?

    - by Andrea
    I am going to investigate web development with Scala, and the choice is between learning Lift or Play: probably I will not have enough time to try both, at least at first. Now, many comparisons between the two are available on the internet, but I would like to know how they compare with respect to being explicit and involving less magic. Let me explain what I mean by example. I have used, to various degrees, CakePHP, symfony2, Django and Grails. I feel a very clear distinction between Django and symfony2, which are very explicit about what you are doing, and Grails and CakePHP, which try their best to guess what you are trying to achieve and often feel "magical". Let me give some examples comparing Django and Grails. In Django, views are functions that take a request as input and return a response. You can instantiate explicitly an instance of HttpResponse and populate its body with a string, or you can use shortcut functions to leverage the template system. In any case, the return value from your view always has the same type. In contrast, the render method from Grails is highly polymorphic. You can throw a context at it and it will try to render a template which is found by convention using that context. Or you can pass it a pair of a template path and a context and that will work too. Or a string. Or XML. Grails tries hard to make sense of whatever you return from your controller. In the Django ORM, each model class has a static attribute representing the manager for that class. That manager exposes a fluent interface to build querysets. In Grails, you can have similar functionality by composing detached criteria. Still, the most common way to query objects seems to be the use of runtime-generated methods like FindUserByEmailNotNull or FindPostByDateGreaterThan. I will not go further, but my point is that in Django-like frameworks you have control over the whole flow of the request/response process, while in Grails-like ones I feel I only have to fill in the blanks and the framework will manage the rest of the flow for me. This is not to criticize Grails or CakePHP; which type you prefer is mainly a matter of preference. In fact, I happen to like some aspects of Grails, but I feel more comfortable with a framework which does less for me. Back to the point of the question: which one among Play and Lift is more explicit about what you do, and which one tries to simplify more of what you have to do with a layer of "magic"?

    Read the article

  • EJB Named Criteria - Apply bind variable in Backingbean

    - by Deepak Siddappa
    EJB named criteria are predefined and reusable where-clause definitions that are dynamically applied to a ViewObject query. We often use them to filter the ViewObject SQL query based on where-clause conditions. Take a scenario where we need to filter the SQL query on such conditions: instead of playing with SQL statements, use an EJB named criteria (supported by default in ADF) and set the bind variable parameter at run time. You can download the sample workspace from here [runs with Oracle JDeveloper 11.1.2.0.0 (11g R2) + HR schema].

    Implementation steps: create a Java EE web application with an entity based on the Employees table, then create a session bean and a data control for the session bean. Open the DataControls.dcx file and create a sparse XML for it as shown below. In the sparse XML, navigate to the Named Criteria tab -> Bind Variable section and create the bind variable deptId. Now create a named criteria and map the query attributes to the bind variable. In the ViewController, create an index.jspx page; from the Data Controls palette, drop employeesFindAll -> Named Criteria -> EmployeesCriteria -> Table as an ADF read-only filtered table, and create the backing bean as "IndexBean". Open the index.jspx page, remove the "filterModel" binding from the table, add an <af:inputText /> and a command button, and bind them to the backing bean. For the command button, create the actionListener as "applyEmpCriteria" and add the code below to the file.

        public void applyEmpCriteria(ActionEvent actionEvent) {
            DCIteratorBinding dc =
                (DCIteratorBinding) evaluteEL("#{bindings.employeesFindAllIterator}");
            ViewObject vo = dc.getViewObject();
            vo.applyViewCriteria(
                vo.getViewCriteriaManager().getViewCriteria("EmployeesCriteria"));
            vo.ensureVariableManager().setVariableValue("deptId",
                this.getDeptId().getValue());
            vo.executeQuery();
        }

        /**
         * Programmatic evaluation of EL.
         *
         * @param el EL to evaluate
         * @return result of the evaluation
         */
        public Object evaluteEL(String el) {
            FacesContext fctx = FacesContext.getCurrentInstance();
            ELContext elContext = fctx.getELContext();
            Application app = fctx.getApplication();
            ExpressionFactory expFactory = app.getExpressionFactory();
            ValueExpression valExp =
                expFactory.createValueExpression(elContext, el, Object.class);
            return valExp.getValue(elContext);
        }

    Run the index.jspx page, enter a departmentId value of 90, and click the ApplyEmpCriteria button. The bind variable for the named criteria will now be applied at runtime in the backing bean, and it will re-execute the ViewObject query to filter on the where-clause condition.

    Read the article

  • Who should control navigation in an MVVM application?

    - by SonOfPirate
    Example #1: I have a view displayed in my MVVM application (let's use Silverlight for the purposes of the discussion) and I click on a button that should take me to a new page. Example #2: That same view has another button that, when clicked, should open up a details view in a child window (dialog). We know that there will be Command objects exposed by our ViewModel bound to the buttons with methods that respond to the user's click. But, what then? How do we complete the action? Even if we use a so-called NavigationService, what are we telling it? To be more specific, in a traditional View-first model (like URL-based navigation schemes such as on the web or the SL built-in navigation framework) the Command objects would have to know what View to display next. That seems to cross the line when it comes to the separation of concerns promoted by the pattern. On the other hand, if the button wasn't wired to a Command object and behaved like a hyperlink, the navigation rules could be defined in the markup. But do we want the Views to control application flow and isn't navigation just another type of business logic? (I can say yes in some cases and no in others.) To me, the utopian implementation of the MVVM pattern (and I've heard others profess this) would be to have the ViewModel wired in such a way that the application can run headless (i.e. no Views). This provides the most surface area for code-based testing and makes the Views a true skin on the application. And my ViewModel shouldn't care if it displayed in the main window, a floating panel or a child window, should it? According to this apprach, it is up to some other mechanism at runtime to 'bind' what View should be displayed for each ViewModel. But what if we want to share a View with multiple ViewModels or vice versa? So given the need to manage the View-ViewModel relationship so we know what to display when along with the need to navigate between views, including displaying child windows / dialogs, how do we truly accomplish this in the MVVM pattern?

    Read the article

  • Would someone please explain Octree Collisions to me?

    - by A-Type
    I've been reading everything I can find on the subject and I feel like the pieces are just about to fall into place, but I just can't quite get it. I'm making a space game, where collisions will occur between planets, ships, asteroids, and the sun. Each of these objects can be subdivided into 'chunks', which I have implemented to speed up rendering (the vertices can and will change often at runtime, so I've separated the buffers). These subdivisions also have bounding primitives to test for collision. All of these objects are made of blocks (yeah, it's that kind of game). Blocks can also be tested for rough collisions, though they do not have individual bounding primitives for memory reasons. I think the rough testing seems to be sufficient, though. So, collision needs to be fairly precise; at block resolution. Some functions rely on two blocks colliding. And, of course, attacking specific blocks is important. Now what I am struggling with is filtering my collision pairs. As I said, I've read a lot about Octrees, but I'm having trouble applying it to my situation as many tutorials are vague with very little code. My main issues are: Are Octrees recalculated each frame, or are they stored in memory and objects are shuffled into different divisions as they move? Despite all my reading I still am not clear on this... the vagueness of it all has been frustrating. How far do Octrees subdivide? Planets in my game are quite large, while asteroids are smaller. Do I subdivide to the size of the planet, or asteroid (where planet is in multiple divisions)? Or is the limit something else entirely, like number of elements in the division? Should I load objects into the octrees as 'chunks' or in the whole, then break into chunks later? This could be specific to my implementation, I suppose. I was going to ask about how big my root needed to be, but I did manage to find this question, and the second answer seems sufficient for me. I'm afraid I don't really get what he means by adding new nodes and doing subdivisions upon adding new objects, probably because I'm confused about whether the tree is maintained in memory or recalculated per-frame.
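    To make the second question concrete: a common pattern is to subdivide on an object-count threshold plus a depth cap, rather than on the size of any particular planet or asteroid, and to keep objects that straddle a child boundary in the parent node. Below is a minimal illustrative sketch; Aabb, Entity and the thresholds are hypothetical stand-ins, not from any particular engine:

        import java.util.ArrayList;
        import java.util.List;

        class Aabb {
            final float minX, minY, minZ, maxX, maxY, maxZ;
            Aabb(float minX, float minY, float minZ, float maxX, float maxY, float maxZ) {
                this.minX = minX; this.minY = minY; this.minZ = minZ;
                this.maxX = maxX; this.maxY = maxY; this.maxZ = maxZ;
            }
            boolean contains(Aabb o) {
                return o.minX >= minX && o.maxX <= maxX
                    && o.minY >= minY && o.maxY <= maxY
                    && o.minZ >= minZ && o.maxZ <= maxZ;
            }
        }

        class Entity { Aabb bounds; Entity(Aabb b) { bounds = b; } }

        class Octree {
            static final int MAX_OBJECTS = 8; // assumed split threshold
            static final int MAX_DEPTH = 6;   // assumed depth cap
            final Aabb bounds;
            final int depth;
            final List<Entity> objects = new ArrayList<>();
            Octree[] children;                // null until this node splits

            Octree(Aabb bounds, int depth) { this.bounds = bounds; this.depth = depth; }

            void insert(Entity e) {
                if (children != null) {
                    for (Octree c : children) {
                        if (c.bounds.contains(e.bounds)) { c.insert(e); return; }
                    }
                    objects.add(e);           // straddles a boundary: keep it at this level
                    return;
                }
                objects.add(e);
                if (objects.size() > MAX_OBJECTS && depth < MAX_DEPTH) split();
            }

            private void split() {
                float mx = (bounds.minX + bounds.maxX) / 2;
                float my = (bounds.minY + bounds.maxY) / 2;
                float mz = (bounds.minZ + bounds.maxZ) / 2;
                children = new Octree[8];     // one child per octant
                for (int i = 0; i < 8; i++) {
                    children[i] = new Octree(new Aabb(
                        (i & 1) == 0 ? bounds.minX : mx, (i & 2) == 0 ? bounds.minY : my,
                        (i & 4) == 0 ? bounds.minZ : mz,
                        (i & 1) == 0 ? mx : bounds.maxX, (i & 2) == 0 ? my : bounds.maxY,
                        (i & 4) == 0 ? mz : bounds.maxZ), depth + 1);
                }
                List<Entity> pending = new ArrayList<>(objects);
                objects.clear();
                for (Entity e : pending) insert(e); // redistribute into children
            }
        }

    On the first question, both strategies exist in practice: small scenes often rebuild the whole tree each frame because insertion is cheap, while persistent trees keep the structure in memory and move objects between nodes as they travel, at the cost of extra bookkeeping.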

    Read the article

  • Visual WebGui's XAML based programming for web developers

    - by Webgui
    While ASP.NET provides an event-based approach, it is completely dismissed when working with AJAX: the richness of the server is lost and replaced with JavaScript programming, coupled with a very high security risk. Visual WebGui reinstates the power of the server in AJAX development and provides a stateful yet scalable, server-centric architecture that delivers the benefits and user productivity of AJAX with the security and developer productivity we had before AJAX stormed into our lives. "When I first came up with the concept of Visual WebGui, I was frustrated by the fragile and complex nature of developing web applications. The contrast in productivity between working in a fully OOP compiled environment vs. scripting, even today with JQuery, Dojo and such, is still huge. Even today the greatest sponsor of JavaScript programming, Google, is offering a framework to avoid JavaScript using Java that compiles to JavaScript (GWT). So I decided to find a way to abstract the complexity, or rather delegate the complex job, to enable developers to concentrate on the 'What' instead of the 'How', and embraced the Form-based approach," said Guy Peled, the inventor of Visual WebGui. Although traditional OOP development still rules the enterprise, the differences between web sites and web applications have blurred, and so have the differences between classic developers and web developers. As a result, we now see declarative languages in desktop/backend development environments (WPF/WF) and we see OOP gaining more and more power in web development (ASP.NET MVC / ASP.NET DOM). However, what has not changed is the enterprise need for security, development ROI, reach, highly responsive and interactive UIs, and scalability. The advantages that declarative languages and 'on demand' compilation provide over classic development are mostly the flexibility and a more readable initialize component, which is what Gizmox is aspiring to deliver by replacing the designer initialize component with XAML code. The code in this new project template will be compiled on demand using the build-provider mechanism ASP.NET has. This means that the performance hit is only on the first request; after that, the performance is the same as for a prebuilt solution. This allows the flexibility of dynamically updated sites and the power of fully blown enterprise applications over the web. You can also use prebuilt features available in ASP.NET to enjoy both worlds in production. The VWG XAML implementation (VWG Sites) will be the first truly compilable XAML implementation, as Microsoft implemented Silverlight and WPF as runtime markup interpretation, as opposed to the ASP.NET markup implementation, which is compiled to CLR code once. We have chosen to implement the VWG Sites parser as a different way to create CLR code that provides greater performance over the reflection alternative. VWG Sites will also be the first server-side XAML UI engine which, while giving the power of XAML, will not require any plug-ins or installations on the client side. Short demo video of VWG Sites markup. There is also a live sample available here.

    Read the article

  • Have you used the ExecutionValue and ExecValueVariable properties?

    The ExecutionValue property and its friend ExecValueVariable are a much undervalued feature of SSIS, and many people I talk to are not even aware of their existence, so I thought I'd try and raise their profile a bit. The ExecutionValue property is defined on the base object Task, so all tasks have it available, but it is up to the task developer to do something useful with it. The basic idea behind it is that it allows the task to return something useful and interesting about what it has performed, in addition to the standard success or failure result. The best example perhaps is the Execute SQL Task, which uses the ExecutionValue property to return the number of rows affected by the SQL statement(s). This is a very useful feature, something people often want to capture into a variable, and they start using the result set options to do it. Unfortunately we cannot read the value of a task property at runtime from within a SSIS package, so the ExecutionValue property on its own is a bit of a let-down; but enter the ExecValueVariable and we have the perfect marriage. The ExecValueVariable is another property exposed through the task (TaskHost), which lets us select a SSIS package variable. What happens now is that when the task sets the ExecutionValue, the interesting value is copied into the variable we set on the ExecValueVariable property, and a variable is something we can access and do something with. So, put simply, if the ExecutionValue property value is of interest, make sure you create yourself a package variable and set its name as the ExecValueVariable. Have a look at the 3-step guide below:

    1. Configure your task as normal; for example, the Execute SQL Task, which here calls a stored procedure to do some updates.
    2. Create a variable of a suitable type to match the ExecutionValue; an integer is used to match the result we want to capture, the number of rows.
    3. Set the ExecValueVariable for the task; just select the variable we created in step 2. You need to do this in the Properties grid for the task (shortcut: select the task and press F4).

    Now when we execute the sample task above, our variable UpdateQueueRowCount will get the number of rows we updated in our Execute SQL Task. I've tried to collate a list of tasks that return something useful via the ExecutionValue and ExecValueVariable mechanism, but the documentation isn't always great:

    - Execute SQL Task: returns the number of rows affected by the SQL statement or statements.
    - File System Task: returns the number of successful operations performed by the task.
    - File Watcher Task: returns the full path of the file found.
    - Transfer Error Messages Task: returns the number of error messages that have been transferred.
    - Transfer Jobs Task: returns the number of jobs that are transferred.
    - Transfer Logins Task: returns the number of logins transferred.
    - Transfer Master Stored Procedures Task: returns the number of stored procedures transferred.
    - Transfer SQL Server Objects Task: returns the number of objects transferred.
    - WMI Data Reader Task: returns an object that contains the results of the task. Not exactly clear, but I assume it depends on the WMI query used.

    Read the article

  • Converting .docx to pdf (or .doc to pdf, or .doc to odt, etc.) with libreoffice on a webserver on the fly using php

    - by robertphyatt
    Ok, so I needed to convert .docx files to .pdf files on the fly, but none of the free PHP libraries that were available let me do it on my server (a web service was not good enough). Basically, either I needed to pay for a library (and have it maybe suck) or just deal with the free ones that didn't convert the formatting well enough. Not good enough! I found that LibreOffice (OpenOffice's successor) allows command-line conversion using the LibreOffice conversion engine (which DID preserve the formatting like I wanted and generally worked great). I loaded the latest version of Ubuntu (http://www.ubuntu.com/download/ubuntu/download) onto my VirtualBox (https://www.virtualbox.org/wiki/Downloads) on my computer and found that I was able to easily convert files using the command line like this:

        libreoffice --headless -convert-to pdf fileToConvert.docx -outdir output/path/for/pdf

    I thought: sweet... but I don't have admin rights on my host's web server. I tried to use a "portable" version of LibreOffice that I obtained from http://portablelinuxapps.org/ but I was unable to get it to work on my host's web server, because that server didn't have all the dependencies (dependency hell! http://en.wikipedia.org/wiki/Dependency_hell). I was at a loss for how to make it work, until I ran across a cool project made by a Ph.D. student (Philip J. Guo) at Stanford called CDE: http://www.stanford.edu/~pgbovine/cde.html. I will let you look at his explanations of how it works (I followed what he did in http://www.youtube.com/watch?feature=player_embedded&v=6XdwHo1BWwY, starting at about 32:00, as well as the directions on his site), but in short, it allows one to avoid dependency hell by copying all the files used when you run certain commands, recreating the Linux environment where the command worked. I was able to use this to run LibreOffice without having to resort to someone's portable version of it, and it worked just like it did on Ubuntu with the command above, with a tweak: I needed to run the wrapper of LibreOffice that CDE generated.

    So, below is my PHP code that calls it. In this code snippet, the filename to be converted is passed in as $_POST["filename"]. I copy the file to the same spot where I originally converted the file, convert it, copy it back and then delete all the files (so that it doesn't start growing exponentially). I did it this way because I wasn't able to make it work otherwise on the web server. If there is a Linux + web server ninja out there who can figure out how to make it work without doing this, I would be interested to know what you did. Please post a comment or something if you did that.

        <?php
        // First copy the file to the magic place where we can convert it to a pdf on the fly
        copy($time.$_POST["filename"],
             "../LibreOffice/cde-package/cde-root/home/robert/Desktop/".$_POST["filename"]);

        // Change to that directory
        chdir('../LibreOffice/cde-package/cde-root/home/robert');

        // The magic command that does the conversion
        $myCommand = "./libreoffice.cde --headless -convert-to pdf Desktop/"
                   . $_POST["filename"]." -outdir Desktop/";
        exec($myCommand);

        // Copy the file back
        copy("Desktop/".str_replace(".docx", ".pdf", $_POST["filename"]),
             "../../../../../documents/".str_replace(".docx", ".pdf", $_POST["filename"]));

        // Delete all the files out of the magic place where we convert on the fly
        $files1 = scandir('Desktop');
        // My files that I generated all happened to start with a number.
        $pattern = '/^[0-9]/';
        foreach ($files1 as $value) {
            preg_match($pattern, $value, $matches);
            if (count($matches) > 0) {
                unlink("Desktop/".$value);
            }
        }

        // Changing the header to the location of the file makes it work well on Androids
        header('Location: '.str_replace(".docx", ".pdf", $_POST["filename"]));
        ?>

    And here is the tar.gz file I generated with CDE. To duplicate what I did exactly, put the tar.gz file in a folder somewhere; I will call that folder the "root". Make a new folder called "documents" in the "root" folder. Unpack the tar.gz and run the PHP script above from the "documents" folder. Success! I made a truly portable version of LibreOffice that can convert files on the fly on a web server using 100% free, open source software!

    Read the article

  • Why does creating dynamic bodies in JBox2D freeze my app?

    - by Amplify91
    My game hangs/freezes when I create dynamic bullet objects with Box2D and I don't know why. I am making a game where the main character can shoot bullets by the user tapping on the screen. Each touch event spawns a new FireProjectileEvent that is handled properly by an event queue, so I know my problem is not trying to create a new body while the Box2D world is locked. My bullets are then created and managed by an object pool class like this:

        public Projectile getProjectile() {
            for (int i = 0; i < mProjectiles.size(); i++) {
                if (!mProjectiles.get(i).isActive) {
                    return mProjectiles.get(i);
                }
            }
            return mSpriteFactory.createProjectile();
        }

    mSpriteFactory.createProjectile() leads to the physics component of the Projectile class creating its Box2D body. I have narrowed the issue down to this method, which looks like this:

        public void create(World world, float x, float y, Vec2 vertices[], boolean dynamic) {
            BodyDef bodyDef = new BodyDef();
            if (dynamic) {
                bodyDef.type = BodyType.DYNAMIC;
            } else {
                bodyDef.type = BodyType.STATIC;
            }
            bodyDef.position.set(x, y);
            mBody = world.createBody(bodyDef);
            PolygonShape dynamicBox = new PolygonShape();
            dynamicBox.set(vertices, vertices.length);
            FixtureDef fixtureDef = new FixtureDef();
            fixtureDef.shape = dynamicBox;
            fixtureDef.density = 1.0f;
            fixtureDef.friction = 0.0f;
            mBody.createFixture(fixtureDef);
            mBody.setFixedRotation(true);
        }

    If the dynamic parameter is set to true, my game freezes before crashing, but if it is false, it will create a projectile exactly how I want it; it just doesn't function properly (because a projectile is not a static object). Why does my program fail when I try to create a dynamic object at runtime but not when I create a static one? I have other dynamic objects (like my main character) that work fine. Any help would be greatly appreciated. This is a screenshot of a method profile I did; especially notable is number 8. I'm just still unsure what I'm doing wrong. Other notes: I am using JBox2D 2.1.2.2 (upgraded from 2.1.2.1 to try to fix this problem). When the application freezes, if I hit the back button, it appears to move my game backwards by one update tick. Very strange.
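    Not a diagnosis, but a pattern worth trying: even with an event queue in place, a body can still be created indirectly while world.step() is running (for example, from a contact callback or a render-thread race). A common defensive pattern with (J)Box2D is to funnel all creation through a factory that defers requests while the world is locked and flushes them after the step. A minimal sketch, assuming JBox2D 2.1.x (the BodyFactory name is hypothetical):

        import java.util.ArrayDeque;
        import java.util.Queue;

        import org.jbox2d.dynamics.BodyDef;
        import org.jbox2d.dynamics.World;

        public class BodyFactory {
            private final World world;
            private final Queue<BodyDef> pending = new ArrayDeque<BodyDef>();

            public BodyFactory(World world) {
                this.world = world;
            }

            /** Safe to call at any time, even during callbacks raised by world.step(). */
            public void requestBody(BodyDef def) {
                if (world.isLocked()) {
                    pending.add(def); // defer: the world is mid-step
                } else {
                    world.createBody(def);
                }
            }

            /** Call once per frame, immediately after world.step(...). */
            public void flushPending() {
                BodyDef def;
                while ((def = pending.poll()) != null) {
                    world.createBody(def);
                }
            }
        }

    If the freeze persists even with deferred creation, the pool is worth a second look: getProjectile() only reuses projectiles whose isActive flag has been cleared, so if bullets are never deactivated, every tap creates a brand-new body.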

    Read the article

  • Java Embedded Releases

    - by Tori Wieldt
Oracle today announced a new product in its Java Platform, Micro Edition (Java ME) portfolio: Oracle Java ME Embedded 3.2, a complete client Java runtime optimized for resource-constrained, connected, embedded systems. Oracle is also releasing Oracle Java Wireless Client 3.2 and Oracle Java ME Software Development Kit (SDK) 3.2, and has announced Oracle Java Embedded Suite 7.0 for larger embedded devices, providing a middleware stack for embedded systems. Small is the new big! Introducing Oracle Java ME Embedded 3.2: Oracle Java ME Embedded 3.2 is designed and optimized to meet the unique requirements of small, low-power embedded devices such as microcontrollers and other resource-constrained hardware without screens or user interfaces. These include: on-the-fly application downloads and updates; remote operation, often in challenging environments; the ability to add new capabilities without impacting existing functions; and support for hardware with as little as 130 kB RAM and 350 kB ROM. Oracle Java Wireless Client 3.2 is built around an optimized Java ME implementation that delivers a feature-rich application environment for mass-market mobile devices. This new release: leverages standard JSRs, Oracle optimizations/APIs, and a flexible porting layer for device-specific customizations tuned to device/chipset requirements; supports advanced tooling functions, such as memory and network monitoring and on-device tooling; and offers new support for dual-SIM functionality, which is highly useful for mass-market devices supported by multiple carriers with multiple phone connections. Oracle Java ME SDK 3.2 provides a complete development environment for both Oracle Java ME Embedded 3.2 and Oracle Java Wireless Client 3.2. Available for download from OTN, the latest version includes: small embedded device support; in-field and remote administration and debugging; Java ME SDK plug-ins for Eclipse and the NetBeans Integrated Development Environment (IDE), enabling more application development environments for Java ME developers; and a new device skin creator that developers can use to generate custom device skins for testing their applications. Oracle Java Embedded Suite 7.0 is a new packaged solution from Oracle (including Java DB, GlassFish for Embedded Suite, the Jersey Web Services Framework, and the Oracle Java SE Embedded 7 platform), created to provide value-added services for collecting, managing, and transmitting data to and from other embedded devices. It is a complete device-to-data-center solution for embedded systems. See Java ME and Java Embedded in action: Java ME and Java Embedded technologies will be showcased for developers at JavaOne 2012 in over 60 conference sessions and BOFs, as well as in the JavaOne Exhibition Hall. For business decision makers, the new Java Embedded @ JavaOne event offers a chance to learn more about Java Embedded technologies and solutions.

    Read the article

  • Java JRE 1.6.0_35 Certified with Oracle E-Business Suite

    - by Steven Chan (Oracle Development)
The latest Java Runtime Environment 1.6.0_35 (a.k.a. JRE 6u35-b10) is now certified with Oracle E-Business Suite Release 11i and 12 desktop clients. What's new in Java 1.6.0_35? See the 1.6.0_35 Update Release Notes for details about what has changed in this release. It is available for download from the usual Sun channels and through the 'Java Automatic Update' mechanism. 32-bit and 64-bit versions certified: this certification includes both the 32-bit and 64-bit JRE versions. 32-bit JREs are certified on Windows XP Service Pack 3 (SP3), Windows Vista Service Pack 1 (SP1) and Service Pack 2 (SP2), and Windows 7 and Windows 7 Service Pack 1 (SP1). 64-bit JREs are certified only on 64-bit versions of Windows 7 and Windows 7 Service Pack 1 (SP1). Worried about the 'mismanaged session cookie' issue? No need to worry -- it's fixed. To recap: JRE releases 1.6.0_18 through 1.6.0_22 had issues with mismanaging session cookies that affected some users in some circumstances. The fix for those issues was first included in JRE 1.6.0_23 and carries forward into all subsequent JRE releases. In other words, if you wish to avoid the mismanaged session cookie issue, apply any release after JRE 1.6.0_22. All JRE 1.6 releases are certified with EBS upon release: our standard policy is that all E-Business Suite customers can apply all JRE updates to end-user desktops, from JRE 1.6.0_03 and later on the 1.6 codeline. We test all new JRE 1.6 releases in parallel with the JRE development process, so all new JRE 1.6 releases are considered certified with the E-Business Suite on the same day they're released by our Java team. You do not need to wait for a certification announcement before applying new JRE 1.6 releases to your EBS users' desktops. Important: for guidance about the impact of the JRE Auto Update feature on JRE 1.6 desktops, see: URGENT BULLETIN: All E-Business Suite End-Users Must Manually Apply JRE 6 Updates. References: Recommended Browsers for Oracle Applications 11i (MetaLink Note 285218.1); Upgrading Sun JRE (Native Plug-in) with Oracle Applications 11i for Windows Clients (MetaLink Note 290807.1); Recommended Browsers for Oracle Applications 12 (MetaLink Note 389422.1); Upgrading JRE Plugin with Oracle Applications R12 (MetaLink Note 393931.1). Related articles: Mismanaged Session Cookie Issue Fixed for EBS in JRE 1.6.0_23; Roundup: Oracle JInitiator 1.3 Desupported for EBS Customers in July 2009.

    Read the article

  • Manic Monday - More OpenWorld Solaris Sessions: Developers, Cloud, Customer Insights, Hardware Optimization

    - by Larry Wake
We're overflowing with Monday sessions; there are literally more than any one person can take in. Learn more about what's new in Oracle Solaris Studio, hear about the latest x86 and SPARC hardware optimizations, get some insights on cloud deployment strategies, and find out from your peers what they're doing with Oracle Solaris. If you're an OpenWorld attendee, go to Schedule Builder to guarantee your space in any session or lab. See yesterday's blog post and the "Focus on Oracle Solaris" guide for even more sessions. Monday, October 1st: 10:45 AM - Maximizing Your SPARC T4 Oracle Solaris Application Performance (CON6382, Marriott Marquis - Golden Gate C3). Hear how customers and commercial software partners have reached peak performance on SPARC T4 servers and engineered systems with Oracle Solaris Studio and its latest tools for analyzing, reporting, and improving runtime performance: autoparallelizing, high-performance compilers; Performance Analyzer (used to find performance hotspots); Thread Analyzer (to expose data races and deadlocks); and Code Analyzer (used to discover latent memory corruption issues). 10:45 AM - Cloud Formation: Implementing IaaS in Practice with Oracle Solaris (CON8787, Moscone South 302). Decisions, decisions--in the same time slot, we've got a session that covers why Oracle Solaris is the ideal OS for public or private clouds, IaaS or PaaS, with built-in features for elastic infrastructure, unrivaled security, superfast installation and deployment, nonstop availability, and crystal-clear observability. This session will include a customer study on how Oracle Solaris is used in the cloud today to implement the Oracle stack. 12:15 PM - Customer Insight: Oracle Solaris on Oracle Exadata, Oracle Exalogic, and SPARC SuperCluster (CON8760, Moscone South 270). Hear from customers what benefits they have realized from using the Oracle stack on Oracle Exadata and Oracle's SPARC SuperCluster, and from using Oracle Solaris on those engineered systems, taking advantage of built-in lightweight OS virtualization (Zones), enterprise reliability and scale, and other key features. 1:45 PM - Case Study: Mobile Tornado Uses Oracle Technology for Better RAS and TCO? (CON4281, Moscone West 2005). Mobile Tornado develops and markets instant communication platforms, replacing traditional radio networks with cellular networks. Its critical concern is uptime. Find out how they've used Oracle Solaris, Netra SPARC T4, and Oracle Solaris Cluster, including Oracle Solaris ZFS and Zones, for their Oracle Database deployments to improve reliability and drive down cost. 3:15 PM - Technical Panel: Developing High Performance Applications on Oracle Solaris (CON7196, Marriott Marquis - Golden Gate C2). Engineers from the Oracle Solaris, Oracle Database, and Oracle Tuxedo development teams, and Oracle ISV Engineering, discuss how they develop high-performance enterprise applications that take advantage of Oracle's SPARC and x86 servers, with Oracle Solaris Studio and new Oracle Solaris 11 features. Topics will include developer tools, parallel frameworks, best practices, and methodologies, as well as insights and case studies on parallelizing and optimizing application performance on Oracle Solaris. Bring your best questions! 3:15 PM - x86 Power Management with Oracle Solaris: Current State, Opportunities, and Future (CON6271, Moscone West 2012). Another option for this time slot: learn how Intel Xeon and Oracle Solaris work together to reduce server power consumption. This presentation addresses some of the recent power management improvements in Oracle Solaris, opportunities to further improve energy efficiency, and some future directions for Oracle Solaris power management.

    Read the article

  • Data Transformation Pipeline

    - by davenewza
I have created a data pipeline of sorts to transform coordinate data into more useful information. Here is the shell of the pipeline:

    public class PositionPipeline {
        protected List<IPipelineComponent> components;

        public PositionPipeline() {
            components = new List<IPipelineComponent>();
        }

        public PositionPipelineEntity Process(PositionPipelineEntity position) {
            foreach (var component in components) {
                position = component.Execute(position);
            }
            return position;
        }

        public PositionPipeline RegisterComponent(IPipelineComponent component) {
            components.Add(component);
            return this;
        }
    }

Every IPipelineComponent accepts and returns the same type, a PositionPipelineEntity:

    public interface IPipelineComponent {
        PositionPipelineEntity Execute(PositionPipelineEntity position);
    }

The PositionPipelineEntity needs to have many properties, many of which are unused in certain components and required in others. Some properties will also have become redundant by the end of the pipeline. For example, these components could be executed: TransformCoordinatesComponent (parses the raw coordinate data into a Coordinate type); DetermineCountryComponent (determines and stores the country code); DetermineOnRoadComponent (determines and stores whether the coordinate is on a road).

    pipeline
        .RegisterComponent(new TransformCoordinatesComponent())
        .RegisterComponent(new DetermineCountryComponent())
        .RegisterComponent(new DetermineOnRoadComponent());

    pipeline.Process(positionPipelineEntity);

The PositionPipelineEntity type:

    public class PositionPipelineEntity {
        // Only relevant to the TransformCoordinatesComponent
        public decimal RawCoordinateLatitude { get; set; }

        // Only relevant to the TransformCoordinatesComponent
        public decimal RawCoordinateLongitude { get; set; }

        // Required by all components after TransformCoordinatesComponent
        public Coordinate CoordinateLatitude { get; set; }

        // Required by all components after TransformCoordinatesComponent
        public Coordinate CoordinateLongitude { get; set; }

        // Set in DetermineCountryComponent, not required anywhere.
        // Requires CoordinateLatitude and CoordinateLongitude (TransformCoordinatesComponent)
        public string CountryCode { get; set; }

        // Set in DetermineOnRoadComponent, not required anywhere.
        // Requires CoordinateLatitude and CoordinateLongitude (TransformCoordinatesComponent)
        public bool OnRoad { get; set; }
    }

Problems: I'm very concerned about the dependencies that components have on specific properties. The way to solve this would be to create specific types for each component, but then I could no longer chain them together like this. The other problem is that the order of components in the pipeline matters; there are dependencies between them, and the current structure provides no static or runtime checking of the ordering. Any feedback would be appreciated.
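One way to address both concerns is to give each stage its own input and output types, so the compiler rejects a mis-ordered chain. Below is a minimal sketch of that idea; it is in Java rather than the C# above, and the stage and record names in the usage comment are illustrative, not taken from the original design:

    import java.util.function.Function;

    public final class Pipeline<I, O> {
        private final Function<I, O> fn;

        private Pipeline(Function<I, O> fn) {
            this.fn = fn;
        }

        // Start an identity pipeline for some input type.
        public static <T> Pipeline<T, T> start() {
            return new Pipeline<>(Function.identity());
        }

        // Append a stage; the compiler rejects stages whose input type
        // does not match the current output type, so ordering is checked statically.
        public <R> Pipeline<I, R> then(Function<O, R> stage) {
            return new Pipeline<>(fn.andThen(stage));
        }

        public O process(I input) {
            return fn.apply(input);
        }
    }

    // Illustrative usage with hypothetical stage types:
    //
    // record RawPosition(double lat, double lon) {}
    // record Coordinates(Coordinate lat, Coordinate lon) {}
    // record Located(Coordinates c, String countryCode, boolean onRoad) {}
    //
    // Pipeline<RawPosition, Located> pipeline =
    //     Pipeline.<RawPosition>start()
    //             .then(transformCoordinates)  // RawPosition -> Coordinates
    //             .then(determineCountry)      // Coordinates -> Located
    //             .then(determineOnRoad);      // Located -> Located

The trade-off is that each stage now names exactly the data it needs and produces, so property dependencies become explicit in the types instead of implicit in comments, at the cost of a few small carrier types.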

    Read the article

< Previous Page | 209 210 211 212 213 214 215 216 217 218 219 220  | Next Page >