Search Results

Search found 19055 results on 763 pages for 'high performance'.


  • Advanced Oracle SOA Suite Oracle Open World 2012 SOA Presentations

    - by JuergenKress
    The list below only includes SOA presentations delivered or moderated by Oracle SOA Product Management. For a complete list of Oracle Open World 2012 presentations, please go here.
    - Oracle SOA Suite, the Most Capable Tool for Every Possible Integration Challenge
    - Using the Right Tools, Techniques, and Technologies for Integration Projects
    - Administration and Management Essentials for Oracle SOA Suite 11g
    - Extreme Performance and Scale Delivered by SOA on Oracle Exalogic
    - Successful Application Integration and SOA Projects: Customer Panel
    - How to Integrate Cloud Applications with Oracle SOA Suite
    - Transforming the Utilities Industry with Oracle Fusion Middleware
    - Cloud and On-Premises Applications Integration, Using Oracle Integration Adapters
    - Delivering High Value B2B Gateways with Oracle SOA Suite 11g
    - Implementing Successful Healthcare Applications with Oracle SOA Suite
    - Migrating to Oracle SOA Suite: A Sun Java CAPS Customer Experience
    - If Mobile Enablement Is on Your Mind, Oracle SOA Suite and Oracle Service Bus Can Help
    - Building Shared Services Infrastructure with Oracle Service Bus: Customer Panel
    SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration, please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center. Blog | Twitter | LinkedIn | Mix | Forum. Technorati Tags: OOW,OOW presentations,OOW soa ppt,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • Desktop Fun: Beaches Theme Wallpapers

    - by Asian Angel
    Is your vacation still a few weeks or months away? Make the waiting a little easier by adding some of that vacation scenery to your desktop with our Beaches Theme Wallpapers collection. Note: Click on the picture to see the full-size image—these wallpapers vary in size so you may need to crop, stretch, or place them on a colored background in order to best match them to your screen’s resolution. For more fun wallpapers be certain to visit our new Desktop Fun section.

    Read the article

  • SQL SERVER – Copy Column Headers from Resultset – SQL in Sixty Seconds #027 – Video

    - by pinaldave
    SQL Server Management Studio returns results in Grid View, Text View, or to a file. When we copy results from Grid View to Excel, there is a common complaint that the column headers displayed in the resultset are not copied to Excel. I often spend time performance tuning databases, and I run many DMVs in SSMS to get a quick view of the server. In my case, I almost always need the column headers when I copy my data to Excel or anywhere else. SQL Server Management Studio has two different ways to do this. Method 1: Ad hoc. When the result is rendered, you can right-click on the resultset and click Copy Header. This will copy the headers along with the resultset. Additionally, you can use the shortcut key CTRL+SHIFT+C to copy column headers along with the resultset. Method 2: Option setting at the SSMS level. This is an SSMS-level setting, and I keep this option always selected as I often need the column headers when I select a resultset. Go to Tools >> Options >> Query Results >> SQL Server >> Results to Grid >> check the box "Include column header when copying or saving the results." Both of the methods are discussed in the following SQL in Sixty Seconds video. Here is the code used in the video. Related Tips in SQL in Sixty Seconds: Copy Column Headers in Query Analyzers in Result Set; Getting Columns Headers without Result Data – SET FMTONLY ON. If we like your idea we promise to share educational material with you. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video

    Read the article

  • PASS 13 Dispatches: Memory Optimized = On

    - by Tony Davis
    I'm at the PASS Summit in Charlotte for the Day 1 keynote by Quentin Clarke, Corporate VP of the data platform group at Microsoft. He's talking about how SQL Server 2014 is “pushing boundaries”, and first up is SQL Server 2014's In-Memory OLTP technology (formerly code-named “Hekaton”). It is a feature that provokes a lot of interest, and for good reason: without any need for application rewrites or hardware updates, it can enable an application to find most or all of the data it needs in memory, and can lead to huge improvements in processing times. A good recent Hekaton use-case article talks about applications that need a “shock absorber” when either spikes or simply a high rate of incoming workload (including data in ETL scenarios) becomes the primary bottleneck. To get a really deep look at this technology, I would check out David DeWitt's summit keynote tomorrow (it will be live streamed). Other than that, to get started I'd recommend Kalen Delaney's whitepaper. She offers a lot of insight into how it works and how to start to define memory-optimized tables and natively compiled stored procedures. These memory-optimized tables use completely optimistic multi-version concurrency control – no waiting on locks! After that, Tom LaRock has compiled a useful set of links to drill deeper, including one to Microsoft's AMR tool to help you gauge the tables that might benefit most. Tony.

    Read the article

  • Oracle Enterprise Manager content at Collaborate 12 - the only user-driven and user-run Oracle conference

    - by Anand Akela
    From April 22-26, 2012, Oracle takes Las Vegas. Thousands of Oracle professionals will descend upon the Mandalay Bay Convention Center for a week's worth of education sessions, networking opportunities and more, at the only user-driven and user-run Oracle conference - COLLABORATE 12. This is one of the best opportunities for you to learn more about Oracle technology, including Oracle Enterprise Manager. Here is a summary of an impressive line-up of Oracle Enterprise Manager related content at COLLABORATE 12.
    Customer Presentations:
    - Stability in Real World with SQL Plan Management
    - Upgrading to Oracle Enterprise Manager 12c - Best Practices
    - Making OEM Sing and Dance with EMCLI
    - Oracle Real Application Testing: A look under the hood
    - Optimizing Oracle E-Business Suite on Exadata
    - Experiences with OracleVM 3 and Grid Control in an Oracle BIEE environment
    - Right Cloud -- How to Avoid the False Cloud by using Oracle Technologies
    - Forgetting something? Standardize your database monitoring environment with Enterprise Manager 11g
    - Implementing E-Business Suite R12 in a Federal Cloud - Lessons Learned
    - Cloud Computing Boot Camp: New DBA Features in Oracle Enterprise Manager Cloud Control 12c
    - Oracle Enterprise Manager 12c: What's Changed, What's New?
    - Monitoring a WebCenter Content Deployment with Enterprise Manager
    - Enterprise Manager 12c Cloud Control: New Features and Best Practices (for IOUG registrants only)
    Oracle Presentations:
    - Roadmap Session: Total Cloud Control with Oracle Enterprise Manager 12c
    - Real World Performance (complimentary for IOUG registrants only)
    - Database-as-a-Service: Enterprise Cloud in Three Simple Steps
    - Bullet-proof Your Enterprise, SOA & Cloud Investments Using Oracle Enterprise Gateway
    - What's New for Oracle WebLogic Management: Capabilities that Scripting Cannot Provide
    - Exadata Boot Camp: Complete Oracle Exadata Management with Oracle Enterprise Manager
    Stay connected with Oracle Enterprise Manager: Twitter | Facebook | YouTube | Linkedin | Newsletter

    Read the article

  • On contract work, obligations to said contract, and looking out for yourself…

    - by jlnorsworthy
    Without boring you all with details, my last two contract assignments were cut short; I was given 3 days' notice on one, and 4 weeks' notice on the other. Neither of these was due to performance – they both basically came down to budget issues. On my second contract, I got the feeling that it may not have been a great place to stay for the duration of my contract. Because of the money and time spent getting me in the door, and the possible negative effect on my employer/recruiter, I decided to stay at least for a few months (and start looking several weeks before the end of my supposedly “extendable” contract). These experiences have left me a little wary of contract work. It seems that if I land a bad contract, my recruiter would take a hit (reputation or otherwise) if I quickly found another job. But on the other hand, the client company won’t think twice about ending the contract early for any reason. I know that the counter-argument to this is “maybe your recruiter shouldn’t have put you into a crappy assignment”… either way, it seems that since I am relying on him to provide me with work, I should try not to damage his reputation with client companies. I’m basically brand new to contracting (these were my first two contracts), so these concerns are new to me. TLDR: Is contract work, by its very nature, largely unstable? Am I worried too much about my recruiter? Should I be quicker to start looking for a new job even after just weeks at a new company (when the environment seems unstable)? If so, do I look through my recruiter or just find another position by any means necessary?

    Read the article

  • The fastest way to resize images from ASP.NET. And it’s (more) supported-ish.

    - by Bertrand Le Roy
    I’ve shown before how to resize images using GDI, which is fairly common but is explicitly unsupported because we know of very real problems that this can cause. Still, many sites use that method because those problems are fairly rare, and because most people assume it’s the only way to get the job done. Plus, it works in medium trust. More recently, I’ve shown how you can use WPF APIs to do the same thing and get JPEG thumbnails, only 2.5 times faster than GDI (even now that GDI really ultimately uses WIC to read and write images). The boost in performance is great, but it comes at a cost that you may or may not care about: it won’t work in medium trust. It’s also just as unsupported as the GDI option. What I want to show today is how to use the Windows Imaging Component (WIC) APIs directly from ASP.NET, without going through WPF. The approach has the great advantage that it’s been tested and proven to scale very well. The WIC team tells me you should be able to call support and get answers if you hit problems. Caveats exist though. First, this is using interop, so until a signed wrapper sits in the GAC, it will require full trust. Second, the APIs have a very strong smell of native code and are definitely not .NET-friendly. And finally, the most serious problem is that older versions of Windows don’t offer MTA support for image decoding. MTA support is only available on Windows 7, Vista and Windows Server 2008. But on 2003 and XP, you’ll only get STA support. That means that the thread safety that we so badly need for server applications is not guaranteed on those operating systems. To make it work, you’d have to spin up specialized threads yourself and manage the lifetime of your objects, which is outside the scope of this article. We’ll assume that we’re fine with all this and that we’re running on 7 or 2008 under full trust. Be warned that the code that follows is not simple or very readable. This is definitely not the easiest way to resize an image in .NET. Wrapping native APIs such as WIC in a managed wrapper is never easy, but fortunately we won’t have to: the WIC team already did it for us and released the results under MS-PL. The InteropServices folder, which contains the wrappers we need, is in the WicCop project, but I’ve also included it in the sample that you can download from the link at the end of the article. In order to produce a thumbnail, we first have to obtain a decoding frame object that WIC can use. Like with WPF, that object will contain the command to decode a frame from the source image but won’t do the actual decoding until necessary. Getting the frame is done by reading the image bytes through a special WIC stream that you can obtain from a factory object that we’re going to reuse for lots of other tasks:
      var photo = File.ReadAllBytes(photoPath);
      var factory = (IWICComponentFactory)new WICImagingFactory();
      var inputStream = factory.CreateStream();
      inputStream.InitializeFromMemory(photo, (uint)photo.Length);
      var decoder = factory.CreateDecoderFromStream(inputStream, null, WICDecodeOptions.WICDecodeMetadataCacheOnLoad);
      var frame = decoder.GetFrame(0);
    We can read the dimensions of the frame using the following (somewhat ugly) code:
      uint width, height;
      frame.GetSize(out width, out height);
    This enables us to compute the dimensions of the thumbnail, as I’ve shown in previous articles. We now need to prepare the output stream for the thumbnail.
    WIC requires a special kind of stream, IStream (not implemented by System.IO.Stream), and doesn’t directly understand .NET streams. It does provide a number of implementations, but not exactly what we need here. We need to output to memory because we’ll want to persist the same bytes to the response stream and to a local file for caching. The memory-bound version of IStream requires a fixed-length buffer, but we won’t know the length of the buffer before we resize. To solve that problem, I’ve built a class derived from MemoryStream that also implements IStream. The implementation is not very complicated; it just delegates the IStream methods to the base class, but it involves some native pointer manipulation. Once we have a stream, we need to build the encoder for the output format, which could be anything that WIC supports. For web thumbnails, our only reasonable options are PNG and JPEG. I explored PNG because it’s a lossless format, and because WIC does support PNG compression. That compression is not very efficient though, and JPEG offers good quality with much smaller file sizes. On the web, it matters. I found the best PNG compression option (adaptive) to give files that are about twice as big as 100%-quality JPEG (an absurd setting), 4.5 times bigger than 95%-quality JPEG, and 7 times larger than 85%-quality JPEG, which is more than acceptable quality. As a consequence, we’ll use JPEG. The JPEG encoder can be prepared as follows:
      var encoder = factory.CreateEncoder(Consts.GUID_ContainerFormatJpeg, null);
      encoder.Initialize(outputStream, WICBitmapEncoderCacheOption.WICBitmapEncoderNoCache);
    The next operation is to create the output frame:
      IWICBitmapFrameEncode outputFrame;
      var arg = new IPropertyBag2[1];
      encoder.CreateNewFrame(out outputFrame, arg);
    Notice that we are passing in a property bag. This is where we’re going to specify our only parameter for encoding, the JPEG quality setting:
      var propBag = arg[0];
      var propertyBagOption = new PROPBAG2[1];
      propertyBagOption[0].pstrName = "ImageQuality";
      propBag.Write(1, propertyBagOption, new object[] { 0.85F });
      outputFrame.Initialize(propBag);
    We can then set the resolution for the thumbnail to be 96, something we weren’t able to do with WPF and had to hack around:
      outputFrame.SetResolution(96, 96);
    Next, we set the size of the output frame and create a scaler from the input frame and the computed dimensions of the target thumbnail:
      outputFrame.SetSize(thumbWidth, thumbHeight);
      var scaler = factory.CreateBitmapScaler();
      scaler.Initialize(frame, thumbWidth, thumbHeight, WICBitmapInterpolationMode.WICBitmapInterpolationModeFant);
    The scaler is using the Fant method, which I think is the best-looking one, even if it seems a little softer than cubic (zoomed here to better show the defects; the comparison crops showed Cubic, Fant, Linear, and Nearest neighbor). We can write the source image to the output frame through the scaler:
      outputFrame.WriteSource(scaler, new WICRect { X = 0, Y = 0, Width = (int)thumbWidth, Height = (int)thumbHeight });
    And finally we commit the pipeline that we built and get the byte array for the thumbnail out of our memory stream:
      outputFrame.Commit();
      encoder.Commit();
      var outputArray = outputStream.ToArray();
      outputStream.Close();
    That byte array can then be sent to the output stream and to the cache file. Once we’ve gone through this exercise, it’s only natural to wonder whether it was worth the trouble.
    I ran this method, as well as GDI and WPF resizing, over thirty twelve-megapixel images for JPEG qualities between 70% and 100%, and measured the file size and time to resize. Here are the results (charts: size of resized images; time to resize thirty 12-megapixel images). Not much to see on the size graph: sizes from WPF and WIC are equivalent, which is hardly surprising as WPF calls into WIC. There is just an anomaly at 75% for WPF that I noted in my previous article and that disappears when using WIC directly. But overall, using WPF or WIC over GDI represents a slight win in file size. The time to resize is more interesting. WPF and WIC get similar times, although WIC seems to always be a little faster. Not surprising, considering WPF is using WIC. The margin of error on these results is probably fairly close to the time difference. As we already knew, the time to resize does not depend on the quality level, only the size does. This means that the only decision you have to make here is size versus visual quality. This third approach to server-side image resizing on ASP.NET seems to converge on the fastest possible one. We have marginally better performance than WPF, but with some additional peace of mind that this approach is sanctioned for server-side usage by the Windows Imaging team. It still doesn’t work in medium trust. That is a problem, and it shows the way for future server-friendly managed wrappers around WIC. The sample code for this article can be downloaded from: http://weblogs.asp.net/blogs/bleroy/Samples/WicResize.zip The benchmark code can be found here (you’ll need to add your own images to the Images directory and then add those to the project, with Content and Copy if newer in the properties of the files in the Solution Explorer): http://weblogs.asp.net/blogs/bleroy/Samples/WicWpfGdiImageResizeBenchmark.zip WIC tools can be downloaded from: http://code.msdn.microsoft.com/wictools To conclude, here are some of the resized thumbnails at 85% Fant:
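    As a quick illustration of that last step (writing the byte array to both the response and a cache file), here is a minimal sketch; the ThumbnailHandler class, the cache path, and the WicResizer.GetThumbnail helper are assumptions made for this example, not part of the article's sample:
      using System.IO;
      using System.Web;

      // Hypothetical handler: WicResizer.GetThumbnail is assumed to wrap the WIC pipeline above.
      public class ThumbnailHandler : IHttpHandler
      {
          public bool IsReusable { get { return true; } }

          public void ProcessRequest(HttpContext context)
          {
              string name = Path.GetFileName(context.Request["photo"]); // strip any path characters
              byte[] thumbnail = WicResizer.GetThumbnail(name);         // byte[] produced by the WIC code
              string cachePath = context.Server.MapPath("~/App_Data/thumbs/" + name + ".jpg");

              File.WriteAllBytes(cachePath, thumbnail);                 // persist for later requests
              context.Response.ContentType = "image/jpeg";
              context.Response.BinaryWrite(thumbnail);                  // send the same bytes to the client
          }
      }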

    Read the article

  • Java - System design with distributed Queues and Locks

    - by sunny
    Looking for input to evaluate a design for a system (Java) which would have a distributed queue serving several (but not too many) nodes. These nodes would process objects present in the distributed queue and on occasion require a distributed lock across the cluster on arbitrary (distributed) data structures. These (distributed) data structures could potentially live in a distributed cache. Eliminating Terracotta (DSO), Hazelcast and Akka, what are the alternative choices? Currently considering ZooKeeper as the distributed locking mechanism. Since the recommendation is that a znode not exceed 1MB in size, the understanding is that ZooKeeper should not be used as a distributed queue (see also Netflix Curator tech note 4). So should a distributed cache, say memcached or Redis, be used to emulate a distributed queue? That is, the distributed queue would be stored in the cache and locked cluster-wide via ZooKeeper. Are there potential pitfalls with this high-level approach? The objects don't need to be taken off the queue. Each object will pass through a lifecycle which will determine its removal from the queue. There would be about 10k+ objects in the queue at a given time, changing states, and any node could service one stage of an object's lifecycle. (Although not strictly necessary, one node could serve the entire lifecycle if that is more efficient.) Any suggestions/alternatives? Side note: I'm new to ZooKeeper, Redis, etc.

    Read the article

  • Keyboard locking up in Visual Studio 2010

    - by Jim Wang
    One of the initiatives I’m involved with on the ASP.NET and Visual Studio teams is the Tactical Test Team (TTT), which is a group of testers who dedicate a portion of their time to roaming around and testing different parts of the product.  What this generally translates to is a day and a bit a week helping out with areas of the product that have been flagged as risky, or tackling problems that span both ASP.NET and Visual Studio.  There is also a separate component of this effort outside of TTT, which is to help with customer scenarios and design. I enjoy being on TTT because it allows me the opportunity to look at the entire product and gain expertise in a wide range of areas.  This week, I’m looking at Visual Studio 2010 performance problems, and this gem with the keyboard in Visual Studio locking up ended up catching my attention. First of all, here’s a link to one of the many Connect bugs describing the problem: Microsoft Connect. I like this problem because it really highlights the challenges of reproducing customer bugs.  There aren’t any clear steps provided here, and I don’t know a lot about your environment: not just the basics like your OS version, but also what third-party plug-ins or antivirus software you might be running that might contribute to the problem.  In this case, my gut tells me that there is more than one bug here, just by the sheer volume of reports.  Here’s another thread where users talk about it: Microsoft Connect. The volume and different configurations are staggering.  From a customer perspective, this is a very clear-cut case of basic functionality not working in the product, but from our perspective, it’s hard to find something reproducible: even customers don’t quite agree on what causes the problem (installing ReSharper seems to cause a problem…or does it?). So this, then, is the start of a QA investigation. If anybody can provide isolated repro steps (just comment on this post), that will immensely help us nail down the issue(s). I’ll be doing a multi-part series on my progress and methodologies as I look into the problem.

    Read the article

  • Tuxedo 11gR1 Released

    - by todd.little
    I've been a little quiet the last several months as the Tuxedo team has been very busy. Today Oracle announced the 11gR1 release of the Tuxedo product family. This release includes updates to Tuxedo, TSAM, and SALT, as well as 3 new products that Oracle is announcing today. These 3 new products are the Oracle Tuxedo Application Runtime for CICS and Batch, Oracle Application Rehosting Workbench, and the Tuxedo JCA Adapter. By providing a CICS equivalent runtime and a rehosting workbench to automate the rehosting of COBOL CICS code, JCL procedures, data definitions, and data, Oracle has significantly lowered the effort and risk to rehost mainframe CICS and Batch applications onto the Tuxedo runtime on open systems. By moving off proprietary legacy mainframes, customers have experienced better performance and achieved a 50-80% lowering of their total cost of ownership. The rehosting tools allow the COBOL business logic to remain unchanged and automate the replacement of CICS statements with calls to Tuxedo. The rehosted code can then run on open systems 'as-is'. Users can still use the same TN3270 interfaces they are used to eliminating the need for retraining. Batch procedures can be run and managed under a JES2 like environment. For the first time, customers have the tools and enterprise class runtime environment to move their key legacy assets off the mainframe and on to distributed open systems whether the application uses 250 MIPS, 25,000 MIPS, or more. More on these exciting new options in additional blog entries.

    Read the article

  • Learning frameworks without learning languages

    - by Tom Morris
    I've been reading up on GUI frameworks including WPF, GTK and Cocoa (UIKit). I don't really do anything related to Windows (I'm a Mac and Linux guy) or .NET, but I'd like to be able to throw together GUIs for various operating systems. We are in the enviable position now of having high level scripting languages that work with all of the major GUI toolkits. If you are doing Linux GUI programming, you could use GTK in C, but why not just use PyGTK (or PyQt). Similarly, for Java, one can use JRuby. For Mac, there's MacRuby. And on .NET, there's IronRuby. This is all fine and good, and if you are building a serious project, there are tradeoffs that you might encounter when deciding whether to, say, build a WPF app in C# or in IronRuby, or whether you are going to use PyGTK or not. The subjective question I have is: what about learning those frameworks? Are there strong reasons why one should or should not learn something like WPF or Cocoa in a language one is familiar with rather than having to learn a new language as well? I'm not saying you should never learn the language. If you are building Windows applications and you don't know C#, that might be a bit of a problem. But do you think it is okay to learn the framework first? This is both a general question and a specific question. I've used some Cocoa classes from Ruby and Python using things like PyObjC and there always seems to be an impedance mismatch because of the way Objective C libraries get built. Experiences and strong opinions welcome!

    Read the article

  • Squeezing hardware

    - by [email protected]
    It's very common that high availability means duplicate hardware, so costs go up. Nowadays, CIOs and DBAs face the challenge of reducing the money spent while increasing performance and availability. Since Grid Infrastructure 11gR2, there is a new feature that helps them meet this challenge: Server Pools. In Grid Infrastructure 11gR2, you can define server pools across the cluster, setting the minimum number of servers, the maximum, and how important the pool is. For example, consider that "Velasco, Boixeda & co" has 3 apps in a 6-server cluster:
    1. The first is the main core business app
    2. The second is a mid-range app
    3. The third is a database that is not very important
    We define the following resource requirements for the expected workload:
    1. The main app requires 2 servers
    2. The mid-range app requires 1 server
    3. The third is not a required app in case of disaster
    Then we define 3 server pools across the cluster:
    1. Main pool: min two servers, max three servers, importance four
    2. Mid pool: min one server, max two servers, importance two
    3. Test pool: min zero servers, max one server, importance one
    So the initial configuration is:
    - Main pool has three servers
    - Mid pool has two servers
    - Test pool has one server
    Logically, we can see the cluster like this: if any server fails, the following algorithm will be applied:
    1. The server pool of least importance is chosen
    2. If server pools are of the same importance, then the server pool that has more than its defined minimum number of servers is chosen
    Hope it helps
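    As a rough illustration of how the three pools above might be created, here is a sketch using the 11gR2 srvctl syntax; the pool names are made up for the example, and the exact options should be checked against your documentation:
      # -g pool name, -l minimum servers, -u maximum servers, -i importance
      srvctl add srvpool -g mainpool -l 2 -u 3 -i 4
      srvctl add srvpool -g midpool  -l 1 -u 2 -i 2
      srvctl add srvpool -g testpool -l 0 -u 1 -i 1
      # review the resulting pool definitions
      srvctl config srvpool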

    Read the article

  • CRM@Oracle Series: CRM Analytics

    - by tony.berk
    What is the most important factor that leads to a successful CRM deployment? Is it the overall strategy, strong governance, defined processes or good data quality? Well, it's definitely a combination of all these, but the most important differentiator from our experience is Business Intelligence. Business Intelligence or Analytics is commonly mentioned as a key aspect to successful CRM and other enterprise deployments. The good news is that Oracle provides pre-built analytics dashboards, which provide real-time, actionable insight, and tools to build custom analyses. However, success with analytics, especially in a large enterprise, still requires a strong strategy, clean data for analysis, and performance. Today's CRM@Oracle slidecast covers Oracle's strategy, architecture and key success factors for deploying CRM Analytics internally at Oracle. CRM@Oracle: CRM Analytics Click here to learn more about Oracle CRM products and here to learn about Oracle Business Intelligence Applications. Have you read our other postings in the CRM@Oracle Series? If you have a particular CRM area or function which you'd like to hear how Oracle implemented it internally, post a comment and we'll get it on our list.

    Read the article

  • CodePlex Daily Summary for Wednesday, March 24, 2010

    CodePlex Daily Summary for Wednesday, March 24, 2010New ProjectsC++ Sparse Matrix Package: This is a package containing sparse matrix operations like multiplication, addition, Cholesky decomposition, inversions and so on. It is a simple, ...Change Password Web Part for FBA-ADAM User: This web part enables users to change ADAM (Active Directory Application Mode) password from within a SharePoint Site Collection. It is compatible ...DAMAJ Investigator: The Purpose (Mission) The purpose of this project is to build a tool to help developers do rationale investigations. The tool should synthesize...DotNetWinService: DotNetWinService offers a very simple framework to declaratively implement scheduled task inside a Windows Service.internshipgameloft: <project name> makes it easier for <target user group> to <activity>. You'll no longer have to <activity>. It's developed in <programming language>.JavaScript Grid: JavaScript grid make it easiser to display tabular data on web pages. Main benefits 1 - Smart scrolling: you can handle scrolling events to load...Mirror Testing Software: Program určený pre správu zariadenia na testovanie automobilových zrkadiel po opustení výrobnej linky. (tiež End of Line Tester). Vývoj prebieha v ...NPipeline: NPipeline is a .NET port of the Apache Commons Pipeline components. It is a lightweight set of utilities that make it simple to implement paralleli...Portable Contacts: .net implementation of the Portable Contacts 1.0 Draft C specification Random Projects: Some projects that I will be doing from now and on to next year.SmartInspect Unity Interception Extension: This a library to integrate and use the SmartInspect logging tool with the Unity dependency injection and AOP framework. Various attributes help yo...Table2Class: Table2Class is a solution to create .NET class files (in C# or VB.NET) from tables in database (Microsoft SQL Server or MySQL databases)UploadTransform: A project for the uploading and trasnformation of client data to a database backend Wikiplexcontrib: This is the contrib project for wikiplex.zevenseas Notifier: Little project that displays a notification on every page within a WebApplication of SharePoint. The message of the notification is centrally manag...New ReleasesAcceptance Test Excel Addin: 1.0.0.1: Fixed two bugs: 1) highlight incorrectly when data table has filter 2) crash when named range is invalidC++ Sparse Matrix Package: v1.0: Initial release. Read the README.txt file for more information.Change Password Web Part for FBA-ADAM User: Change Password Web Part for FBA-ADAM User: Usage Instruction Add following in your web.config under <appSettings> <add key="AdamServerName" value="Your Server Name" /> <add key="AdamSourc...CollectAndCrop: spring release: This release includes the YUI compressor for .net http://yuicompressor.codeplex.com/ There are 2 new properties: CompressCss a boolean that turns...EnhSim: Release v1.9.8.0: Release v1.9.8.0Flame Shock dots can now produce critical strikes Flame Shock dots are now affected by spell haste Searing Totem and Magma Totem we...EPiServer CMS Page Type Builder: Page Type Builder 1.2 Beta 1: First release that targets EPiServer CMS version 6. While it is most likely stable further testing is needed.EPPlus-Create advanced Excel 2007 spreadsheets on the server: EPPlus 2.6.0.1: EPPlus-Create advanced Excel 2007 spreadsheets on the server New Features Improved performance. 
Named ranges Font-styling added to charts and ...Image Ripper: Image Ripper: Fetch HD photos from specific web galleries like a charm.IronRuby: 1.0 RC4: The IronRuby team is pleased to announce version 1.0 RC4! As IronRuby approaches the final 1.0, these RCs will contain crucial bug fixes and enhanc...IST435: AJAX Demo: Demo of AJAX Control Toolkit extenders.IST435: Representing Friendships: This sample is a quick'n'dirty demo of how you can implement the general concept of setting up Friendships among users based on the Membership Fram...JavaScript Grid: Initial release: Initial release contains all source codes and two exampleskdar: KDAR 0.0.17: KDAR - Kernel Debugger Anti Rootkit - npfs.sys, msfs.sys, mup.sys checks added - fastfat.sys FAST I/O table check addedMicrosoft - DDD NLayerApp .NET 4.0 Example (Microsoft Spain): V0.6 - N-Layer DDD Sample App: Required Software (Microsoft Base Software needed for Development environment) Unity Application Block 1.2 - October 2008 http://www.microsoft.com/...Mytrip.Mvc: Mytrip 1.0 preview 2: Article Manager Blog Manager EF Membership(.NET Framework 4) User Manager File Manager Localization Captcha ClientValidation ThemeNetBuildConfigurator: Using NetBuildConfigurator Screencast: A demo and Screencast of using BuildConfigurator.NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Excel 2007 Template, version 1.0.1.120: The NodeXL Excel 2007 template displays a network graph using edge and vertex lists stored in an Excel 2007 workbook. What's NewThis version provi...NoteExpress User Tools (NEUT) - Do it by ourselves!: NoteExpress User Tools 1.9.0: 1.9.0 测试版本:NoteExpress 2.5.0.1147 #针对1147的改动Open NFe: DANFe v1.9.7: Envio de e-mailpatterns & practices - Windows Azure Guidance: Code drop 2: This is the first step in taking a-Expense to Windows Azure. Highlights of this release are: Use of SQL Azure as the backend store for applicatio...patterns & practices - Windows Azure Guidance: Music Store sample application: Music Store is the sample application included in the Web Client Guidance project. We modified it so it now has a real data access layer, uses most...Quick Anime Renamer: Quick Anime Renamer v0.2: Quick Anime Renamer v0.2 - updated 3/23/2010Fixed some painting errorsFixed tab orderRandom Projects: Simple Chat Script: This contains chat commands for CONSTRUCTION serversRapidshare Episode Downloader: RED v0.8.1: - Fixed numerous bugs - Added Next Episode feature - Made episode checking run in background thread - Extended both API's to be more versatile - Pr...Rapidshare Episode Downloader: RED v0.8.2: - Fixed the list to update air date automatically when checking for episodes availabilitySelection Maker: Selection Maker 1.3: New Features:Now the ListView can show Icon of files. Better performance while showing files in ListViewSprite Sheet Packer: 2.2 Release: Made generation of map file optional in sspack and UI Fixed bug with image/map files being locked after first build requiring a restart to build ...Table Storage Backup & Restore for Windows Azure: TableStorageBackup: Table Storage Backup & RestoreTable2Class: Table2Class v1.0: Download do Solution do Visual Studio 2008 com os seguintes projetos: Table2Class.ClassMaker Projeto Windows Form que contempla o Class Maker. Ta...VBScript Login Script Creator: Login Script Creator 1.5: Removed IE7 option. Removed Internet Explorer temporary internet files option. Added overlay option. 
Added additional redirects for My Photos, My ...VCC: Latest build, v2.1.30323.0: Automatic drop of latest buildXAML Code Snippets addin for Visual Studio 2010: First release: This version targets Visual Studio 2010 Release Candidate. Please consider this release as a Beta. Also provide feedback so that it can be improve...Zeta Long Paths: Release 2010-03-24: Added functions to get file owner, creation time, last access time, last write time.ZZZ CMS: Release 3.0.0: With mobile version of frontend.Most Popular ProjectsMetaSharpRawrWBFS ManagerSilverlight ToolkitASP.NET Ajax LibraryMicrosoft SQL Server Product Samples: DatabaseAJAX Control ToolkitLiveUpload to FacebookWindows Presentation Foundation (WPF)ASP.NETMost Active ProjectsRawrjQuery Library for SharePoint Web ServicesFarseer Physics EngineBlogEngine.NETLINQ to TwitterFacebook Developer ToolkitNB_Store - Free DotNetNuke Ecommerce Catalog ModulePHPExcelTable2Classpatterns & practices: Composite WPF and Silverlight

    Read the article

  • IASA South East Florida Chapter February Meeting Report

    - by Rainer Habermann
    IASA South East Florida Chapter – February Meeting The topic for our February chapter meeting was Legal Issues in IT. Ms. Kennedy, Intellectual Property Attorney with an active litigation, trademark and copyright practice, presented: How Google, Wal-Mart & Apple Make their Millions – The Secret Ingredient: Intellectual Property This topic initiated great interest and the meeting room at Microsoft Ft. Lauderdale filled up to the last seat. Most Architects, Engineers, and MBA’s are not aware about Intellectual Property, Basic Patent, Trademark, or legal issues related to the web. After clarifying the basic definitions, Ms. Kennedy explained in detail how intellectual property issues could make or break a company. Members had the opportunity at the end of the presentation to ask questions, discuss legal problems, and several members shared their experiences related to Intellectual Property and other IT related issues. If you want to protect your ideas and intellectual property, you have to be aware of the implications and need to take the right steps in order to protect them. All Chapter Members agreed that it was an outstanding and lively presentation. Ms. Kennedy presented high quality content and made participants aware of legal IT issues. In the name of all chapter members, thank you Ms. Kennedy for taking the time for this amazing presentation and to Quent Herschelman for hosting the meeting. Rainer Habermann President IASA South East Florida Chapter

    Read the article

  • Google Analytics setting cookies on static content despite being on entirely separate domain

    - by Donald Jenkins
    I recently decided to comply with the YSlow recommendation that static content be hosted on a cookieless domain. As I already use the root of my domain (donaldjenkins.com) to host my website—on which Google Analytics sets a few cookies—that meant I had to move the CNAME URL for the CDN serving the static files from cdn.donaldjenkins.com to an entirely separate, dedicated domain. I purchased cdn.dj (yes, it's a real Djibouti domain name), hosted the files on the root (which contains nothing else, other than a robots.txt file) and set a CNAME of e.cdn.dj for the CDN. This setup works, but I was rather surprised to find that YSlow was still flagging the static files for not being cookie-free; here's a screenshot: The cdn.dj domain was new, and was never used for anything other than hosting these static files. Running httpfox on the site shows the _utma and _utmz Google Analytics cookies are being set on the static files listed above—despite their being hosted on an entirely separate, dedicated domain. Here's my Google Analytics code:
      //Google Analytics tracking code
      var _gaq=[['_setAccount','UA-5245947-5'],['_trackPageview']];
      (function(d,t){var g=d.createElement(t),s=d.getElementsByTagName(t)[0];
      g.src=('https:'==location.protocol?'//ssl':'//www')+'.google-analytics.com/ga.js';
      s.parentNode.insertBefore(g,s)}(document,'script'));
      // [END] Google Analytics tracking code
    I'm not obsessing about this issue—I know it's not really affecting server performance—but I'd just like to understand what is causing it and why it won't go away...

    Read the article

  • JavaOne Latin America Schedule Changes For Thursday

    - by Tori Wieldt
    The good news: we've got LOTS of developers at JavaOne Latin America.
    The bad news: the rooms are too small to hold everyone! (we've heard you)
    The good news: selected sessions for Thursday have been moved to larger rooms (the keynote halls).
    More good news: some sessions that were full on Wednesday will be repeated on Thursday.
    SCHEDULE CHANGES FOR THURSDAY, DECEMBER 9TH
    Note: Be sure to check the schedule on site; there still may be some last-minute changes.
    - Ginga, LWUIT, JavaDTV and You 2.0 (Dimas Oliveria): Thursday, December 9, 11:15am - 12:00pm, Auditorio 4
    - JavaFX do seu jeito: criando aplicativos JavaFX com linguagens alternativas (Stephen Chin): Thursday, December 9, 3:00pm - 3:45pm, Auditorio 4
    - Automatizando sua casa usando Java; JavaME, JavaFX, e Open Source Hardware (Vinicius Senger): Thursday, December 9, 9:00am - 9:45am, Auditorio 3
    - Construindo uma arquitetura RESTful para aplicacoes ricas com HTML 5 e JSF2 (Raphael Helmonth Adrien Caetano): Thursday, December 9, 5:15pm - 6:00pm, Auditorio 2
    - Dicas eTruquies sobre performance em Java EE JPA e JSF (Alberto Lemos e Danival Taffarel Calegari): Thursday, December 9, 2:00pm - 2:45pm, Auditorio 2
    - Escrevendo Aplicativos Multipatforma Incriveis Usando LWUIT (Roger Brinkley): Cancelled
    - Platforma NetBeans: sem slide - apenas codigo (Mauricio Leal): Cancelled
    - Escalando o seu AJAX Push com Servlet 3.0 (Paulo Silveria): Keynote Hall, 9:00am - 9:45am
    - Cobetura Completa de Ferramentas para a Platforma Java EE 6 (Ludovic Champenois): Keynote Hall, 10:00am - 10:45am
    - Servlet 3.0 - Expansivel, Assincrono e Facil de Usar (Arun Gupta): Keynote Hall, 4:00pm - 4:45pm
    - Transforme seu processo em REST com JAX-RS (Guilherme Silveria): Keynote Hall, 5:00pm - 5:45pm
    - The Future of Java (Fabiane Nardon e Bruno Souza): Keynote Hall, 6:00pm - 6:45pm
    Thanks for your understanding; we are tuning the conference to make it the best JavaOne possible.

    Read the article

  • How to Test and Deploy Applications Faster

    - by rickramsey
    photo courtesy of mtoleric via Flickr If you want to test and deploy your applications much faster than you could before, take a look at these OTN resources. They won't disappoint. Developer Webinar: How to Test and Deploy Applications Faster - April 10 Our second developer webinar, conducted by engineers Eric Reid and Stephan Schneider, will focus on how the zones and ZFS filesystem in Oracle Solaris 11 can simplify your development environment. This is a cool topic because it will show you how to test and deploy apps in their likely real-world environments much quicker than you could before. April 10 at 9:00 am PT Video Interview: Tips for Developing Faster Applications with Oracle Solaris 11 Express We recorded this a while ago, and it talks about the Express version of Oracle Solaris 11, but most of it applies to the production release. George Drapeau, who manages a group of engineers whose sole mission is to help customers develop better, faster applications for Oracle Solaris, shares some tips and tricks for improving your applications. How ZFS and Zones create the perfect developer sandbox. What's the best way for a developer to use DTrace. How Crossbow's network bandwidth controls can improve an application's performance. To borrow the classic Ed Sullivan accolade, it's a "really good show." "White Paper: What's New For Application Developers Excellent in-depth analysis of exactly how the capabilities of Oracle Solaris 11 help you test and deploy applications faster. Covers the tools in Oracle Solaris Studio and what you can do with each of them, plus source code management, scripting, and shells. How to replicate your development, test, and production environments, and how to make sure your application runs as it should in those different environments. How to migrate Oracle Solaris 10 applications to Oracle Solaris 11. How to find and diagnose faults in your application. And lots, lots more. - Rick Website Newsletter Facebook Twitter

    Read the article

  • NRF Big Show 2011 -- Part 3

    - by David Dorf
    I'm back from the NRF show, having been one of the lucky people whose flight was not canceled. The show was very crowded, with a reported 20% increase in attendance, and everyone seemed in high spirits. After two years of sluggish retail sales, things are really picking up, and it was reflected in everyone's mood. The pop-up Disney Store in the Oracle booth was great and attracted lots of interest in their mobile POS. I know many attendees visited the Disney Store in Times Square to see the entire operation. It's an impressive two-story store that keeps kids engaged. The POS demonstration station, where most of our innovations were demoed, was always crowded. Unfortunately most of the demos used WiFi, and the signals from other booths prevented anything from working reliably. Nevertheless, the demo team did an excellent job walking people through the scenarios and explaining how shopping is being impacted by mobile, analytics, and RFID.
    Big Show Links:
    - Disney uncovers its store magic
    - Top 10 Things You Missed at the NRF Big Show 2011
    - Oracle Retail Stores Innovation Station at NRF Big Show 2011 (video)
    The buzz of the show was again around mobile solutions. Several companies are creating mobile POS using the iPod Touch, including integrations to Oracle POS for the following retailers:
    - Disney Stores with InfoGain
    - Victoria's Secret with InfoGain
    - Urban Outfitters with Starmount
    - The Gap with Global Bay
    Keeping with the mobile theme, the NRF released a revised version of their Mobile Blueprint at the show. It will be posted to the NRF site very soon. The alternate payments section had a major rewrite that provides a great overview of proximity and remote payment technologies.
    NRF Mobile Blueprint Links:
    - New mobile blueprint provides fresh insights
    - NRF Mobile Blueprint 2011 (slides)
    I hope to do some posts on some of the interesting companies I spoke with in the coming weeks.

    Read the article

  • Jumping Vs. Gravity

    - by PhaDaPhunk
    Hi, I'm working on my first XNA 2D game and I have a little problem. If I jump, my sprite jumps but does not fall down. I also have another problem: the user can hold the spacebar to jump as high as he wants, and I don't know how to keep him from doing that. Here's my code. The jump:
      if (FaKeyboard.IsKeyDown(Keys.Space))
      {
          Jumping = true;
          xPosition -= new Vector2(0, 5);
      }
      if (xPosition.Y >= 10)
      {
          Jumping = false;
          Grounded = false;
      }
    The really simple basic gravity:
      if (!Grounded && !Jumping)
      {
          xPosition += new Vector2(1, 3) * speed;
      }
    Here's where Grounded is set to true or false with a collision rectangle:
      MegamanRectangle = new Rectangle((int)xPosition.X, (int)xPosition.Y, FrameSizeDraw.X, FrameSizeDraw.Y);
      Rectangle Block1Rectangle = new Rectangle((int)0, (int)73, Block1.Width, Block1.Height);
      Rectangle Block2Rectangle = new Rectangle((int)500, (int)73, Block2.Width, Block2.Height);
      if ((MegamanRectangle.Intersects(Block1Rectangle) || (MegamanRectangle.Intersects(Block2Rectangle))))
      {
          Grounded = true;
      }
      else
      {
          Grounded = false;
      }
    The Grounded bool and the gravity have been tested and are working. Any ideas why? Thanks in advance, and don't hesitate to ask if you need another part of the code.
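    A common way to fix both problems is to track a vertical velocity, apply gravity to it every frame, and only allow the jump impulse while the sprite is grounded. Here is a minimal sketch; the constants and the UpdatePlayer method name are illustrative choices, not taken from the code above:
      // Velocity-based jump/gravity for an XNA Update loop.
      // Gravity always pulls down; Space only works while Grounded,
      // so holding it cannot make the sprite rise forever.
      const float Gravity = 2000f;      // pixels per second squared (tune to taste)
      const float JumpImpulse = -800f;  // negative is up in screen coordinates
      float velocityY = 0f;

      public void UpdatePlayer(GameTime gameTime)
      {
          float dt = (float)gameTime.ElapsedGameTime.TotalSeconds;

          if (FaKeyboard.IsKeyDown(Keys.Space) && Grounded)
          {
              velocityY = JumpImpulse;   // one impulse per jump, only from the ground
              Grounded = false;
          }

          if (!Grounded)
          {
              velocityY += Gravity * dt;                    // gravity accelerates the fall
              xPosition += new Vector2(0, velocityY * dt);  // integrate vertical position
          }
          else
          {
              velocityY = 0f;  // reset once the collision check reports a landing
          }
      }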

    Read the article

  • What needs to be done to port a Windows Flash game to Android?

    - by Russell Yan
    My team has developed a game using Flash (with Lua embedded) that targets Windows. We want to port that game to Android (and to iOS in the future). It's acceptable to rewrite the Flash part of the client, and another team in our company recommended Unity3D to replace the Flash part. Since we have no experience in Android/iOS development, we would need to learn some new tool/language anyway. So we would like that learning to still be useful after the porting and when we start a new game. Any thoughts? Edit: I think it is worth noting the background of the game: the project was started by a tester and a developer, both without knowing much about Flex and ActionScript. They built the game while learning, so most of the code is hard to maintain. I and two other developers joined the project after a year or so, when one of them had left (and the other became our manager). The other two developers are recent graduates with little experience and little knowledge of Flex. I am good at the server part and at C#. Given that the code is hard to maintain (and we do need to change a lot of code to make the game easier to play on a mobile device), and that I am good at C# (and learning), I still tend toward doing the port with Unity, which could give better performance and possibly save time.

    Read the article

  • LobsterPot Solutions in the USA

    - by Rob Farley
    We’re expanding! I’m thrilled to announce that Microsoft Gold Partner LobsterPot Solutions has started another branch appointing the amazing Ted Krueger (5-time SQL MVP awardee) as the US lead. Ted is well-known in the SQL Server world, having written books on indexing, consulting and on being a DBA (not to mention contributing chapters to both MVP Deep Dives books). He is an expert on replication and high availability, and strong in the Business Intelligence space – vast experience which is both broad and deep. Ted is based in the south east corner of Wisconsin, just north of Chicago. He has been a consultant for eons and has helped many clients with their projects and problems, taking the role as both technical lead and consulting lead. He is also tireless in supporting and developing the SQL Server community, presenting at conferences across America, and helping people through his blog, Twitter and more. Despite all this – it’s neither his technical excellence with SQL Server nor his consulting skill that made me want him to lead LobsterPot’s US venture. I wanted Ted because of his values. In the time I’ve known Ted, I’ve found his integrity to be excellent, and found him to be morally beyond reproach. This is the biggest priority I have when finding people to represent the LobsterPot brand. I have no qualms in recommending Ted’s character or work ethic. It’s not just my thoughts on him – all my trusted friends that know Ted agree about this. So last week, LobsterPot Solutions LLC was formed in the United States, and in a couple of weeks, we will be open for business! LobsterPot Solutions can be contacted via email at [email protected], on the web at either www.lobsterpot.com.au or www.lobsterpotsolutions.com, and on Twitter as @lobsterpot_au and @lobsterpot_us. Ted Kruger blogs at LessThanDot, and can also be found on Twitter and LinkedIn. This post is cross-posted from http://lobsterpotsolutions.com/lobsterpot-solutions-in-the-usa

    Read the article

  • Ignoring Robots - Or Better Yet, Counting Them Separately

    - by [email protected]
    It is quite common to have web sessions that are undesirable from the point of view of analytics, for example when there are either internal or external robots that check the site's health, index it, or just extract information from it. These robotic sessions do not behave like humans, and if their volume is high enough they can sway the statistics and models. One easy way to deal with these sessions is to define a partitioning variable for all the models that is a flag indicating whether the session is "Normal" or "Robot". Then all the reports and predictions can use the "Normal" partition, while the counts and statistics for robots are still available. In order for this to work, though, two conditions are necessary:
    1. It is possible to identify the robotic sessions.
    2. No learning happens before the identification of the session as a robot.
    The first point is obvious, but the second may require some explanation. While the default in RTD is to learn at the end of the session, it is possible to learn in any entry point. This is a setting for each model. There are various reasons to learn in a specific entry point, for example if there is a desire to capture exactly and precisely the data in the session at the time the event happened, as opposed to including changes up to the end of the session. In any case, if RTD has already learned on the session before the identification of a robot was done, there is no way to retract this learning. Identifying the robotic sessions can be done through the use of rules and heuristics. For example, we may use some of the following:
    - Maintain a list of known robotic IPs or domains
    - Detect very long sessions, lasting more than a few hours or visiting more than 500 pages
    - Detect "robotic" behaviors like a methodical click on all the links of every page
    - Detect a session with 10 pages clicked at exactly 20-second intervals
    - Detect extensive non-linear navigation
    Now, an interesting experiment would be to use the flag above as an output of a model, to see if there are more subtle characteristics of robots, such that a model can be used to detect robots even if they fall through the cracks of rules and heuristics. In any case, the basic technique of partitioning the models by the type of session is simple to implement and provides a lot of advantages.
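    As a rough sketch of what such rule-based identification could look like in code, here is a simple classifier; the Session shape, the thresholds, and the sample IP list are assumptions for illustration and are not part of RTD:
      using System;
      using System.Collections.Generic;
      using System.Linq;

      // Illustrative heuristics only: flag a session as "Robot" or "Normal".
      public class Session
      {
          public string Ip { get; set; }
          public TimeSpan Duration { get; set; }
          public IList<DateTime> PageViewTimes { get; set; }
      }

      public static class RobotDetector
      {
          // Hypothetical list of known robotic sources.
          static readonly HashSet<string> KnownRobotIps = new HashSet<string> { "192.0.2.10" };

          public static bool IsRobot(Session s)
          {
              if (KnownRobotIps.Contains(s.Ip)) return true;                      // known robot IP or domain
              if (s.Duration > TimeSpan.FromHours(3)) return true;                // very long session
              if (s.PageViewTimes.Count > 500) return true;                       // too many pages visited
              return HasFixedInterval(s.PageViewTimes, TimeSpan.FromSeconds(20)); // clockwork click pattern
          }

          static bool HasFixedInterval(IList<DateTime> times, TimeSpan interval)
          {
              if (times.Count < 10) return false;
              var gaps = times.Zip(times.Skip(1), (a, b) => b - a);
              // Every gap within a second of the target interval looks machine-like rather than human.
              return gaps.All(g => Math.Abs((g - interval).TotalSeconds) < 1.0);
          }
      }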

    Read the article

  • Oracle TechCast Live: "MySQL 5.5 Does Windows"

    - by bertrand.matthelie(at)oracle.com
    Interested in MySQL on Windows? Join our next Oracle TechCast Live on Tuesday, January 11th at 10.00 am PT! MySQL Product Manager Mike Frank will then tell you all about the major MySQL 5.5 performance gains on Windows. In case you're not familiar with the Oracle TechCast Live events, they're akin to online "fireside chats" with experts about new tools, technologies and trends in application development. They also include live Q&A sessions, and you can ask questions via Twitter & Facebook. You can check out a few archived sessions here. Get ready to ask your questions to Mike! We hope many of you will join.

    Read the article

  • Make it simple. Make it work.

    - by Sean Feldman
    In 2010 I had the experience of working for a business that had lots of challenges. One of those challenges was a lack of technical architecture and business value recognition, which translated into spending an enormous amount of manpower and money on creating C++ solutions for the desktop client without using .NET, in order to minimize the "footprint" (#2) of the client application in deployment environments. This was an awkward experience, considering that C++ custom code was created from scratch to make clients talk to a .NET backend, while simply having .NET as a dependency would have cut time to market by at least 50% (and I'm downplaying the estimate). Regardless, the recent Microsoft announcement about .NET vNext has reminded me of that experience and how short-sighted the architecture at that company was. The investment made in a C++ client that cannot be maintained internally by the team, due to its specialization in .NET, has created a situation where the code will only become more brutal to maintain over time and the number of developers who understand it will keep shrinking. Not only that: the ability to go cross-platform (#3) and the performance gains achieved with native compilation (#1) would have been an immediate payback. Why am I saying all this? To make a simple point and remind myself again: when working on a product that needs to get to market, make it simple, make it work, and then watch how technology changes and how you can adapt. Simplicity will not let you down. But a complex solution will always do.

    Read the article
