Search Results

Search found 15099 results on 604 pages for 'runtime environment'.


  • Five Fake Sounds Engineered to Make You Feel Better [Science]

    - by Jason Fitzpatrick
    As objects in our environment (like cars, ATMs, and phones) have grown lighter and quieter, scientists have been carefully engineering their sounds so that they continue to sound like we expect them to. Read on to see how. At the design blog Humans Invent they share five interesting ways that the world around us is being engineered so it sounds the way we expect it to. They start with the example of the car door. Years ago cars were almost entirely steel, the doors were weighty, and when you slammed them it sounded like one big hunk of steel locking into another big hunk of steel (which, in fact, it was). Newer cars are lighter but people still crave that substantial clunk. Humans Invent highlights the effect of consumer desire: A car door is essentially a hollow shell with parts placed inside it. Without careful design the door frame amplifies the rattling of mechanisms inside. Car companies know that if buyers don’t get a satisfying thud when they close the door, it dents their confidence in the entire vehicle. To produce the ideal clunk, car doors are designed to minimise the amount of high frequencies produced (we associate them with fragility and weakness) and emphasise low, bass-heavy frequencies that suggest solidity. The effect is achieved in a range of different ways – car companies have piled up hundreds of patents on the subject – but usually involves some form of dampener fitted in the door cavity. Locking mechanisms are also tailored to produce the right sort of click and the way seals make contact is precisely controlled. On average it takes 1.8 seconds to close a car door but in that time you’re witnessing a strange kind of symphony composed by engineers and designers whose goal is to reassure you that it’s rock solid. They mention lock mechanisms, something you may never have thought about. A friend of mine had a Ford Focus some years ago and that particular model had electric locks that, instead of giving a satisfying thunk or solid click, made this horrible gates-of-the-prison-buzzing sound that was completely unnerving. Hit up the link below to see how sounds are engineered for car doors, electric motors, ATMs, and more. 5 Fake Sounds Designed to Help Humans [Humans Invent via Boing Boing]

    Read the article

  • Stepping outside Visual Studio IDE [Part 2 of 2] with Mono 2.6.4

    - by mbcrump
    Continuing part 2 of my Stepping outside the Visual Studio IDE series is the open-source Mono Project. Mono is a software platform designed to allow developers to easily create cross-platform applications. Sponsored by Novell (http://www.novell.com/), Mono is an open source implementation of Microsoft's .NET Framework based on the ECMA standards for C# and the Common Language Runtime. A growing family of solutions and an active and enthusiastic contributing community is helping position Mono to become the leading choice for development of Linux applications. So, to clarify: you can use Mono to develop .NET applications that will run on Linux, Windows or Mac, and it ships with MonoDevelop, an IDE that has its roots in Linux. Let's first look at the compatibility. Compatibility: If you already have an application written in .NET, you can scan your application with the Mono Migration Analyzer (MoMA) to determine if your application uses anything not supported by Mono. The current release version of Mono is 2.6 (released December 2009). The easiest way to describe what Mono currently supports is: everything in .NET 3.5 except WPF and WF, with limited WCF. Here is a slightly more detailed view, by .NET framework version:

    Implemented:
    - C# 3.0, System.Core, LINQ
    - ASP.NET 3.5, ASP.NET MVC
    - C# 2.0 (generics)
    - Core Libraries 2.0: mscorlib, System, System.Xml
    - ASP.NET 2.0 (except WebParts)
    - ADO.NET 2.0
    - Winforms/System.Drawing 2.0 (does not support right-to-left)
    - C# 1.0
    - Core Libraries 1.1: mscorlib, System, System.Xml
    - ASP.NET 1.1
    - ADO.NET 1.1
    - Winforms/System.Drawing 1.1

    Partially Implemented:
    - LINQ to SQL (mostly done, but a few features missing)
    - WCF (Silverlight 2.0 subset completed)

    Not Implemented:
    - WPF (no plans to implement)
    - WF (WF 4 will be implemented instead in future versions of Mono)
    - System.Management (does not map to Linux)
    - System.EnterpriseServices (deprecated)

    Links to documentation: The Official Mono FAQs. Links to binaries: Mono IDE, latest version 2.6.4. That's it; nothing more is required to compile and run .NET code on Linux. Installation: After landing on the Mono project home page, you can select which platform you want to download. I typically pick the Virtual PC image since I spend all of my day using Windows 7. Go ahead and pick whatever version is best for you. The Virtual PC image comes with Suse Linux. Once the image is launched, you will see the following: I'm not going to go through each option, but it's best to start with the "Start Here" icon. It will provide you with information on new projects or existing VS projects. After you get Mono installed, it's probably a good idea to run a quick Hello World program to make sure everything is set up properly. This lets you know that Mono is working before you try writing or running a more complex application. To write a "Hello World" program, follow these steps:

    1. Start the Mono Development Environment (MonoDevelop).
    2. Create a new project: File -> New -> Solution.
    3. Select "Console Project" in the category list.
    4. Enter a project name into the Project name field, for example "HW Project", and click "Forward".
    5. Click "Packaging", then OK. You should have a screen very similar to a VS console app.
    6. Click the "Run" button in the toolbar (Ctrl-F5).
    7. Look in the Application Output and you should see "Hello World!". Your screen should look like the screen below.

    That should do it for a simple console app in Mono.
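    For reference, a console project created this way typically boils down to something like the following C#; this is a minimal sketch of the generated code (the namespace and class names can differ slightly between MonoDevelop versions):

        using System;

        namespace HWProject
        {
            class MainClass
            {
                public static void Main(string[] args)
                {
                    // This is what shows up in the Application Output pad when the project runs
                    Console.WriteLine("Hello World!");
                }
            }
        }

    Compiling and running this tiny program in MonoDevelop is a quick confirmation that the runtime and the IDE are wired up correctly before you move on to anything larger.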
To test out an ASP.NET application, simply copy your code to a new directory in /srv/www/htdocs, then visit the following URL: http://localhost/directoryname/page.aspx where directoryname is the directory where you deployed your application and page.aspx is the initial page for your software. Databases: you can continue to use a SQL Server database, or use MySQL, PostgreSQL, Sybase, Oracle, IBM's DB2 or SQLite. Conclusion: I hope this brief look at the Mono IDE helps someone get acquainted with development outside of VS. As always, I welcome any suggestions or comments.

    Read the article

  • SO-Aware Service Explorer – Configure and Export your services from VS 2010 into the repository

    - by cibrax
    We have introduced a new Visual Studio tool called “Service Explorer” as part of the new SO-Aware SDK version 1.3 to help developers configure and export any regular WCF service into the SO-Aware service repository. This new tool is a regular Visual Studio Tool Window that can be opened from “View –> Other Windows –> Services Explorer”. Once you open the Services Explorer, you will be able to see all the available WCF services in the Visual Studio Solution. In the image above, you can see that a “HelloWorld” service was found in the solution and listed under the Tool window on the left. There are two things you can do for a new service in the tool: you can either export it to the SO-Aware repository or associate it to an existing service version in the repository. Exporting the service to SO-Aware means that you want to create a new service version in the repository and associate the WCF service WSDL to that version. Associating the service means that you want to use a version already created in SO-Aware with the only purpose of managing and centralizing the service configuration in SO-Aware. The option for exporting a service will pop up a dialog like the one below in which you can enter some basic information about the service version you want to create and the repository location. The option for associating a service will pop up a dialog in which you can pick any existing service version in the repository and the application configuration file that you want to keep in sync for the service configuration. Two options are available for configuring a service, WCF Configuration or SO-Aware. The WCF Configuration option just tells the tool that the service will use the standard WCF configuration section “system.serviceModel”, but that section must be updated and kept in sync with the configuration selected for the service in the repository. The SO-Aware configuration option will tell the tool that the service configuration will be resolved at runtime from the repository. For example, selecting SO-Aware will generate the following configuration in the selected application configuration file:

        <configuration>
          <configSections>
            <section name="serviceRepository"
                     type="Tellago.ServiceModel.Governance.ServiceConfiguration.ServiceRepositoryConfigurationSection, Tellago.ServiceModel.Governance.ServiceConfiguration" />
          </configSections>
          <serviceRepository url="http://localhost/soaware/servicerepository.svc">
            <services>
              <service name="ref:HelloWorldService(1.0)@dev"
                       type="SOAwareSampleService.HelloWorldService" />
            </services>
          </serviceRepository>
        </configuration>

    As you can see, the tool represents a great addition to the toolset that any developer can use to manage and centralize configuration for WCF services. In addition, it can be combined with other useful tools like WSCF.Blue (Web Service Contract First) for generating the service artifacts like schemas, service code or the service WSDL itself.
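    For context, the service type referenced in that configuration is an ordinary WCF service on the C# side. The sketch below is an assumption made purely for illustration (the real SO-Aware sample may define its contract differently); the only hard requirement is that the class name matches the type attribute of the <service> entry:

        using System.ServiceModel;

        namespace SOAwareSampleService
        {
            // Hypothetical contract; SO-Aware resolves bindings and addresses
            // from the repository at runtime, so no endpoint details live here.
            [ServiceContract]
            public interface IHelloWorldService
            {
                [OperationContract]
                string SayHello(string name);
            }

            public class HelloWorldService : IHelloWorldService
            {
                public string SayHello(string name)
                {
                    return "Hello, " + name;
                }
            }
        }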

    Read the article

  • TouchDevelop: The Fast Path to Windows 8 and Phone Apps

    - by Clint Edmonson
    Are you looking for a little extra cash for the upcoming holidays? Then you might be interested in creating some cool apps to sell in the Windows Store. Or maybe you’re simply curious and want to try your hand at developing for Windows 8 and Windows Phone. In either case, the newly released TouchDevelop Web App is for you. TouchDevelop Web App is a development environment to create apps on your tablet or smartphone, without requiring a separate PC. Scripts written by using TouchDevelop can access data, media, and sensors on the phone, tablet, and PC. The script can interact with cloud services, including storage, computing, and social networks. TouchDevelop lets you quickly create fun games and useful tools, turning your scripts into true Windows Phone and Windows 8 apps. A year ago, Microsoft Research released TouchDevelop for Windows Phone, which is being used by enthusiasts, students, and researchers to program their phones in fun, inventive, and interesting ways. These scripts are available at TouchDevelop for anyone to download and use. Ever since we released TouchDevelop, we’ve been eyeing the tablet form factor and working on a version for the browser. Now, with the release of TouchDevelop Web App, the wait is over: the tablet version is ready, so go play around with it. All TouchDevelop scripts that are developed on the smartphone can be downloaded to the tablet and run (if hardware allows). Any script that is developed on the tablet can also be accessed on the phone. And scripts can be converted to Windows Phone or Windows 8 apps and submitted to the Windows Phone Store or Windows Store, respectively. TouchDevelop Web App’s editor and programming language have been designed for tablet devices with touchscreens, but you can also use a keyboard and a mouse. So grab your web-enabled device and give the TouchDevelop Web App a try. It’s fun and easy, and could even put a little cash in your holiday-depleted wallet. Or at least give you bragging rights at family get-togethers. Are you interested in further tips on Windows 8 development?  Sign up for the 30 to launch program which will help you build a Windows Store application in 30 days.  You will receive a tip per day for 30 days, along with potential free design consultations and technical support from a Windows 8 expert. As always, stay tuned to my twitter feed for Windows 8, Windows Azure and other Microsoft announcements, updates, and links: @clinted

    Read the article

  • ArchBeat Link-o-Rama for 2012-06-28

    - by Bob Rhubart
    Oracle Magazine Technologist of the Year Awards to honor architects at #OOW12 Seven of the ten categories in this year's Oracle Magazine Technologist of the Year Awards are designated to celebrate architects. The winners will be honored at Oracle OpenWorld -- and showered with adulation from their colleagues. Nominations for these awards close on Tuesday, July 17, so make sure you submit your nominations right away. Oracle E-Business Suite 12 Certified on Additional Linux Platforms (Oracle E-Business Suite Technology) Oracle E-Business Suite Release 12 (12.1.1 and higher) is now certified on the following additional Linux x86/x86-64 operating systems: Oracle Linux 6 (32-bit), Red Hat Enterprise Linux 6 (32-bit), Red Hat Enterprise Linux 6 (64-bit), and Novell SUSE Linux Enterprise Server (SLES) version 11 (64-bit). FairScheduling Conventions in Hadoop (The Data Warehouse Insider) "If you're going to have several concurrent users and leverage the more interactive aspects of the Hadoop environment (e.g. Pig and Hive scripting), the FairScheduler is definitely the way to go," says Dan McClary. Learn how in his technical post. SOA Learning Library (SOA & BPM Partner Community Blog) The Oracle Learning Library offers a vast collection of e-learning resources covering a mind-boggling array of products and topics. And it's all free—if you have an Oracle.com membership. And if you don't, that's free, too. Could this be any easier? Oracle Fusion Middleware Security: LibOVD: when and how | Andre Correa Fusion Middleware A-Team blogger Andre Correa offers some background on LibOVD and shares technical tips for its use. Virtual Developer Day: Oracle Fusion Development Yes, it's called "Developer Day," but there's plenty for architects, too. This free event includes hands-on labs, live Q&A with product experts, and a dizzying amount of technical information about Oracle ADF and Fusion Development -- all without having to pack a bag or worry about getting stuck in a seat between two professional wrestlers. Tuesday, July 10, 2012 9:00 a.m. PT – 1:00 p.m. PT 11:00 a.m. CT – 3:00 p.m. CT 12:00 p.m. ET – 4:00 p.m. ET 1:00 p.m. BRT – 5:00 p.m. BRT Thought for the Day "Computers allow you to make more mistakes faster than any other invention in human history with the possible exception of handguns and tequila." — Mitch Ratcliffe Source: SoftwareQuotes.com

    Read the article

  • Layout of mathematical views (iOS)

    - by William Jockusch
    I am trying to figure out the right way to encapsulate graphical information about mathematical objects. It is not simple. For example, a matrix can include square brackets around its entries, or not. Some things carry down to sub-objects -- for example, a matrix might track the font size to be used by its entries. Similarly, the font color and the background color would carry down to the entries. Other things do not carry down. For example, the entries of the matrix do not need to know whether or not the matrix has those square brackets. Based on all of the above, I need to calculate sizes for everything, then frames. All of this can depend on the properties stored above. The size of a matrix depends on the sizes of its entries, and also on whether or not it has those brackets. What I am having a hard time with is not the individual ways to calculate sensible frames for this or that. It is the overall organizational structure of the whole thing. How can I keep track of it all without going crazy. One particular obstacle is worth mentioning -- for reasons I don't want to go into here, I need to calculate the sizes and frames for everything before I instantiate any actual views. So, for example, if I have a Matrix object, I need to calculate its size before I make a MatrixView. If I have an equation, I need to calculate the size of the view for the equation before I create the actual view. So I clearly need separate objects for those calculations. But I can't figure out a sensible class structure for those objects. If I put them all into a single class, I get some advantages because copying then becomes easy. But I also end up with a bloated class that contains info that is irrelevant for some objects -- such as whether or not to include those brackets around the matrix. But if I use a lot of different classes, copying properties becomes a real pain. If it matters, this is all in Objective C, for an iOS environment. Any pointers would be greatly appreciated.

    Read the article

  • Learn Many Languages

    - by Jeff Foster
    My previous blog, Deliberate Practice, discussed the need for developers to “sharpen their pencil” continually, by setting aside time to learn how to tackle problems in different ways. However, the Sapir-Whorf hypothesis, a contested and somewhat controversial concept from language theory, seems to hold reasonably true when applied to programming languages. It states that: “The structure of a language affects the ways in which its speakers conceptualize their world.” If you’re constrained by a single programming language, the one that dominates your day job, then you only have the tools of that language at your disposal to think about and solve a problem. For example, if you’ve only ever worked with Java, you would never think of passing a function to a method. A good developer needs to learn many languages. You may never deploy them in production, you may never ship code with them, but by learning a new language, you’ll have new ideas that will transfer to your current “day-job” language. With the abundant choices in programming languages, how does one choose which to learn? Alan Perlis sums it up best: “A language that doesn’t affect the way you think about programming is not worth knowing.” With that in mind, here’s a selection of languages that I think are worth learning and that have certainly changed the way I think about tackling programming problems. Clojure Clojure is a Lisp-based language running on the Java Virtual Machine. The unique property of Lisp is homoiconicity, which means that a Lisp program is a Lisp data structure, and vice-versa. Since we can treat Lisp programs as Lisp data structures, we can write our code generation in the same style as our code. This gives Lisp a uniquely powerful macro system, and makes it ideal for implementing domain specific languages. Clojure also makes software transactional memory a first-class citizen, giving us a new approach to concurrency and dealing with the problems of shared state. Haskell Haskell is a strongly typed, functional programming language. Haskell’s type system is far richer than C# or Java, and allows us to push more of our application logic to compile-time safety. If it compiles, it usually works! Haskell is also a lazy language – we can work with infinite data structures. For example, in a board game we can generate the complete game tree, even if there are billions of possibilities, because the values are computed only as they are needed. Erlang Erlang is a functional language with a strong emphasis on reliability. Erlang’s approach to concurrency uses message passing instead of shared variables, with strong support from both the language itself and the virtual machine. Processes are extremely lightweight, and garbage collection doesn’t require all processes to be paused at the same time, making it feasible for a single program to use millions of processes at once, all without the mental overhead of managing shared state. The Benefits of Multilingualism By studying new languages, even if you won’t ever get the chance to use them in production, you will find yourself open to new ideas and ways of coding in your main language. For example, studying Haskell has taught me that you can do so much more with types and has changed my programming style in C#. A type represents some state a program should have, and a type should not be able to represent an invalid state.
I often find myself refactoring methods like this…

    void SomeMethod(bool doThis, bool doThat)
    {
        if (!(doThis ^ doThat))
            throw new ArgumentException("At least one arg should be true");
        if (doThis) DoThis();
        if (doThat) DoThat();
    }

…into a type-based solution, like this:

    enum Action { DoThis, DoThat, Both };

    void SomeMethod(Action action)
    {
        if (action == Action.DoThis || action == Action.Both) DoThis();
        if (action == Action.DoThat || action == Action.Both) DoThat();
    }

At this point, I’ve removed the runtime exception in favor of a compile-time check. This is a trivial example, but is just one of many ideas that I’ve taken from one language and implemented in another.
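The laziness point above has a rough analogue in C# as well: an iterator block built with yield return is evaluated only as far as the consumer pulls values, so a conceptually infinite sequence is perfectly workable. This sketch is my own illustration rather than an example from the original post:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class LazySequences
    {
        // Conceptually infinite; each value is computed only when requested.
        public static IEnumerable<long> Naturals()
        {
            for (long i = 0; ; i++)
                yield return i;
        }

        static void Main()
        {
            // Take(5) pulls just five values; the loop above never needs to finish.
            Console.WriteLine(string.Join(", ", Naturals().Take(5)));  // 0, 1, 2, 3, 4
        }
    }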

    Read the article

  • Now Shipping! NetAdvantage for .NET 2010 Volume 3!

    The new NetAdvantage Ultimate includes all four Line of Business user interface control sets for ASP .NET, Windows Forms, WPF and Silverlight plus two advanced Data Visualization UI control sets for WPF and Silverlight. With six NetAdvantage products in one robust package, Infragistics® gives you hundreds of controls and infinite development possibilities. Unified XAML Product Strategy-Share Code, Get More Controls In the 10.3 release, Infragistics continues to deliver code parity between the XAML platforms, WPF and Silverlight. In the line of business toolsets, Infragistics introduces the new xamSchedule™, full-featured, Outlook® 2010-style schedule controls, and the new xamDataTree™, a data bound tree view that comfortably handles tens of thousands of tree nodes. Mimicking our Silverlight Drag and Drop Framework, the WPF Drag and Drop Framework CTP empowers you to add your own rich touches to your applications. Track Users' Behaviors New to all NetAdvantage Silverlight controls is the Infragistics Analytics Framework (IGAF), which empowers you to track user behavior in RIAs running on Silverlight 4. Building on the Microsoft® Silverlight Analytics Framework, with IGAF you can analyze the user's behaviors to ensure the experience you want to deliver. NetAdvantage for Windows Forms--New Office® 2010 Ribbon and Application Menu 2010 Create new experiences with Windows Forms. Now with Office 2010 styling, NetAdvantage for Windows Forms has new features such as Microsoft® Office 2010 ribbon and enhanced Infragistics.Excel to export the contents of the high performance WinGrid™ into Microsoft Excel® 2010. The new Windows Message Support enables Infragistics standalone editor controls to process numerous Windows® OS messages, allowing them to respond just like native controls to changes in the Windows environment. Create Faster Web 2.0 Experiences with NetAdvantage for ASP .NET Infragistics continues to push the envelope to deliver the fastest ASP .NET WebForms controls available on the market. Our lightning fast ASP .NET grids are now enhanced with XPS/PDF Exporting and Summary Rows. This release also includes support for jQuery Templating (as a CTP) within our WebDataGrid™ and WebDataTree™ controls allowing you to quickly cut down overall page size. Deliver Business Intelligence with Power, Flexibility and the Office 2010 Experience NetAdvantage for WPF Data Visualization and NetAdvantage for Silverlight Data Visualization help you deliver flexible, powerful and usable end user experiences in Business Intelligence applications. Both suites include the Pivot Grid that delivers the full power of online analytical processing (OLAP) to present multi-dimensional data, sliced and diced in cross-tabulated form for end users to drill down into, interact with and easily extract meaning from the data. Mapping Made Easy 10.3 marks the official release of the WPF Data Visualization xamMap™ control to map anything and everything from geographic to geo-spatial mapping data. Map layers allow you to add successive levels of detail, navigational panes for panning in all directions, color swatch panes that facilitate value scales like Choropleth shading, and scale panes allowing users to zoom-in and out. Both toolsets introduce the first of many relationship maps! With the xamOrgChart™ CTP you can map out organizational charts of up to 50K employees, competitive brackets (think World Cup) and any other relational, organizational map your application needs.
http://www.infragistics.com

    Read the article

  • Why would more CPU cores on virtual machine slow compile times?

    - by Sid
    [edit#2] If anyone from VMWare can hit me up with a copy of VMWare Fusion, I'd be more than happy to do the same as a VirtualBox vs VMWare comparison. Somehow I suspect the VMWare hypervisor will be better tuned for hyperthreading (see my answer too) I'm seeing something curious. As I increase the number of cores on my Windows 7 x64 virtual machine, the overall compile time increases instead of decreasing. Compiling is usually very well suited for parallel processing as in the middle part (post dependency mapping) you can simply call a compiler instance on each of your .c/.cpp/.cs/whatever files to build partial objects for the linker to take over. So I would have imagined that compiling would actually scale very well with # of cores. But what I'm seeing is: 8 cores: 1.89 sec 4 cores: 1.33 sec 2 cores: 1.24 sec 1 core: 1.15 sec Is this simply a design artifact due to a particular vendor's hypervisor implementation (type2:virtualbox in my case) or something more pervasive across more VMs to make hypervisor implementations simpler? With so many factors, I seem to be able to make arguments both for and against this behavior - so if someone knows more about this than me, I'd be curious to read your answer. Thanks Sid [edit:addressing comments] @MartinBeckett: Cold compiles were discarded. @MonsterTruck: Couldn't find an open source project to compile directly. Would be great but can't screw up my dev env right now. @Mr Lister, @philosodad: Have 8 hw threads, using VirtualBox, so should be 1:1 mapping without emulation @Thorbjorn: I have 6.5GB for the VM and a smallish VS2012 project - it's quite unlikely that I'm swapping in/out thrashing the page file. @All: If someone can point to an open source VS2010/VS2012 project, that might be a better community reference than my (proprietary) VS2012 project. Orchard and DNN seem to need environment tweaking to compile in VS2012. I really would like to see if someone with VMWare Fusion also sees this (for VMWare vs VirtualBox compartmentalization) Test details: Hardware: Macbook Pro Retina CPU : Core i7 @ 2.3Ghz (quad core, hyper threaded = 8 cores in windows task manager) Memory : 16 GB Disk : 256GB SSD Host OS: Mac OS X 10.8 VM type: VirtualBox 4.1.18 (type 2 hypervisor) Guest OS: Windows 7 x64 SP1 Compiler: VS2012 compiling a solution with 3 C# Azure projects Compile times measured by a VS2012 plugin called 'VSCommands' All tests run 5 times, first 2 runs discarded, last 3 averaged
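    One way to separate the hypervisor's scheduling behaviour from the compiler's would be a purely CPU-bound probe run inside the guest at different degrees of parallelism; if this also gets slower as workers are added, the compiler isn't the culprit. The following is my own rough sketch, not something from the question:

        using System;
        using System.Diagnostics;
        using System.Threading.Tasks;

        class VmScalingProbe
        {
            static void Main()
            {
                const int workItems = 64;
                foreach (int degree in new[] { 1, 2, 4, 8 })
                {
                    var sw = Stopwatch.StartNew();
                    Parallel.For(0, workItems,
                        new ParallelOptions { MaxDegreeOfParallelism = degree },
                        i => BurnCpu());
                    Console.WriteLine("{0} worker(s): {1} ms", degree, sw.ElapsedMilliseconds);
                }
            }

            // Pure CPU work, no I/O, so timings reflect scheduling and core contention only.
            static void BurnCpu()
            {
                double x = 1.0001;
                for (int i = 0; i < 5000000; i++)
                    x = Math.Sqrt(x * x + 1);
            }
        }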

    Read the article

  • How to become a good team player?

    - by Nick
    I've been programming (obsessively) since I was 12. I am fairly knowledgeable across the spectrum of languages out there, from assembly, to C++, to Javascript, to Haskell, Lisp, and Qi. But all of my projects have been by myself. I got my degree in chemical engineering, not CS or computer engineering, but for the first time this fall I'll be working on a large programming project with other people, and I have no clue how to prepare. I've been using Windows all of my life, but this project is going to be very unix-y, so I purchased a Mac recently in the hopes of familiarizing myself with the environment. I was fortunate to participate in a hackathon with some friends this past year -- both CS majors -- and excitingly enough, we won. But I realized as I worked with them that their workflow was very different from mine. They used Git for version control. I had never used it at the time, but I've since learned all that I can about it. They also used a lot of frameworks and libraries. I had to learn what Rails was pretty much overnight for the hackathon (on the other hand, they didn't know what lexical scoping or closures were). All of our code worked well, but they didn't understand mine, and I didn't understand theirs. I hear references to things that real programmers do on a daily basis -- unit testing, code reviews, but I only have the vaguest sense of what these are. I normally don't have many bugs in my little projects, so I have never needed a bug tracking system or tests for them. And the last thing is that it takes me a long time to understand other people's code. Variable naming conventions (that vary with each new language) are difficult (__mzkwpSomRidicAbbrev), and I find the loose coupling difficult. That's not to say I don't loosely couple things -- I think I'm quite good at it for my own work, but when I download something like the Linux kernel or the Chromium source code to look at it, I spend hours trying to figure out how all of these oddly named directories and files connect. It's a programming sin to reinvent the wheel, but I often find it's just quicker to write up the functionality myself than to spend hours dissecting some library. Obviously, people who do this for a living don't have these problems, and I'll need to get to that point myself. Question: What are some steps that I can take to begin "integrating" with everyone else? Thanks!

    Read the article

  • Creating a frozen bubble clone

    - by Vaughan Hilts
    This photo illustrates the environment: http://i.imgur.com/V4wbp.png I'll shoot the cannon, it'll bounce off the wall and it's SUPPOSED to stick to the bubble. It does at pretty much every other angle. The problem is always reproduced here, when hit off the wall into those bubbles. It also exists in other cases, but I'm not sure what triggers it. What actually happens: The ball will sometimes set to the wrong cell, and my "dropping" code will detect it as a loner and drop it off the stage. There are many implementations of "Frozen Bubble" on the web, but I can't for the life of me find a good explanation as to how the algorithm for the "Bubble Sticking" works. I see this: http://www.wikiflashed.com/wiki/BubbleBobble https://frozenbubblexna.svn.codeplex.com/svn/FrozenBubble/ But I can't figure out the algorithms... could anyone possibly explain the general idea behind getting the balls to stick? Code in question:

        //Construct our bounding rectangle for use
        var nX = currentBall.x + ballvX * gameTime;
        var nY = currentBall.y - ballvY * gameTime;
        var movingRect = new BoundingRectangle(nX, nY, 32, 32);
        var able = false;

        //Iterate over the cells and draw our bubbles
        for (var x = 0; x < 8; x++) {
            for (var y = 0; y < 12; y++) {
                //Get the bubble at this layout
                var bubble = bubbleLayout[x][y];
                var rowHeight = 27;
                //If this slot isn't empty, draw
                if (bubble != null) {
                    var bx = 0, by = 0;
                    if (y % 2 == 0) {
                        bx = x * 32 + 270;
                        by = y * 32 + 45;
                    } else {
                        bx = x * 32 + 270 + 16;
                        by = y * 32 + 45;
                    }
                    //Check
                    var targetBox = new BoundingRectangle(bx, by, 32, 32);
                    if (targetBox.intersects(movingRect)) {
                        able = true;
                    }
                }
            }
        }

        cellY = Math.round((currentBall.y - 45) / 32);
        if (cellY % 2 == 0)
            cellX = Math.round((currentBall.x - 270) / 32);
        else
            cellX = Math.round((currentBall.x - 270 - 16) / 32);

    Any ideas are very much welcome. Things I've tried: flooring and ceiling values, changing the wall bounce to a lower value, slowing down the ball. None of these seem to affect it. Is there something in my math I'm not getting?
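    For what it's worth, one common approach in Frozen Bubble style games is not to round the moving ball's own position into a cell (which is what the last few lines above do, and which can land on a cell with no occupied neighbour), but to snap to the empty cell adjacent to the bubble that was actually hit, choosing the one whose centre is closest to the ball. The sketch below is a general illustration of that idea written in C#, reusing the grid constants from the code above; the helper names are mine, not from any particular implementation:

        using System;

        struct Cell
        {
            public int Col, Row;
            public Cell(int col, int row) { Col = col; Row = row; }
        }

        static class BubbleSnap
        {
            // Centre of a cell in the staggered grid used above (odd rows shifted right by 16).
            static void CellCentre(Cell c, out double x, out double y)
            {
                x = 270 + c.Col * 32 + (c.Row % 2 == 0 ? 0 : 16) + 16;
                y = 45 + c.Row * 32 + 16;
            }

            // Given the cell of the bubble that was hit, pick the free neighbouring cell whose
            // centre is closest to the moving ball. Because the result is always a neighbour of
            // an occupied cell, it can never be detected as a "loner" and dropped.
            public static Cell Snap(double ballX, double ballY, Cell hit, Func<int, int, bool> isOccupied)
            {
                // Neighbour offsets differ between even (unshifted) and odd (shifted) rows.
                int[,] even = { { -1, 0 }, { 1, 0 }, { -1, -1 }, { 0, -1 }, { -1, 1 }, { 0, 1 } };
                int[,] odd  = { { -1, 0 }, { 1, 0 }, { 0, -1 }, { 1, -1 }, { 0, 1 }, { 1, 1 } };
                int[,] offsets = (hit.Row % 2 == 0) ? even : odd;

                Cell best = hit;
                double bestDist = double.MaxValue;
                for (int i = 0; i < offsets.GetLength(0); i++)
                {
                    var c = new Cell(hit.Col + offsets[i, 0], hit.Row + offsets[i, 1]);
                    if (c.Col < 0 || c.Col >= 8 || c.Row < 0 || c.Row >= 12) continue;
                    if (isOccupied(c.Col, c.Row)) continue;

                    double cx, cy;
                    CellCentre(c, out cx, out cy);
                    double d = (cx - ballX) * (cx - ballX) + (cy - ballY) * (cy - ballY);
                    if (d < bestDist) { bestDist = d; best = c; }
                }
                return best;
            }
        }

    The same idea carries over directly to the JavaScript above: determine which occupied cell was hit, then search its free neighbours instead of rounding the ball's raw coordinates.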

    Read the article

  • Deploying Oracle ADF Essentials Applications to Glassfish

    - by Shay Shmeltzer
    With the new Oracle ADF Essentials offering you can now deploy applications that leverage Oracle ADF on the open source Glassfish 3.1 server. Deployment is documented in the official JDeveloper and ADF documentation (here) but below is a summary of the steps and a video of the steps you'll need to take to get a basic Oracle ADF Essentials application to work on GlassFish. Note - to make starting/stopping GlassFish easier for my demo I used my GlassFish extension that you can get here. First we'll install some ADF Runtime libraries on GlassFish. Download and install Glassfish (note - if you also have an Oracle DB on the same machine, you'll want to switch GlassFish's HTTP port to something else instead of 8080). Download the Oracle ADF Essentials packaging - this will get you an adf_essentials.zip file. Copy the adf_essentials.zip to the lib directory of your Glassfish domain - on a default Windows install this would be: C:\glassfish3\glassfish\domains\domain1\lib Go to the above lib directory and issue: unzip -j adf_essentials.zip This will extract the ADF libraries to the directory. Now you can start the Glassfish server. Now let's configure Glassfish to handle applications of the ADF type: Invoke the admin console of Glassfish (http://localhost:4848) and log into your admin account. Go to Configurations->Server-config->JVM Settings and choose the JVM Options tab. Add the following entries: -XX:MaxPermSize=512m (note this entry should already exist so just make sure it has a big enough value) and -Doracle.mds.cache=simple While we are in the admin console, we can also define JDBC connections that will be used by our application. Go into Resources->JDBC->JDBC Connection Pools and click to create a New one. Give it a name and choose the resource type to be javax.sql.XADataSource and choose Oracle as the Database Driver vendor. Click Next. Scroll down to the Additional Properties section and start filling in the information for your database. The values for an Oracle XE will be (user=hr, databaseName = XE, Password=hr, ServerName=localhost, DriverType=thin, PortNumber=1521). Click Finish. Click Ping to check that your connection works. Now define a new JDBC Resource that will use the pool you just defined. In my example I called the resource jdbc/HRDS. You will need this name to match the name in your Application Module connection configuration. Now you can restart the Glassfish server for the changes to take effect. Get an ADF application going (you can use the regular Fusion Application template for this). Go into the project properties of your viewController project; under the deployment section click to edit the deployment profile that is defined there. Go to Platform and choose Glassfish 3.1 from the drop down list. Click OK to go back to your project. Go to Application -> Application Properties -> Deployment. Go to Platform and choose Glassfish 3.1 from the drop down list. Click OK to go back to your project. This step will make sure that JDeveloper will automatically add the necessary ADF libraries to the EAR file that is being generated for deployment on Glassfish. Go to your Application->Deploy and deploy either to an EAR file or directly to a Glassfish server connection that you created. Things should just work, but if they don't then look up the server.log in the log directory and check out what error is in there.
Here is a video demo of the various steps. Note - right now the deployment of an ADF application takes about 2 minutes on my machine; we are hoping to be able to improve this timing in the future. People who are more familiar with Glassfish might want to explore using exploded directory deployment and see if they can get it to work.

    Read the article

  • Building a Solaris 11 repository without network connection

    - by user12611852
    Solaris 11 has been released and is a fantastic new iteration of Oracle's rock solid, enterprise operating system. One of the great new features is the repository-based Image Packaging System. IPS not only introduces new cloud based package installation services, it is also integrated with our zones, boot environment and ZFS file systems to provide a safe, easy and fast way to perform system updates. My customers typically don't have network access and, in fact, can't connect to any network until they have "Authority to connect." It's useful, however, to build up a Solaris 11 system with additional software using the new Image Packaging System and a locally stored repository. The Solaris 11 documentation describes how to create a locally stored repository with full explanations of what the commands do; I'm simply providing the quick and dirty steps. The easiest way is to download the ISO image, burn it to a DVD and insert it into your DVD drive. Then as root:

        pkg set-publisher -G '*' -g file:///cdrom/sol11repo_full/repo solaris

    Now you can install software using the GUI package manager or the pkg commands. If you would like something more permanent (or don't have a DVD drive), however, it takes a little more work. After installing Solaris 11, download (on another system perhaps) the two files that make up the Solaris 11 repository from our download site. Sneaker-net the files to your Solaris 11 system. Unzip and cat the two files together to create one large ISO image; the file is about 6.9 GB in size. Then:

        zfs create rpool/export/repoSolaris11
        zfs set atime=off rpool/export/repoSolaris11
        zfs set compression=on rpool/export/repoSolaris11      (save some space)
        lofiadm -a sol-11-1111-repo-full.iso /dev/lofi/1
        mount -F hsfs /dev/lofi/1 /mnt

    You could stop here and set the publisher to point to the /mnt/repo location; however, this mount will not be persistent across reboots. Copy the repository from the mounted ISO image to a permanent, on-disk location:

        rsync -aP /mnt/repo /export/repoSolaris11
        pkgrepo -s /export/repoSolaris11 refresh
        pkg set-publisher -G '*' -g /export/repoSolaris11/repo solaris

    You now have a locally installed repository for adding additional software packages for Solaris 11. The documentation also takes you through publishing your repository on the network so that others can access it.

    Read the article

  • Support ARMv7 instruction set in Windows Embedded Compact applications

    - by Valter Minute
    One of the most interesting new features of Windows Embedded Compact 7 is support for the ARMv5, ARMv6 and ARMv7 instruction sets instead of the ARMv4 “generic” support provided by the previous releases. This means that code built for Windows Embedded Compact 7 can leverage features (like the FPU unit for ARMv6 and v7) and instructions of the recent ARM cores and improve its performance. Those improvements are noticeable in graphics, floating point calculation and data processing. The ARMv7 instruction set is supported by the latest Cortex-A8, A9 and A15 processor families. Those processors are currently used in tablets, smartphones, in-car navigation systems and provide a great amount of processing power and a low amount of electric power, making them very interesting for portable devices but also for any kind of device that requires a rich user interface, processing power, connectivity and has to keep its power consumption low. The bad news is that the compiler provided with Visual Studio 2008 does not provide support for ARMv7, building native applications using just the ARMv4 instruction set. Porting a Visual Studio “Smart Device” native C/C++ project to Platform Builder is not easy and you’ll lack many of the features that the VS2008 application development environment provides. You’ll also need access to the BSP and OSDesign configuration for your device to be able to build and debug your application inside Platform Builder and this may prevent independent software vendors from using the new compiler to improve their applications’ performance. Adeneo Embedded now provides a whitepaper and a Visual Studio plug-in that allows usage of the new ARMv7-enabled compiler to build applications inside Visual Studio 2008. I worked on the whitepaper and the tools, with the help of my colleagues, and now the results can be downloaded from Adeneo Embedded’s website: http://www.adeneo-embedded.com/OS-Technologies/Windows-Embedded (Click on the “WEC7 ARMv7 Whitepaper” tab to access the download links; free registration required) A very basic benchmark showed a very good performance improvement in integer and floating-point operations. Obviously your mileage may vary and we can’t promise the same amount of improvement on any application, but with a small effort on your side (even smaller if you use the plug-in) you can try on your own application. ARMv7 support is provided using Platform Builder’s compiler and the VS2008 application debugger is not able to debug ARMv7 code, so you may need to put in place some workarounds, like keeping ARMv4 code for debugging, etc.

    Read the article

  • Current State EA: Focus on the Integration!!!

    - by Eric A. Stephens
    A recent project has me at the front end of a large implementation effort covering multiple software components. In addition to the challenges of integrating 15-20 separate and new software components there is the challenge of integrating the portfolio into an existing environment. Like other clients I've worked with and other environments I've worked in for many years, this is typical. The applications are undocumented and under-patched, leading to a mystery for any architect leading change. We can boil down most architecture development methodologies (ADM) into first understanding the current/baseline state and then envisioning one or more future states. Many pundits emphasize the need to focus on the future/target states. I agree, since enterprise architecture (EA) is about where you are going and not so much where you have been. But to be effective in the future, I contend some focused time needs to be spent on the current state. And specifically on the integration. Integration is always the difficult part of a project (I might put it more coarsely at a cocktail party). While I don't have a case study, my anecdotal experience suggests poorly integrated application portfolios tend to cost more to operate and create entropy when trying to respond to new changes and opportunities. In the aforementioned project, I was able to get one of our EAs assigned to focus on just integration almost immediately. While we're still early in the process, this EA is uncovering all sorts of information that will greatly assist our future state planning for this solution. This information is driving early decision making that we anticipate will accelerate our efforts moving forward.

    Read the article

  • How do you automate a SharePoint 2010 deployment?

    - by Enrique Lima
    In the last couple of months SharePoint traffic (consulting, training and speaking) has picked up.  And with that also the requests for deployments.  There are good, great, bad and really bad things around this. But that is for another topic.  However part of the good and great has been the fact of organizations wanting to do a proof of concept deployment (even when WSS or MOSS has been deployed). We can go through a session (Microsoft has the SDPS concept, SharePoint Deployment Planning Services) of discovering what the customer wants to achieve from their investment in the platform and then also proceed to model the solution that would fit their needs.  But it should not stop there.  The next step should be a POC (as many have requested) to test out. Now, on to the meat of this post.  How do I deploy?  While it is a good process to watch and see all of it take place, not many have the time to sit through that.  Even more so, when that has been part of the description of deploying the platform in the sessions mentioned above. I will, though, break it into a deployment for development purposes and a deployment of a farm. Two tools (or scripts) for those two different types of deployment. First, let me address the development environment.  Around the last week in October, Chris Johnson (SharePoint Product Team) announced a SharePoint Easy Setup for Developers.  The kit itself will assist you in installing SharePoint Server (in standalone mode), the tools that go around Visual Studio, Expression Studio and the Office 2010 tools. Here is the link to Chris’ post: http://blogs.msdn.com/b/cjohnson/archive/2010/10/28/announcing-sharepoint-easy-setup-for-developers.aspx The other scenario is the use of a script in assisting you through the deployment of a farm. Now, this is not to override planning.  It should highlight the need for planning even more.  How?  Having your service accounts planned, the structure of the sites and the scale of your deployment.  Enter AutoSPInstaller.  This is a CodePlex project, and the intent behind this is not only to automate the installation but to give some meaning and get some sense out of what goes on during a SharePoint deployment. How?  Take for example the creation of the databases, when we do the initial OOB deployment by using the wizard, more times than not, we leave the names as they are.  How is that a “bad thing”?  Let’s make it a better practice to rename those Databases, and have them take on a name that is not “GUID-ized”. Having a better naming convention will not hurt, on the other hand will allow for consistency. Here is the link to AutoSPInstaller’s site on CodePlex: http://autospinstaller.codeplex.com/

    Read the article

  • Resources to Help You Getting Up To Speed on ADF Mobile

    - by Joe Huang
    Hi, everyone: By now, I hope you have had a chance to review the sample applications and try to deploy them. This is a great way to get started on learning ADF Mobile. To help you get started, here is a central list of "steps to get up to speed on ADF Mobile" and related resources that can help you develop your mobile application. Check out the ADF Mobile Landing Page on the Oracle Technology Network. View this introductory video. Read this Data Sheet and FAQ on ADF Mobile. JDeveloper 11.1.2.3 Download. Download the generic version of JDeveloper for installation on Mac. Note that there are workarounds required to install JDeveloper on a Mac. Download the ADF Mobile Extension from the JDeveloper Update Center or Here. Please note you will need to configure JDeveloper for Internet access (in HTTP Proxy preferences) in order to install the extension, as the installation process will prompt you for a license that's linked off Oracle's web site. View this end-to-end application creation video. View this end-to-end iOS deployment video if you are developing for iOS devices. Configure your development environment, including the location of the SDK, etc., in the JDeveloper-Tools-Preferences-ADF Mobile dialog box. The two videos above should cover some of these configuration steps. Check out the sample applications shipped with JDeveloper, and then deploy them to simulator/devices using the steps outlined in the video above. This blog entry outlines all sample applications shipped with JDeveloper. Develop a simple mobile application by following this tutorial. Try out the Oracle Open World 2012 Hands on Lab to get a sense of how to programmatically access server data. You will need these source files. Ask questions in the ADF/JDeveloper Forum. Search the ADF Mobile Preview Forum for entries from ADF Mobile Beta Testing participants. For all other questions, check out this exhaustive and detailed ADF Mobile Developer Guide. If something does not seem right, check out the ADF Mobile Release Note. Thanks, Oracle ADF Mobile Product Management Team

    Read the article

  • The battle between Java vs. C#

    The battle between Java vs. C# has been a big debate amongst the development community over the last few years. Both languages have specific pros and cons based on the needs of a particular project. In general both languages utilize a similar coding syntax that is based on C++, and offer developers similar functionality. This being said, the communities supporting each of these languages are very different. The divide amongst the communities is much like the political divide in America, where the Java community would represent the Democrats and the .NET community would represent the Republicans. The Democratic Party is a proponent of the working class and the general population. Currently, Java is deeply entrenched in the open source community and is distributed freely to anyone who has an interest in using it. Open source communities rely on developers to keep them alive by constantly contributing code to make applications better; essentially, code is developed by the community. This is in stark contrast to the C# community, which is typically a pay-to-play community, meaning that you must pay for code that you want to use because it is developed as products to be marketed and sold for a profit. This ties back into my reference to the Republicans because they typically represent the needs of business and personal responsibility. This is emphasized by the belief that code is a commodity and that it can be sold for a profit, which is in direct conflict with the laissez-faire beliefs of the open source community. Beyond the general differences between Java and C#, they also target two different environments. Java is developed to be environment independent and only requires that users have a Java virtual machine running in order for the Java code to execute. C#, on the other hand, typically targets any system running a Windows operating system that has the appropriate version of the .NET Framework installed. However, recently there has been a push by a segment of the open source community, based around the Mono project, that lets C# code run on other non-Windows operating systems. In addition, another feature of C# is that it compiles into an intermediate language, and this is what is executed when the program runs. Because C# is reduced down to the Common Intermediate Language (CIL), which is executed by the Common Language Runtime (CLR), it can be combined with other languages that are also compiled to CIL, like Visual Basic (VB) .NET and F#. The interaction between multiple languages in the .NET Framework enables projects to utilize existing code bases regardless of the actual syntax, because they can all be compiled to CIL and executed as one codebase. As a software engineer I personally feel that it is really important to learn as many languages as you can, or at least be open to learning as many languages as you can, because no one language will work in every situation. In some cases Java may be a better choice for a project and in others it may be C#. It really depends on the requirements of a project and the time constraints. In addition, I feel that it is really important to concentrate on understanding the logic of programming and to be able to translate business requirements into technical requirements. If you can understand both programming logic and business requirements, then deciding which language to use is basically just choosing what syntax to write for a given business problem or need. In regards to code refactoring and dynamic languages, it really does not matter.
Eventually all projects will be refactored or decommissioned to allow for progress. This is the way of life in the software development industry. The language of a project should not be chosen based on the fact that a project will eventually be refactored because they all will get refactored.
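As a small, self-contained illustration of the intermediate-language point above (my own example, not from the original article): every .NET language compiles to the same IL, and that IL is what the CLR loads and executes, which is also why assemblies written in C#, VB.NET or F# can be inspected and consumed interchangeably.

    using System;
    using System.Reflection;

    class Program
    {
        static void Main()
        {
            // The running assembly is IL plus metadata, regardless of the source language.
            Assembly current = Assembly.GetExecutingAssembly();
            Console.WriteLine("CLR image runtime version: " + current.ImageRuntimeVersion);

            // Reflection can even hand back the raw IL bytes of a method body.
            MethodInfo main = typeof(Program).GetMethod("Main",
                BindingFlags.Static | BindingFlags.NonPublic);
            byte[] il = main.GetMethodBody().GetILAsByteArray();
            Console.WriteLine("IL body size of Main: " + il.Length + " bytes");
        }
    }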

    Read the article

  • SQL 2014 does data the way developers want

    - by Rob Farley
    A post I’ve been meaning to write for a while, good that it fits with this month’s T-SQL Tuesday, hosted by Joey D’Antoni (@jdanton). Ever since I got into databases, I’ve been a fan. I studied Pure Maths at university (as well as Computer Science), and am very comfortable with Set Theory, which undergirds relational database concepts. But I’ve also spent a long time as a developer, and appreciate that databases don’t exactly fit within the stuff I learned in my first year of uni, particularly the “Algorithms and Data Structures” subject, in which we studied concepts like linked lists. Writing in languages like C, we used pointers to quickly move around data, without a database in sight. Of course, if we had a power failure all this data was lost, as it was only persisted in RAM. Perhaps it’s why I’m a fan of database internals, of indexes, latches, execution plans, and so on – the developer in me wants to be reassured that we’re getting to the data as efficiently as possible. Back when SQL Server 2005 was approaching, one of the big stories was around CLR. Many were saying that T-SQL stored procedures would be a thing of the past because we now had CLR, and that it was obviously going to be much faster than using the abstracted T-SQL. Around the same time, we were seeing technologies like Linq-to-SQL produce poor T-SQL equivalents, and developers had had a gutful. They wanted to move away from T-SQL, having lost trust in it. I was never one of those developers, because I’d looked under the covers and knew that despite being abstracted, T-SQL was still a good way of getting to data. It worked for me, appealing to both my Set Theory side and my Developer side. CLR hasn’t exactly become the default option for stored procedures, although there are plenty of situations where it can be useful for getting faster performance. SQL Server 2014 is different though, through Hekaton – its In-Memory OLTP environment. When you create a table using Hekaton (that is, a memory-optimized one), the table you create is the kind of thing you’d’ve made as a developer. It creates code in C leveraging structs and pointers and arrays, which it compiles into fast code. When you insert data into it, it creates a new instance of a struct in memory, and adds it to an array. When the insert is committed, a small write is made to the transaction log to make sure it’s durable, but none of the locking and latching behaviour that typifies transactional systems is needed. Indexes are done using hashes and using bw-trees (which avoid locking through the use of pointers) and by handling each update as a delete-and-insert. This is data the way that developers do it when they’re coding for performance – the way I was taught at university before I learned about databases. Being done in C, it compiles to very quick code, and although these tables don’t support every feature that regular SQL tables do, this is still an excellent direction that has been taken. @rob_farley
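    To make the “data the way developers do it” idea concrete, here is a deliberately naive C# sketch of the shape described above: rows as structs appended to an array, reached through a hash index of offsets, with updates handled as append-a-new-version. It is my own toy illustration of the concept, not how Hekaton is actually implemented.

        using System.Collections.Generic;

        // Toy illustration only: no pages, no latches, no buffer pool.
        struct OrderRow
        {
            public int OrderId;
            public int CustomerId;
            public decimal Amount;
        }

        class ToyMemoryTable
        {
            private readonly List<OrderRow> _rows = new List<OrderRow>();
            // Hash index: key -> offset of the current row version in the array.
            private readonly Dictionary<int, int> _hashIndex = new Dictionary<int, int>();

            public void Insert(OrderRow row)
            {
                _rows.Add(row);
                _hashIndex[row.OrderId] = _rows.Count - 1;
            }

            // "Update" as delete-and-insert: append a new version and repoint the index,
            // loosely mirroring the versioning behaviour described in the post.
            public void Update(OrderRow newVersion)
            {
                _rows.Add(newVersion);
                _hashIndex[newVersion.OrderId] = _rows.Count - 1;
            }

            public OrderRow Lookup(int orderId)
            {
                return _rows[_hashIndex[orderId]];
            }
        }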

    Read the article

  • [EF + Oracle] Intro

    - by JTorrecilla
    Prologue: I have been busy both personally and at work, and now that I am starting to get more free time I have decided to start a series about Entity Framework with Oracle. A while ago I had my first experience with EF and Oracle, using Oracle 10g Express and Oracle 10g, with the same result each time: it didn't work. Now I have downloaded Oracle 11g to test again. Tools: To start using EF with Oracle we need the following: 1. Visual Studio 2010 (not the Express edition). 2. Oracle 11g. 3. The Oracle driver for EF (ODAC). Intro: For people who are starting with EF development, I recommend taking a look at Unai Zorrilla's blog; the posts were written in Spanish but they are great! For this series, we are going to define the DB from the Oracle administrator. For that we need to follow these steps: 1. Create a user with a password; in my example the user will be Jtorrecilla. 2. Create a tablespace. 3. Define some example tables. (Image 1) Once we have created the DB, we are going to start a new project in VS 2010; I will start a C# project. To start with EF, we need to add a new object to our project: "ADO .NET Entity Data Model". (Image 2) The next step will be to indicate that our model will be based on an existing DB, and to indicate the connection string (Images 3 and 4): (Image 3) (Image 4) Once we have selected the connection string, we will need to indicate that "sensitive" data will be saved in the connection (Image 5), and in the next step we are going to select the DB objects to use in the project (Image 6). (Image 5) (Image 6) At the end of the selection, we press the Finish button and it will generate an EDMX file to add to our solution; the DB schema with the selected tables and relations will appear in the IDE. (Image 7) An Entity is composed of a set of properties (each one matching a column of the table in the DB) and Navigation Properties that represent any relations with other Entities. Finally: With this chapter we have installed the environment, defined a DB and configured the solution to start using EF with Oracle. In the next chapter we are going to see what an Entity is and how it works. I hope you enjoy this series!
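    Once the EDMX has been generated, querying the imported Oracle tables from C# uses the usual Entity Framework patterns. A minimal sketch, assuming the designer produced a context called JTorrecillaEntities with an EMPLOYEES entity set (these names are illustrative; use whatever your own model generated):

        using System;
        using System.Linq;

        class Program
        {
            static void Main()
            {
                using (var context = new JTorrecillaEntities())
                {
                    // LINQ to Entities query against the Oracle-backed model
                    var recentHires = from emp in context.EMPLOYEES
                                      where emp.HIRE_DATE >= new DateTime(2010, 1, 1)
                                      orderby emp.LAST_NAME
                                      select emp;

                    foreach (var emp in recentHires)
                        Console.WriteLine("{0} {1}", emp.FIRST_NAME, emp.LAST_NAME);
                }
            }
        }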

    Read the article

  • Enterprise Manager 12c: New DSS Demos Available

    - by Javier Puerta
    Enterprise Manager Cloud Control 12c Application Replay Demo Now Available! User Experience Monitoring with Enterprise Manager Cloud Control 12c and Real User Experience Insight 12R1 Now Available! Oracle Enterprise Manager Cloud Control 12c: Database Management Packs demo upgrade Enterprise Manager Cloud Control 12c Application Replay Demo Now Available! We are pleased to announce the availability of the Oracle Application Replay demo that showcases some of the key capabilities of performing realistic, production-scale testing of your web and packaged Oracle applications. This demo specifically focuses on capturing production web traffic from an E-Business Suite application and replaying the captured workload on a test E-Business Suite application to assess the impact of an application infrastructure change on the workload. The target audiences are application developers, quality assurance teams, IT managers and production control staff who deal with day-to-day change management activities and troubleshooting of production environments. Demo Highlights: Enterprise Manager 12c workflows for capturing application workload Seamless integration of Application Replay with Real User Experience Insight for application workload capture Enterprise Manager 12c centralized workflows for replaying captured application workloads in a test environment Demonstrates how to minimize risk when deploying a complex E-Business Suite application infrastructure change Rich reporting capability for performance analysis and problem detection User Experience Monitoring with Enterprise Manager Cloud Control 12c and Real User Experience Insight 12R1 Now Available! We are pleased to announce the availability of the Oracle Real User Experience Insight demo that showcases some of the key capabilities of user experience monitoring. This demo specifically focuses on business reporting, integrated performance diagnostics, tracking of customer journeys through RUEI’s userflow tracking capabilities, and its Key Performance Indicators tracking and configuration. Demo Highlights: Application-centric dashboard Integration with Oracle Enterprise Manager 12c – JVMD, ADP and BTM Session diagnostics and user session replay Monitoring through “Key Performance Indicators” (KPIs) – create alerts/incidents Fusion Applications-centric dashboards and integrated BI Oracle Enterprise Manager Cloud Control 12c: Database Management Packs demo upgrade DSS is pleased to announce an upgrade to the Oracle Enterprise Manager Cloud Control 12c: Database Management Packs demo. While retaining the content from the initial release of the demo (Diagnostic and Tuning Packs, Test Data Management and Data Masking, and Real Application Testing), the demo now includes a new Data Masking for Real Application Testing scenario. Demo Features: Diagnostic and Tuning Packs SQL Performance Analyzer Database Replay Data Masking Masking Real Application Testing workloads Testing pending Optimizer statistics Test Data Management

    Read the article

  • Obtaining the correct Client IP address when a Physical Load Balancer and a Web Server Configured With Proxy Plug-in Are Between The Client And Weblogic

    - by adejuanc
    Some load balancers, like BIG-IP, have built-in interoperability with WebLogic Cluster; this means they know that WebLogic understands a header named 'WL-Proxy-Client-IP' to identify the original client IP. The problem comes when you have a web server configured with the WebLogic proxy plug-in between the load balancer and the back-end WebLogic servers: WL-Proxy-Client-IP is not designed to pass through the web server proxy plug-in. To prevent IP spoofing, the plug-in will not use a WL-Proxy-Client-IP header that came in from the previous hop (which in this case is the physical load balancer, but could be anything), and therefore it won't pass on what the load balancer has set for it. So, unfortunately, under this architecture the header will be useless. To get the client IP from WebLogic you need to configure the extended log format and create a custom field that reads the appropriate header containing the IP of the client. On WLS versions prior to 10.3.3 use these instructions: You can create user-defined fields for inclusion in an HTTP access log file that uses the extended log format. To create a custom field you identify the field in the ELF log file using the Fields directive and then you create a matching Java class that generates the desired output. You can create a separate Java class for each field, or the Java class can output multiple fields. For a sample of the Java source for such a class, see "Java Class for Creating a Custom ELF Field":

        import weblogic.servlet.logging.CustomELFLogger;
        import weblogic.servlet.logging.FormatStringBuffer;
        import weblogic.servlet.logging.HttpAccountingInfo;

        /* This example outputs the X-Forwarded-For header into a custom field called MyOriginalClientIPField */
        public class MyOriginalClientIPField implements CustomELFLogger
        {
            public void logField(HttpAccountingInfo metrics, FormatStringBuffer buff)
            {
                buff.appendValueOrDash(metrics.getHeader("X-Forwarded-For"));
            }
        }

    In this case we are using 'X-Forwarded-For', but it can be changed to whatever header contains the data you need. Compile the class, jar it, and prepend it to the classpath. In order to compile and package the class:
    1. Navigate to <WLS_HOME>/user_projects/domains/<SOME_DOMAIN>/bin
    2. Set up an environment by executing: $ . ./setDomainEnv.sh  This will include weblogic.jar in the classpath, in order to use any of the libraries included under the weblogic.* package.
    3. Copy the content of the code above into a file named MyOriginalClientIPField.java
    4. Run javac to compile the class: $ javac MyOriginalClientIPField.java
    5. Package the compiled class into a jar file by executing: $ jar cvf0 MyOriginalClientIPField.jar MyOriginalClientIPField.class  Expected output is: added manifest adding: MyOriginalClientIPField.class(in = 711) (out= 711)(stored 0%)
    6. This will produce a file called MyOriginalClientIPField.jar
    This way you will be able to get the real client IP when the request passes through a load balancer and a web server before reaching WLS. Since 10.3.3 it is possible to configure a specific header that WLS will check when getRemoteAddr is called. That can be set on the WebServer MBean. In this case, set it to the X-Forwarded-For header coming from the load balancer as well.

    Read the article

  • What's Bringing SharePoint 2007 Server to a Halt?

    - by juanlarios
    I've been having issues with my test environment and I'm hoping someone has run into this problem and can point me in the right direction. I noticed: SharePoint Server memory usage is through the roof at times and so is the CPU usage. Most of the CPU usage is a SQL process. I am running out of disk space all the time. I looked in the logs located in the 12 hive and sure enough I have 1 GB log files that are hard to open because of their size. The following are the 3 error messages that are flooding my SharePoint logs:
    04/05/2010 16:02:36.99  OWSTIMER.EXE (0x0B94)  0x0BA4  Windows SharePoint Services  Timer  5uuf  Monitorable  The previous instance of the timer job 'Variations Propagate Page Job Definition', id '{F9A73EB4-90FE-4574-AD99-B4034056F915}' for service '{F89169F9-707B-4588-9ED0-E6D399FE5E3D}' is still running, so the current instance will be skipped. Consider increasing the interval between jobs.
    04/05/2010 15:59:51.51  OWSTIMER.EXE (0x0B94)  0x0BA4  Windows SharePoint Services  Timer  5uuf  Monitorable  The previous instance of the timer job 'Profile Synchronization', id '{A05E3439-8DCD-449A-9D9E-46D601CACAA2}' for service '{F89169F9-707B-4588-9ED0-E6D399FE5E3D}' is still running, so the current instance will be skipped. Consider increasing the interval between jobs.
    04/05/2010 15:56:25.53  OWSTIMER.EXE (0x0B94)  0x0BA4  Windows SharePoint Services  Timer  5uuf  Monitorable  The previous instance of the timer job 'Scheduled Unpublish', id '{6298F93F-388D-46B9-809E-CEDBB8659661}' for service '{F89169F9-707B-4588-9ED0-E6D399FE5E3D}' is still running, so the current instance will be skipped. Consider increasing the interval between jobs.
    04/05/2010 15:54:14.73  OWSTIMER.EXE (0x0B94)  0x0BA4  Windows SharePoint Services  Timer  5uuf  Monitorable  The previous instance of the timer job 'Config Refresh', id '{C42DA970-3DA3-4AA2-94E5-8499C5B80A3E}' for service '{7F6D2CBE-8071-4A30-B313-7C9989FC2D87}' is still running, so the current instance will be skipped. Consider increasing the interval between jobs.
    I'm googling around but haven't found much. I know one other person posted something about this back in 2008, but no answers were reached. I have already checked the databases to see if any of them have gone offline for whatever reason, but from SQL everything is fine. I recently re-created an SSP and deleted an old SSP, so I thought maybe that was causing it; who knows, maybe it causes some of the problems, or all of them. I'm running the configuration wizard to see if anything changes. Please let me know if anyone has had similar issues.

    Read the article

  • Paranoid management, contractor checking work [closed]

    - by user833345
    Just wanted to get some opinions and experiences on an issue I'm having at work. First, a little background. I've been working at a company for some time (past any probation periods) and rewriting a horrendous system. No tests, incomplete and broken functionality everywhere, enough copypasta to feed a small village, redundant code, more unused SQL tables than used ones and terrible performance. I've never seen such bad code, pretty much all of it is worthy of being posted on TheDailyWTF. The company has been operating for a number of years and have had a string of bad developers working on this system. I made the call to rewrite instead of refactor, since I judged it to be less work overall and decided that the result would address the requirements more appropriately, since the central requirement is to have a future-proof system for the next decade with plenty of room to scale up. Refactoring would have entailed untangling a huge ball of yarn and at the same time integrating it with a proper foundation or building a foundation from scratch. I've introduced the latest spiffy framework, unit & functional testing, CI, a bug tracker and agile workflow to the environment. I've fixed most of the performance issues of the old system (there were no indexes on any of the tables, for example). I've created an automated deployment process for the old system. The CTO has been maintaining the old system while I have been building the new one, and he has been advising management that everything is being done as per best practices. However, management is hiring a contractor to come in and verify my work. In my experience, this is unprecedented. I can understand their reasoning to an extent, since they've had bad luck in the past, but can't help but feel somewhat offended at the fact that they distrust two senior developers who have been working with them for some time, to the point that a third party is being brought in. And it's not just me who is under watch - people's emails are constantly checked, someone had a remote desktop application installed on their computer and I was asked to check its usage logs to try to determine whether they were stealing sensitive data, and there are CCTV cameras in one of the rooms. It's the first time I've decided to disable my Skype history at work. Am I right to feel indignant here? Has anyone else ever encountered such a situation? If so, how did it work out in the end? Was it worth sticking around? Should I just find another job?

    Read the article

  • My .NET Technology picks for 2011

    - by shiju
    My Technology predictions for 2011 Cloud computing and mobile application development will be the hottest trends for 2011. I expect that Windows Azure will be very hot in 2011 and that a lot of cloud computing adoption will happen with Windows Azure in 2011. Web application scalability will be the big challenge for architects in the next year, and architecture approaches like CQRS will get some attention. Architects will look at different options for web application scalability, and adoption of NoSQL and document databases will grow in 2011. The following are my technology picks for the .NET stack. Windows Azure Windows Azure will be one of the hottest technologies of 2011. Adoption of the cloud and Windows Azure will get big attention next year. The Windows Azure platform is a flexible cloud-computing platform that lets you focus on solving business problems and addressing customer needs. No need to invest upfront on expensive infrastructure. Pay only for what you use, scale up when you need capacity and pull it back when you don't. Microsoft handles all the patches and maintenance, all in a secure environment with over 99.9% uptime. Silverlight 5 Silverlight is becoming a common technology for a variety of development platforms. You can develop Silverlight applications for the web, the desktop and Windows Phone. The new Silverlight 5 beta will be available during the first quarter of next year with new capabilities and lots of new features. Silverlight 5 will be a powerful development platform for both web-based business apps and rich media solutions. We can expect the final version of Silverlight 5 at the end of 2011. Windows Phone 7 Development Tools Mobile application development will be very hot in 2011 and Windows Phone 7 will be one of the hottest technologies of next year. You can get an introduction to the Windows Phone 7 Development Tools from somasegar's blog post, and MSDN documentation is available from here. EF Code First I am a big fan of Entity Framework's Code First approach and hope that the Code First approach will attract more people to Entity Framework 4. EF Code First lets you focus on the domain model, which enables Domain-Driven Development for applications. I hope that DDD fans will love the EF Code First approach. Entity Framework 4 now supports three approaches, and these will attract different developer audiences. ASP.NET MVC 3 ASP.NET MVC 3 will be the hottest technology of the Microsoft web stack next year. ASP.NET developers will widely move to the ASP.NET MVC Framework from their WebForms development. The new Razor view engine is great and it will increase the adoption of ASP.NET MVC 3. Razor will improve productivity when working with ASP.NET MVC 3 views. You can build great web applications using ASP.NET MVC 3 and jQuery with better maintainability, generation of clean HTML and even better performance. In my opinion, the best technology stack for web development is ASP.NET MVC 3 with Entity Framework 4 Code First as the ORM. Next year, you can expect more articles from my blog on ASP.NET MVC 3 and Entity Framework 4 Code First. RavenDB NoSQL and document databases will get more attention in the coming year and RavenDB will be the most notable document database in the .NET stack. RavenDB is an open source (with a commercial option) document database for the .NET/Windows platform developed by Ayende Rahien. RavenDB is a .NET-focused document database that comes with a fully functional .NET client API and supports LINQ.
    I have written a few articles on RavenDB, and you can read them here. Managed Extensibility Framework (MEF) Many people haven't realized the power of MEF. MEF lets you create extensible applications and provides a great solution for the runtime extensibility problem; a minimal sketch is shown below. I hope that .NET developers will adopt MEF more widely next year for their .NET applications. You can get an excellent introduction to MEF from Anoop Madhusudanan's blog post MEF or Managed Extensibility Framework – Creating a Zoo and Animals
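    For readers who haven't tried MEF, here is a minimal C# sketch of the attribute-based model in .NET 4's System.ComponentModel.Composition: a part exports a contract, a host imports it, and a CompositionContainer wires them together at runtime. The IGreeter/FriendlyGreeter names are invented for the example; a real application would typically use a DirectoryCatalog so that plug-ins dropped into a folder are discovered without recompiling the host.

        using System;
        using System.ComponentModel.Composition;
        using System.ComponentModel.Composition.Hosting;
        using System.Reflection;

        public interface IGreeter
        {
            string Greet(string name);
        }

        // A part: exports an implementation of the IGreeter contract.
        [Export(typeof(IGreeter))]
        public class FriendlyGreeter : IGreeter
        {
            public string Greet(string name) { return "Hello, " + name + "!"; }
        }

        public class Host
        {
            // The container satisfies this import from whatever parts the catalog finds.
            [Import]
            public IGreeter Greeter { get; set; }

            public static void Main()
            {
                var host = new Host();
                using (var catalog = new AssemblyCatalog(Assembly.GetExecutingAssembly()))
                using (var container = new CompositionContainer(catalog))
                {
                    container.ComposeParts(host);
                }
                Console.WriteLine(host.Greeter.Greet("MEF"));
            }
        }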

    Read the article
