Search Results

Search found 71854 results on 2875 pages for 'build time'.

Page 337/2875

  • Unity3D - Android pause screen - double click issue

    - by user3666251
    I made a pause script for the game I'm developing for Android. I added the script to a GUITexture I created and placed in the top-right corner of the screen. The issue is that if the player clicks the pause button, then clicks Resume, and then wants to pause the game again, the buttons don't show up when he clicks pause the second time unless he clicks again. This is the script:

        #pragma strict

        var paused = false;
        var isButtonVisible : boolean = true;

        function OnMouseDown() {
            this.paused = !this.paused;
            Time.timeScale = 0;
            isButtonVisible = true;
        }

        function OnGUI() {
            if (isButtonVisible) {
                if (this.paused) {
                    if (GUI.Button(Rect(Screen.width/2-100, Screen.height/2+3, 200, 50), "Restart")) {
                        Application.LoadLevel(Application.loadedLevel);
                        Time.timeScale = 1;
                        isButtonVisible = false;
                    }
                    if (GUI.Button(Rect(Screen.width/2-100, Screen.height/2-50, 200, 50), "Resume")) {
                        Time.timeScale = 1;
                        isButtonVisible = false;
                    }
                    // Insert the rest of the pause menu logic
                    if (GUI.Button(Rect(Screen.width/2-100, Screen.height/2+56, 200, 50), "Main Menu")) {
                        Application.LoadLevel("MainMenu");
                        isButtonVisible = false;
                        Time.timeScale = 1;
                    }
                }
            }
        }

    Thank you.

    Read the article

  • Profiling Startup Of VS2012 &ndash; YourKit Profiler

    - by Alois Kraus
    The YourKit (v7.0.5) profiler is interesting in terms of price (79€ for a single-seat license, 409€ with 1 year of support and upgrades) and feature set. You get a performance and memory profiler in one package, something the other vendors normally charge extra for. As an interesting side note, the profiler UI is written in Java, because they also sell Java profilers with the same feature set. To see all methods of a VS startup, you first need to configure it to include System* in the profiled methods, and you need to configure * to measure wall clock time. By default it records only CPU times, which lets you optimize CPU-hungry operations. But in this mode you will never see a Thread.Sleep(10000) blocking the UI. Like the others, it can profile processes started from within the profiler, but it can also profile the next started process or all started processes. As usual it can profile in sampling and tracing mode. Since it is a memory profiler as well, it also records, by default, all object allocations > 1MB. With allocation recording enabled VS2012 did crash, but without allocation recording there were no problems. The CPU tab contains the time line of the application, and when you click in the graph you see the call stacks of all threads at that point in time. This is really a nice feature. When you select a time region you get the CPU usage estimation for this time window. I have seen many applications consuming 100% CPU only because they created garbage like crazy. For this, the Garbage Collection tab is interesting in conjunction with a time range. This view is like the CPU tab except that the CPU graph (green) is missing. All relevant information except for GCs/s is already visible in the CPU tab. Very handy to pinpoint excessive GC or CPU-bound issues. The Threads tab shows the thread names and their lifetimes. This is useful to see thread interactions or which thread is hottest in terms of CPU consumption. On the CPU tab the call tree exists in a merged and a thread-specific view. When you click on a method you get, below it, a list of all called methods. There you can sort for methods with a high own time, which are worth optimizing. In the Method List you can select which scope you want to see. Back Traces are the methods which called you. Callees is the flat list of methods called directly or indirectly by your method. This is not a call stack, but it is still very useful for seeing which methods were slow, so you can find the "root" cause quite quickly without the need to click through long call stacks. The last view, Merged Callees, is a call-stack view of the previous view. This helps a lot in understanding what called each method at run time. You would get the same view with a debugger for a single invocation, but here you get the full statistics (invocation count) as well. Since YourKit is also a memory profiler, you can directly see which objects you have on your managed heap and which objects hold most of your precious memory. In the Object Explorer view you can also examine the contents of your objects (strings or whatever) to get a better understanding of which objects were potentially allocating this stuff. YourKit is a very easy-to-use combined memory and performance profiler in one product. The unbeatable single-license price makes it very attractive to buy outright. Although the UI is Java, it is very responsive, and the memory consumption is considerably lower compared to dotTrace and ANTS profiler.
    What I really like is to start the YourKit UI and then start the processes I want to profile as usual. There is no need to alter your own application code to be able to inject a profiler into your newly started processes. For performance and memory profiling you can simply select the process you want to investigate from the list of started processes. That's the way I like to use profilers: just get out of the way and let the application run without any special preparations.   Next: Telerik JustTrace

    Read the article

  • Harnessing Business Events for Predictive Decision Making - part 1 / 3

    - by Sanjeev Sharma
    Businesses have long relied on data mining to elicit patterns and forecast future demand and supply trends. Improvements in computing hardware, specifically storage and compute capacity, have significantly enhanced the ability to store and analyze mountains of data in ever-shrinking time-frames. Nevertheless, the reality is that data growth is outpacing storage capacity by a factor of two, and computing power is still very much bounded by Moore's Law, doubling only every 18 months. Faced with this data explosion, businesses are exploring means to develop human-brain-like capabilities in their decision systems (including BI and Analytics) to make sense of the data storm, in other words business events, in real time and respond proactively rather than reactively. It is more like having a little bit of the right information just a little bit beforehand than having all of the right information after the fact. To appreciate this thought better, let's first understand the workings of the human brain. Neuroscience research has revealed that the human brain is predictive in nature and that talent is nothing more than exceptional predictive ability. The cerebral cortex, the part of the human brain responsible for cognition, thought, language, etc., comprises five layers. The lowest layer in the hierarchy is responsible for sensory perception, i.e. discrete, detail-oriented tasks, whereas each layer above it is increasingly focused on assembling higher-order conceptual models. Information flows both up and down the layered memory hierarchy. This allows the conceptual mental models to be refined over time through experience and repetition. Secondly, and more importantly, the top layers are able to prime the lower layers to anticipate certain events based on the existing mental models, thereby giving the brain a predictive ability. In a way the human brain develops a "memory of the future", some sort of anticipatory thinking which lets it predict based on the occurrence of events in real time. A higher order of predictive ability stems from being able to recognize the lack of certain events. For instance, it is one thing to recognize the beats in a music track and another to detect beats that were missed, which involves a higher-order predictive ability. Existing decision systems analyze historical data to identify patterns and use statistical forecasting techniques to drive planning. They are similar to the human brain in that they employ business rules very much like mental models to chunk and classify information. However, unlike the human brain, existing decision systems are unable to evolve these rules automatically (AI is still best suited for highly specific tasks) or to predict the future based on real-time business events. Make no mistake: existing decision systems remain vital to driving long-term and broader business planning. For instance, a telco will still rely on BI and Analytics software to plan promotions and optimize inventory, but will tap into business-event-enabled predictive insight to identify which specific customers are likely to churn and engage with them proactively. In the next post, I will describe the technology components that enable businesses to harness real-time events and drive predictive decision making.

    Read the article

  • Linking Libraries in iOS?

    - by Bob Dole
    This is probably a totally noob question, but I have missing links in my mind when thinking about linking libraries in iOS. I usually just add a new library that's been cross-compiled and set the build and linker paths without really knowing what I'm doing. I'm hoping someone can help me fill in some gaps. Let's take the OpenCV library, for instance. I have this totally working, btw, because of a really well-written tutorial (http://niw.at/articles/2009/03/14/using-opencv-on-iphone/en), but I just want to know exactly what is going on. What I think is happening is that when I build OpenCV for iOS, I'm creating object code that gets placed in the .a files. This object code is just the implementation files (.m) compiled. One reason you would want to do this is to make it hard to see the source code, and so that you don't have to compile that source code every time. The .h files won't be put in the library (.a). You include the .h files in your source files, and these header files communicate with the object code library (.a) in some way. You also have to include the header files for your library in the build path and the library itself in the linker path. So, is the way I view linking libraries correct? If not, can someone correct me on this?

    Read the article

  • Resolution stuck in 640x480 in grub, 11.04 and 12.04

    - by user89797
    I have three operating systems on my machine: Windows 7 x64, and Ubuntu 11.10 and 12.04, both x64 as well. All three were running at full resolution for my monitor, as was the Grub 1.99 boot screen. After booting into Windows, I rebooted my machine and found my Grub resolution was suddenly 640x480. Booting into both versions of Ubuntu, I find myself stuck at that resolution as well. I made no driver changes recently, and hadn't even booted into the 11.10 build in a month or more. I've gone through both proprietary Nvidia driver options for my card (GeForce 9800GT) as well as the open source drivers in 12.04, to no avail. I can't figure out what could have caused this change in both versions of Ubuntu and Grub simultaneously. Windows 7 is unaffected, so I think that safely rules out hardware failure. EDIT: OK, so I couldn't boot any graphical live disks. I tried Ubuntu 12.04 i386 and x64 as well as 12.10 beta x64, and all of them would flash the initial logo, go to a blank screen with a flashing cursor in the upper left, and then my display would die. I managed to boot 12.04 server and get into recovery. I reinstalled grub and went into recovery mode for my 12.04 build. If I boot in safe graphics mode I can get 1280x768, but as soon as I reboot it's broken again. I've tried reinstalling the Nvidia drivers, and that leaves me with a system stuck at a maximum of 640x480. None of these changes have had any impact on the 11.10 build, which is still stuck at 640x480. Given that I can push a somewhat higher resolution in 12.04, and full resolution in Windows 7, I'm pretty convinced it's not an issue of my monitor failing. It must be something to do with the graphics drivers. I can't figure out what the issue could be, though. I'm especially perplexed that I can't boot any live images.

    Read the article

  • Strange date relationships with #PowerPivot

    - by Marco Russo (SQLBI)
    A reader of my PowerPivot book highlighted a strange behavior of the relationship between a datetime column and a Calendar table. Long story short: it seems that PowerPivot automatically rounds the date to the "nearest day", but instead of simply removing the time (truncating the decimal part of the decimal number internally used to represent a datetime value), a rounding function seems to be used, moving the date to the next day if the time part contains a PM time. As you can imagine, this becomes particularly...(read more)
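    To make the difference concrete, here is a minimal numeric sketch (plain Java, not PowerPivot or DAX; the serial value is made up): a datetime is internally a decimal number of days, so rounding a value whose time part is in the afternoon lands on the next day, while truncating the fraction keeps the same day.

        // Minimal numeric illustration of the behavior described above.
        // The serial value is hypothetical: "day 40544 at 6 PM" (fraction 0.75).
        public class DateRoundingDemo {
            public static void main(String[] args) {
                double pmDateTime = 40544.75;
                System.out.println(Math.round(pmDateTime));   // 40545: rounding moves it to the NEXT day
                System.out.println((long) pmDateTime);        // 40544: truncation keeps the same day
            }
        }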

    Read the article

  • What impact would a young developer in a consultancy struggling on a project have?

    - by blade3
    I am a youngish developer (I've been working for 3 years). I took a job 3 months ago as an IT consultant (it's the first time I've been a consultant). In my first project, all went well until the later stages, where I ran into problems with Windows/WMI (lack of documentation, etc.). As important as it is not to leave surprises for the client, this did happen. I was supposed to go back to finish the project about a month and a half ago, after getting a date scheduled, but this did not happen either. The project (code) was slightly rushed too and went through QA (no idea what the results are). My probation review is in a few weeks' time, and I was wondering what sort of impact this would have. My manager hasn't mentioned this project to me, and apart from this everything has been OK; he even said, at the beginning, that if you are tight on time just ask for more, so he has been accommodating (at that time I was doing well; the problems came later).

    Read the article

  • The Benefits of Using Online SEO Tools

    Search Engine Optimization (SEO) can be a labour intensive process. Why not save some time and effort and use a few online tools to help accomplish the task in less time? In this article, I will look at some common tools, and show how they can improve your optimization and save you time.

    Read the article

  • Publishing a game -- any way to target both WP7 and Win8 Store?

    - by Rei Miyasaka
    I'm facing a dilemma that seems likely to soon become an important issue for a lot of developers. If I build a game in XNA, I won't be able to publish it on the Windows 8 Store, as it would be a classic application -- and classic applications can't be sold on the store. If I build a game in Metro DirectX, I would be able to sell it on the Store, but porting it to Windows Phone would involve porting it to Reach XNA, which in fact would likely involve more effort even than porting to OS X or Android -- both of which support C++. Of all the WinRT API that is supported on C++/JS/.NET, DirectX can only be programmed from C++. It's also unlikely that Microsoft will update Windows 7 or Vista to support the new DirectX features, even though that would make Metro DirectX the first new version of DirectX not to support the immediately preceding OS. If I build a game in pre-Win8 DirectX 9/10/11, I won't be able to sell it on the Windows Store or Windows Phone, but I could sell it on something like Steam. It would also involve the most manual plumbing. In fact, DirectWrite, despite being part of DirectX 11, doesn't talk to Direct3D. I'm getting really tired of all these restrictions -- artificial and otherwise -- and I'm coming to a point where I'm considering switching to a platform with a less fragmented API, like Android or Mac/iOS. As far as bringing a game to market goes, excluding the actual market share of any platforms that I might consider, what other factors would help me in making a decision? Just a few years ago this question was a lot easier to answer: if you were primarily concerned with Windows platforms, all you had to answer was whether you wanted DirectX, XNA, or something like SlimDX. If you made the wrong decision, no biggie -- all you really would have lost is Xbox and the fairly small Windows Phone market.

    Read the article

  • Clock drift even though NTPD running

    - by droffo
    I'm having a problem with the clock drifting on my PC. I'm running Ubuntu 10.10 on a somewhat crusty IBM e-server (1.5GB RAM, 2.4GHz CPU). ntpd is running (started at run level 2) and servers are defined:

        server 1.us.pool.ntp.org
        server 2.us.pool.ntp.org
        server 3.us.pool.ntp.org
        server time.nrc.ca
        server ntp1.cmc.ec.gc.ca
        server ntp2.cmc.ec.gc.ca
        server wuarchive.wustl.edu
        server clock.psu.edu

    Looking at the log file, it would seem that the ntp daemon is running, but the system clock never seems to be set. If I manually set the time from a Casio "atomic" watch, the date/time displayed by the Clock applet drifts out of sync over time. Looking at the log file (below), it would seem the ntp daemon started OK and is running. So I am totally flummoxed right now :-( Here's a copy of my ntp.log file.

    Read the article

  • Frame Independent Movement

    - by ShrimpCrackers
    I've read two other threads here on movement: Time based movement Vs Frame rate based movement?, and Fixed time step vs Variable time step, but I think I'm lacking a basic understanding of frame-independent movement because I don't understand what either of those threads is talking about. I'm following along with lazyfoo's SDL tutorials and came upon the frame-independent lesson: http://lazyfoo.net/SDL_tutorials/lesson32/index.php I'm not sure what the movement part of the code is trying to say, but I think it's this (please correct me if I'm wrong): in order to have frame-independent movement, we need to find out how far an object (e.g. a sprite) moves within a certain time frame, for example 1 second. If the dot moves at 200 pixels per second, then I need to calculate how much it moves within that second by multiplying 200 pps by 1/1000 of a second. Is that right? The lesson says: "velocity in pixels per second * time since last frame in seconds. So if the program runs at 200 frames per second: 200 pps * 1/200 seconds = 1 pixel" But... I thought we were multiplying 200 pps by 1/1000th of a second. What is this business with frames per second? I'd appreciate it if someone could give me a little more detailed explanation of how frame-independent movement works. Thank you.
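    For what it's worth, here is a minimal sketch of the idea the lesson describes (written in plain Java rather than the tutorial's C++/SDL, with the 200 pixels-per-second figure taken from the question): each frame, the position advances by the velocity multiplied by however much time actually elapsed since the previous frame, so the on-screen speed stays the same whether the loop runs at 50 or 500 frames per second.

        // Minimal delta-time movement sketch: position advances by speed * elapsed seconds,
        // so the result is independent of how long each loop iteration takes.
        public class DeltaTimeDemo {
            public static void main(String[] args) throws InterruptedException {
                final double speed = 200.0;          // pixels per second
                double x = 0.0;                      // sprite position
                long last = System.nanoTime();
                for (int frame = 0; frame < 10; frame++) {
                    long now = System.nanoTime();
                    double deltaSeconds = (now - last) / 1_000_000_000.0; // time since last frame
                    last = now;
                    x += speed * deltaSeconds;       // 200 pps * elapsed seconds
                    System.out.printf("frame %d: x = %.2f%n", frame, x);
                    Thread.sleep(5);                 // simulate a variable frame time
                }
            }
        }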

    Read the article

  • How do I maintain a really poorly written code base?

    - by onlineapplab.com
    Recently I got hired to work on an existing web application. Because of an NDA I'm not at liberty to disclose any details, but this application is running online in a sort of beta-testing stage before its official launch. We have a few hundred users right now, but this number is supposed to increase significantly after the official launch. The application is written in PHP (but that is irrelevant to my question) and is running on a dual-Xeon standalone server with severe performance problems. I have seen a lot of bad PHP code, but this really sets new standards, especially knowing how much time and money was invested in developing it. It is as badly coded as possible: PHP, HTML and SQL are mixed together and code is repeated wherever necessary (especially SQL queries). No functions are used, never mind any OOP. There are four versions of the app (desktop, iPhone, Android + other mobile); each version has pretty much the same functionality but was created by copying the whole code base, so now there are differences between the versions and it is really hard to maintain. The database is really badly designed, which is also causing severe performance problems. Also, to fix some errors in the PHP code, a lot of database triggers are used which update data on SELECT and on INSERT, so any testing is a nightmare. Basically, any sin of bad programming you can imagine is there; for example, it is not only possible to use SQL injection in literally every place, but you can log into the app with a login which doesn't exist and an empty password. The team which created this app is not working on it any more, and there is an outsourced team which suggested that there are some problems but was never willing to deal with the elephant in the room, partially because they've got a very comfortable contract and partially due to lack of skills (just my opinion). My job was supposed to be fixing some performance problems and extending the existing functionality, but the first thing I was asked to do was a review of the existing code base. I made my review and it was quite a shock for the management, but my conclusions were, after some time, finally confirmed by other programmers. Management made it clear that it is not possible to start rewriting this app from scratch (which in my opinion should be done). We have to maintain its operable state and at the same time fix performance errors and extend the functionality. My question is, as I don't want just to patch the existing code, how do I transform this into a properly written app while keeping the existing code working at the same time? My plan is: (1) unify the four existing versions into a common code base (fixing only the most obvious errors); (2) redesign the db and use triggers to populate it with data (so data will be maintained in two formats at the same time); (3) write all new functionality as a separate project; (4) step by step, transfer existing functionality into the new project; (5) after some time everything will be in the new project. Some explanation about #2: right now it is practically impossible to make any updates to the existing db; any change requires reviewing the whole code and making changes in many places. Is such a plan feasible at all? Another solution is to walk away and leave the headache to someone else.

    Read the article

  • Antenna Aligner Part 6: Little Robots

    - by Chris George
    A week ago I took temporary ownership of an HTC Desire S so that I could start testing my app under Android. Support for Android was not in my original plan, but when Nomad added support for it recently, I started thinking: why not! So with some trepidation, I clicked the Build for Android button on the Nomad toolbar... nothing. Hmm... that's not right, I was expecting something to build. After a bit of faffing around I finally realised that I hadn't read the text on the Android setup page properly (yes that's right, RTFM!), and I needed a two-part application identifier, separated by a dot. I did this (not sure what the two-part thing is all about; that's one for my list to investigate!). After making the change, the Android build worked and created the apk file. I uploaded this to the device and nervously ran it... it worked!!! Well, more or less! So, there was no splash screen, but this was no surprise because I only have the iOS icons and splash screen in my project at the moment. What was more concerning was that the compass update didn't seem to be working. I suspect this is a result of using an iOS-specific option in the PhoneGap compass watcher. Another thing to investigate. I've also just noticed that the CSS gradient background hasn't worked either... These issues aside, it was actually more successful than I was expecting, so happy days! Right, let's get Googling...   Next time: Preparing for submission to the App Store! :-)

    Read the article

  • Basic AppFabric Service Bus Programming Lifecycle

    - by kaleidoscope
    The tasks required to create an application that accesses the AppFabric Service Bus are as follows:

        - Create a service namespace. This service namespace contains the resources used by the AppFabric Service Bus to support the application.
        - Define the AppFabric Service Bus contract. A contract specifies the signature of the service, the data it exchanges, and other required inputs, behavior specifications, and object invariants.
        - Implement the contract. To implement a service contract, create a class that implements the interface and specify custom runtime behaviors.
        - Configure the service by specifying endpoint and other behavior information.
        - Build and run the service.
        - Build and run the client application.

    As with any iterative, service-oriented software development, it may not always be appropriate to follow the preceding steps sequentially, or even to start from step 1. For example, if you want to build a client for a pre-existing service, you start at step 5. Or, if you are building a host service that others will use, you can skip step 6. Source: http://msdn.microsoft.com/en-us/library/ee173580.aspx   Sarang, K

    Read the article

  • Upcoming EBS Webcasts for June, July, August 2012

    - by user793553
    See the following upcoming webcasts for June, July and August 2012. Flag Doc ID 740966.1 as a favourite to keep up to date with the latest advisor schedule. Additionally, see Doc ID 740964.1 for access to all archived advisor webcasts.

        Oracle E-Business Suite: none at this time.
        EBS Agile: none at this time.
        EBS Applications Technologies Group (ATG):
            EBS – OAM Tuning and Monitoring EMEA (July 10, 2012)
            EBS – OAM Tuning and Monitoring US (July 11, 2012)
            Workflow Analyzer Followup EMEA (July 24, 2012)
            Workflow Analyzer Followup US (July 25, 2012)
        EBS CRM & Industries: none at this time.
        EBS Financials:
            EBS Fixed Assets: Achieve Success Using Proactive Tools For Fixed Assets Support (July 10, 2012)
            Overview and Flow of Oracle Project Resource Management (July 17, 2012)
            Leveraging My Oracle Support To Increase Knowledge (July 30, 2012)
        EBS HCM (HRMS):
            Oracle Time and Labor (OTL) Rollback Functionality, Session 1 (July 25, 2012)
            Oracle Time and Labor (OTL) Rollback Functionality, Session 2 (July 25, 2012)
        EBS Manufacturing:
            Using Personalization in Oracle eAM (June 21, 2012)
            OM Guided Resolutions - Finding Known Resolutions Easily (July 17, 2012)
            Material Move Orders Flow (July 25, 2012)
            Diagnosing Signal 11 Issues In ASCP Planning (August 9, 2012)
            Interface Trip Stop - Best Practices and Debugging (August 21, 2012)
        EBS Procurement:
            Punchout in iProcurement (June 26, 2012)

    Read the article

  • Looking at EMEA and Telecommunications

    - by Brian Dayton
    With Summer holidays starting up we've been spending a lot of time speaking with our counterparts in EMEA. Often we talk about recent customer successes. One of my recent discoveries is this great video covering BT's move towards SOA and how this initiative not only accelerated order delivery time from 6 days to 6 minutes but created new revenue streams and reduced time to implementation.

    Read the article

  • Microsoft Azure Storage Queues Part 1: Getting Started

    Microsoft Azure Queues are a ready-to-use service that loosely connects components or applications through the cloud. This article is the first part in a five-part series about Microsoft Azure Cloud Services by Roman Schacherl.
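    For a flavour of the loose coupling the series covers, here is a hedged sketch using the (legacy) azure-storage Java SDK rather than whatever language the series itself uses; the connection-string environment variable and the queue name are placeholders, and the exact SDK version is an assumption to verify.

        import com.microsoft.azure.storage.CloudStorageAccount;
        import com.microsoft.azure.storage.queue.CloudQueue;
        import com.microsoft.azure.storage.queue.CloudQueueClient;
        import com.microsoft.azure.storage.queue.CloudQueueMessage;

        // One component enqueues a message; another dequeues it later.
        public class QueueSketch {
            public static void main(String[] args) throws Exception {
                CloudStorageAccount account =
                        CloudStorageAccount.parse(System.getenv("AZURE_STORAGE_CONNECTION_STRING"));
                CloudQueueClient client = account.createCloudQueueClient();
                CloudQueue queue = client.getQueueReference("orders");   // placeholder queue name
                queue.createIfNotExists();

                queue.addMessage(new CloudQueueMessage("process-order-42"));   // producer side

                CloudQueueMessage msg = queue.retrieveMessage();               // consumer side
                if (msg != null) {
                    System.out.println(msg.getMessageContentAsString());
                    queue.deleteMessage(msg);
                }
            }
        }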

    Read the article

  • Comparing Apples and Pairs

    - by Tony Davis
    A recent study, High Costs and Negative Value of Pair Programming, by Capers Jones, pulls no punches in its assessment of the costs-to-benefits ratio of pair programming (two programmers working together at a single computer, rather than separately). He implies that pair programming is a method rushed into production on a wave of enthusiasm for Agile or Extreme Programming, without any real regard for its effectiveness. Despite admitting that his data represented a far from complete study of the economics of pair programming, his conclusions were stark: it was 2.5 times more expensive, resulted in a 15% drop in productivity, and offered no significant quality benefits. The author provides a more scientific analysis than Jon Evans' Pair Programming Considered Harmful, but the theme is the same. In terms of up-front coding costs, pair programming is surely more expensive. The claim of productivity loss is dubious and contested by other studies. The third claim, though, did surprise me. The author's data suggests that if both the pair and the individual programmers employ static code analysis and testing, then there is no measurable difference in the resulting code quality, in terms of defects per function point. In other words, pair programming incurs a massive extra cost for no tangible return on investment. There were, inevitably, many criticisms of his data and his conclusions, a few of which are persuasive. Firstly, that the driver/observer model of pair programming, on which the study bases its findings, is far from the most effective. For example, many find Ping-Pong pairing, based on use of test-driven development, far more productive. Secondly, that it doesn't distinguish between "expert" and "novice" pair programmers; that is, independently of other programming skills, how skilled was an individual at pair programming. Thirdly, that his measure of quality is too narrow. This point rings true, certainly at Red Gate, where developers don't pair program all the time, but use the method in short bursts, while tackling a tricky problem and needing a fresh perspective on the best approach, or more in-depth knowledge in a particular domain. All of them argue that pair programming, and collective code ownership, offers significant rewards, if not in terms of immediate "bug reduction", then in removing the likelihood of single points of failure, and improving the overall quality and longer-term adaptability/maintainability of the design. There is also a massive learning benefit for both participants. One developer told me how he once worked in the same team over consecutive summers, the first time with no pair programming and the second time pair-programming two-thirds of the time, and described the increased rate of learning the second time as "phenomenal". There are a great many theories on how we should develop software (Scrum, XP, Lean, etc.), but woefully little scientific research into their effectiveness. For a group that spends so much time crunching other people's data, I wonder if developers spend enough time crunching data about themselves. Capers Jones' data may be incomplete, but it should cause a pause for thought, especially for any large IT departments, supporting commerce and industry, who are considering pair programming. It certainly shouldn't discourage teams from exploring new ways of developing software, as long as they also think about how to gather hard data to gauge their effectiveness.

    Read the article

  • What would be the ideal SAS package to integrate with a Enterprise scale web application developed using Java

    - by Rajesh
    SAS-Java connectivity and integration for an enterprise-class web application - I am trying to narrow down the available approaches to connect to SAS from Java. I am assuming the following; please correct my assumptions if you think they are not correct: (1) SAS/ACCESS (ODBC, JDBC) / SAS Share Net - query SAS data sets using a JDBC model; (2) SAS/IntrNet - for connectivity with Java, to build small-scale applications; (3) SAS Integration Technology - for connectivity with Java, to build distributed Java applications. Now the scenario is that I need to build an enterprise-class web application using Java/J2EE and ensure this application talks to SAS for querying SAS data sets, executing SAS programs, and generating reports. I am looking for a cost-effective and robust solution which will work in a multi-user environment.
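    As a rough illustration of the first option, here is a hedged JDBC sketch: the SAS/SHARE driver class name, URL format, host, port and credentials shown are assumptions to verify against your SAS installation and its documentation, and sashelp.class is just a standard SAS sample data set.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        // Hedged sketch of querying a SAS data set over JDBC (driver class and URL are assumptions).
        public class SasJdbcSketch {
            public static void main(String[] args) throws Exception {
                Class.forName("com.sas.net.sharenet.ShareNetDriver");   // assumed SAS/SHARE driver class
                String url = "jdbc:sharenet://sas-server:8551";         // assumed URL format and port
                try (Connection con = DriverManager.getConnection(url, "user", "password");
                     Statement stmt = con.createStatement();
                     ResultSet rs = stmt.executeQuery("SELECT name, age FROM sashelp.class")) {
                    while (rs.next()) {
                        System.out.println(rs.getString("name") + " " + rs.getInt("age"));
                    }
                }
            }
        }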

    Read the article

  • Testing the load factor in my lab

    - by Ami Winter
    I am a system admin in a lab. I have ~90 computers in the lab and I want to check the load factor on them, meaning I want to check how many people are working on the computers hourly, to see whether I need to buy more computers or not. I am looking for a way to build a script that checks whether a computer is logged on or not (0 for logged off, 1 for logged on). Once I have this data, I know how to build a script to produce the graphs. All the computers are joined to a domain and most of them run Windows XP (a few run Windows 7). I'll be happy to get some help. Amihay
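    One possible starting point, offered as a hedged sketch rather than a tested admin tool: a small Java program that shells out to the Windows "query user" command for a given host and prints 1 if any interactive session is listed. It assumes that command is available on the machines involved and that the account running it has rights to query remote sessions; the host name is made up.

        import java.io.BufferedReader;
        import java.io.InputStreamReader;

        // Prints "<host> 1" if any user session is listed on the remote machine, "<host> 0" otherwise.
        public class SessionProbe {
            public static void main(String[] args) throws Exception {
                String host = args.length > 0 ? args[0] : "LAB-PC-01";   // hypothetical host name
                Process p = new ProcessBuilder("query", "user", "/server:" + host).start();
                int sessions = 0;
                try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
                    String line;
                    while ((line = r.readLine()) != null) {
                        // Skip the header line; every remaining non-empty line is one session.
                        if (!line.isBlank() && !line.trim().startsWith("USERNAME")) {
                            sessions++;
                        }
                    }
                }
                p.waitFor();
                System.out.println(host + " " + (sessions > 0 ? 1 : 0));
            }
        }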

    Read the article

  • Developing Schema Compare for Oracle (Part 6): 9i Query Performance

    - by Simon Cooper
    All throughout the EAP and beta versions of Schema Compare for Oracle, our main request was support for Oracle 9i. After releasing version 1.0 with support for 10g and 11g, our next step was to get version 1.1 of SCfO out with support for 9i. However, there were some significant problems that we had to overcome first. This post will concentrate on query execution time. When we first tested SCfO on a 9i server, after accounting for various changes to the data dictionary, we found that database registration was taking a long time. And I mean a looooooong time. The same database that on 10g or 11g would take a couple of minutes to register would be taking upwards of 30 mins on 9i. Obviously, this is not ideal, so a poke around the query execution plans was required. As an example, let's take the table population query - the one that reads ALL_TABLES and joins it with a few other dictionary views to get us back our list of tables. On 10g, this query takes 5.6 seconds. On 9i, it takes 89.47 seconds. The difference in execution plan is even more dramatic - here's the (edited) execution plan on 10g:

        ----------------------------------------------------------------------
        | Id  | Operation             | Name                   | Bytes | Cost |
        ----------------------------------------------------------------------
        |   0 | SELECT STATEMENT      |                        |  108K |  939 |
        |   1 | SORT ORDER BY         |                        |  108K |  939 |
        |   2 | NESTED LOOPS OUTER    |                        |  108K |  938 |
        |*  3 | HASH JOIN RIGHT OUTER |                        |  103K |  762 |
        |   4 | VIEW                  | ALL_EXTERNAL_LOCATIONS |  2058 |    3 |
        |* 20 | HASH JOIN RIGHT OUTER |                        | 73472 |  759 |
        |  21 | VIEW                  | ALL_EXTERNAL_TABLES    |  2097 |    3 |
        |* 34 | HASH JOIN RIGHT OUTER |                        | 39920 |  755 |
        |  35 | VIEW                  | ALL_MVIEWS             |    51 |    7 |
        |  58 | NESTED LOOPS OUTER    |                        | 39104 |  748 |
        |  59 | VIEW                  | ALL_TABLES             |  6704 |  668 |
        |  89 | VIEW PUSHED PREDICATE | ALL_TAB_COMMENTS       |  2025 |    5 |
        | 106 | VIEW                  | ALL_PART_TABLES        |   277 |   11 |
        ----------------------------------------------------------------------

    And the same query on 9i:

        ----------------------------------------------------------------------
        | Id  | Operation             | Name                   | Bytes | Cost |
        ----------------------------------------------------------------------
        |   0 | SELECT STATEMENT      |                        |   16P |  55G |
        |   1 | SORT ORDER BY         |                        |   16P |  55G |
        |   2 | NESTED LOOPS OUTER    |                        |   16P | 862M |
        |   3 | NESTED LOOPS OUTER    |                        | 5251G | 992K |
        |   4 | NESTED LOOPS OUTER    |                        | 4243M | 2578 |
        |   5 | NESTED LOOPS OUTER    |                        | 2669K | 1440 |
        |*  6 | HASH JOIN OUTER       |                        |  398K |  302 |
        |   7 | VIEW                  | ALL_TABLES             |  342K |  276 |
        |  29 | VIEW                  | ALL_MVIEWS             |    51 |   20 |
        |* 50 | VIEW PUSHED PREDICATE | ALL_TAB_COMMENTS       |  2043 |      |
        |* 66 | VIEW PUSHED PREDICATE | ALL_EXTERNAL_TABLES    | 1777K |      |
        |* 80 | VIEW PUSHED PREDICATE | ALL_EXTERNAL_LOCATIONS | 1744K |      |
        |* 96 | VIEW                  | ALL_PART_TABLES        |  852K |      |
        ----------------------------------------------------------------------

    Have a look at the cost column. 10g's overall query cost is 939, and 9i is 55,000,000,000 (or more precisely, 55,496,472,769). It's also having to process far more data. What on earth could be causing this huge difference in query cost? After trawling through the '10g New Features' documentation, we found item 1.9.2.21. Before 10g, Oracle advised that you do not collect statistics on data dictionary objects. From 10g, it advised that you do collect statistics on the data dictionary; for our queries, Oracle therefore knows what sort of data is in the dictionary tables, and so can generate an efficient execution plan.
    On 9i, no statistics are present on the system tables, so Oracle has to use the Rule Based Optimizer, which turns most LEFT JOINs into nested loops. If we force 9i to use hash joins, like 10g, we get a much better plan:

        ----------------------------------------------------------------------
        | Id  | Operation             | Name                   | Bytes | Cost |
        ----------------------------------------------------------------------
        |   0 | SELECT STATEMENT      |                        | 7587K | 3704 |
        |   1 | SORT ORDER BY         |                        | 7587K | 3704 |
        |*  2 | HASH JOIN OUTER       |                        | 7587K |  822 |
        |*  3 | HASH JOIN OUTER       |                        | 5262K |  616 |
        |*  4 | HASH JOIN OUTER       |                        | 2980K |  465 |
        |*  5 | HASH JOIN OUTER       |                        |  710K |  432 |
        |*  6 | HASH JOIN OUTER       |                        |  398K |  302 |
        |   7 | VIEW                  | ALL_TABLES             |  342K |  276 |
        |  29 | VIEW                  | ALL_MVIEWS             |    51 |   20 |
        |  50 | VIEW                  | ALL_PART_TABLES        |  852K |  104 |
        |  78 | VIEW                  | ALL_TAB_COMMENTS       |  2043 |   14 |
        |  93 | VIEW                  | ALL_EXTERNAL_LOCATIONS | 1744K |   31 |
        | 106 | VIEW                  | ALL_EXTERNAL_TABLES    | 1777K |   28 |
        ----------------------------------------------------------------------

    That's much more like it. This drops the execution time down to 24 seconds. Not as good as 10g, but still an improvement. There are still several problems with this, however. 10g introduced a new join method - a right outer hash join (used in the first execution plan). The 9i query optimizer doesn't have this option available, so forcing a hash join means it has to hash the ALL_TABLES table, and furthermore re-hash it for every hash join in the execution plan; this could be thousands and thousands of rows. And although forcing hash joins somewhat alleviates this problem on our test systems, there's no guarantee that this will improve the execution time on customers' systems; it may even increase the time it takes (say, if all their tables are partitioned, or they've got a lot of materialized views). Ideally, we would want a solution that provides a speedup whatever the input. To try and get some ideas, we asked some Oracle performance specialists to see if they had any ideas or tips. Their recommendation was to add a hidden hook into the product that allowed users to specify their own query hints, or even rewrite the queries entirely. However, we would prefer not to take that approach; as well as a lot of new infrastructure & a rewrite of the population code, it would have meant that any users of 9i would have to spend some time optimizing it to get it working on their system before they could use the product. Another approach was needed. All our population queries have a very specific pattern - a base table provides most of the information we need (ALL_TABLES for tables, or ALL_TAB_COLS for columns) and we do a left join to extra subsidiary tables that fill in gaps (for instance, ALL_PART_TABLES for partition information). All the left joins use the same set of columns to join on (typically the object owner & name), so we could re-use the hash information for each join, rather than re-hashing the same columns for every join. To allow us to do this, along with various other performance improvements that could be done for the specific query pattern we were using, we read all the tables individually and do a hash join on the client. Fortunately, this 'pure' algorithmic problem is the kind that can be very well optimized for expected real-world situations; as well as storing row data we're not using in the hash key on disk, we use very specific memory-efficient data structures to store all the information we need.
    This allows us to achieve a database population time that is as fast as on 10g, and even (in some situations) slightly faster, with a memory overhead of roughly 150 bytes per row of data in the result set (for schemas with 10,000 tables that means an extra 1.4MB of memory being used during population). Next: fun with the 9i dictionary views.
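    To make the client-side join idea concrete, here is a minimal sketch (plain Java, purely illustrative; the dictionary view names are real but the rows, columns and keys shown are made up): the base rows are hashed once on owner and name, and each subsidiary result set is then streamed through that single map, so nothing is re-hashed per join, and base rows with no match simply keep only their base attributes, which is exactly left-outer-join semantics.

        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        // Illustrative client-side left-outer hash join over dictionary query results.
        public class ClientSideHashJoin {
            public static void main(String[] args) {
                // 1. Hash the base rows (ALL_TABLES) once, keyed on owner + "." + table name.
                Map<String, Map<String, String>> tables = new HashMap<>();
                tables.put("HR.EMPLOYEES", new HashMap<>(Map.of("TABLESPACE_NAME", "USERS")));
                tables.put("HR.SALES",     new HashMap<>(Map.of("TABLESPACE_NAME", "USERS")));

                // 2. Stream each subsidiary result set (here ALL_PART_TABLES) through the same map,
                //    instead of re-hashing the base table for every join.
                List<String[]> partTables = List.of(new String[] { "HR", "SALES", "RANGE" });
                for (String[] row : partTables) {
                    Map<String, String> base = tables.get(row[0] + "." + row[1]);
                    if (base != null) {
                        base.put("PARTITIONING_TYPE", row[2]);   // enrich matching base rows only
                    }
                }

                // 3. The merged rows are now ready for schema comparison.
                tables.forEach((name, attrs) -> System.out.println(name + " -> " + attrs));
            }
        }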

    Read the article

  • how to properly Install chromium from zip and make it the default browser

    - by ClarifyLinux
    Since the Chromium PPA is no longer maintained, those of us who prefer Chromium over Chrome have two options: build and install from source, or download either 'beta' or daily builds (in a zip file). Unfortunately for me, option 1 is overly complicated. I know how to compile most other applications on Ubuntu, but I've never been able to get Chromium to build correctly. I am currently using option 2. In Chromium I have the Chromium Updater installed (http://goo.gl/ffAMy). This gives me quick access to the most recent 64-bit versions. Once downloaded, I install to /home/myuser/opt/chrome-linux. From this directory I can run the chrome binary. It works perfectly except for the fact that I cannot get it to act as my default browser. I've tried, as root, installing the binary in /opt/chrome-linux/ with a symbolic link to the 'chrome' binary in /usr/bin. Unfortunately, this doesn't work as a non-root user. So my question is: how do I properly install a downloaded Chromium zip build so that it's listed as an option for the default browser?

    Read the article

  • Automatically keep your local git repos clean

    - by kerry
    Most developers using git are probably aware of the command 'git gc', which has to be run from time to time when you notice your git commands are running a little slow. This command cleans up your git repo and makes sure everything is nice and tidy. If you have not run this command lately, you will notice a huge performance increase in your git commands after running it. It's a bit annoying to have to run this command only once you notice that your git performance is suffering, and the command also takes a while if you have not run it recently. With this in mind, I decided to create a way to run it automatically from time to time, by overloading cd, similarly to how rvm does. All you have to do is paste the method into your .profile file and it will run the command every time you enter a directory containing a git repo. You'll notice a little pause when entering the directory; it's not insufferable, but if you would prefer, you can add an & to the end of the command to have it run in the background. I chose the pause over the pid output of the background command. Here it is in all its glory. View the code on Gist.

    Read the article
