Search Results

Search found 20573 results on 823 pages for 'pretty big scheme'.


  • How can I reduce the amount of time it takes to fully regression test an application ready for release?

    - by DrLazer
    An app I work on is being developed with a modified version of Scrum. If you are not familiar with Scrum, it's an alternative to the more traditional waterfall model, where a series of features are worked on for a set amount of time known as a sprint. The app is written in C# and makes use of WPF, and we use Visual C# 2010 Express Edition as an IDE. If we work on a sprint and add a few new features, but do not plan to release until a further sprint is complete, then regression testing is not an issue as such; we just test the new features and give the app a good once-over. However, if a release is planned that our customers can download, a full regression test is factored in. In the past this wasn't a big deal: it took 3 or 4 days, and the devs simply fixed up any bugs found in the regression phase. But now, as the app gets larger and larger and incorporates more and more features, the regression is spanning out for weeks. I am interested in any methods that people know of or use that can decrease this time. At the moment the only ideas I have are to either start writing unit tests, which I have never fully tried out in a commercial environment, or to research the possibility of any UI automation APIs or tools that would allow me to write a program to perform a series of batch tests. I know literally nothing about the possibilities of UI automation, so any information would be valuable. I don't know that much about unit testing either; how complicated can the tests be? Is it possible to get unit tests to use the UI? Are there any other methods I should consider? Thanks for reading, and for any advice in advance. Edit: Thanks for the information. Does anybody know of any alternatives to what has been mentioned so far (NUnit, RhinoMocks and CodedUI)?
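    For context, a unit test in NUnit (one of the frameworks mentioned above) is typically small and targets non-UI logic; driving the UI itself is usually left to automation tools like CodedUI. A minimal sketch, where OrderCalculator is a made-up class standing in for whatever logic the app under test contains:

        using NUnit.Framework;

        [TestFixture]
        public class OrderCalculatorTests
        {
            [Test]
            public void Total_AddsLinePrices()
            {
                var calc = new OrderCalculator();   // hypothetical class under test
                calc.AddLine(10m, 2);               // price, quantity
                calc.AddLine(5m, 1);
                Assert.AreEqual(25m, calc.Total());
            }
        }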

    Read the article

  • Google ranking, page crawl

    - by Nawaf Mubarak
    Please don't mind me asking this newbie question about Google ranking. I know that in order to get ranked a page has to be crawled by Google's bots, and I have an example page from which I hope to get a better understanding of how the system works. I made a page on my website last month. It got indexed pretty quickly, and I then found it on page 15 of Google's results for my keyword as a start; the next day it made it to page 13, then after a week it was jumping back and forth between pages 17/18 and up to 20. Now a month has passed and it isn't listed at any position for that keyword; sometimes I will find it on page 30, but later I won't find it anywhere, and it keeps happening this way these days. Even when it isn't listed on any page for my keyword, if I do a search for "site:thepageadress" it will be listed, which means I'm not penalized and my page is there for Google to see, but it isn't in the search results for my keyword. But when I write "site:thepage_adress", hit the "search tools" option and click on "Past day" or "Past week", it isn't listed; it is only listed when I click on "Past month", which I think means that Google indexed the page, looked at it once when I published it, and never looked at it again. Is this a fair statement? So two questions come to mind here. 1. Should Google keep looking at a page even if I haven't changed any info on it, and is this an indication for me that my page is doing fine? Or is it normal that Google sees it once and that's it? 2. Why does my page keep jumping back and forth in the ranking results for my keyword, and why is it sometimes not even listed? What does that mean, and how do I fix it? Sorry for the long message; I hope to God that somebody can help me with this. Thank you!

    Read the article

  • Do you know about the Visual Studio 2010 Architecture Guidance?

    - by Martin Hinshelwood
    If you have not seen the Visual Studio 2010 Architectural Guidance from the Visual Studio ALM Rangers then you are missing out. I have been spelunking the TFS Guidance recently and I discovered the Visual Studio 2010 Architectural Guidance. This is not an in-depth look at the capabilities of the architectural tools that shipped with Visual Studio 2010 Ultimate, but is instead a set of samples that lead you by example through real-world scenarios. There is practical guidance, and there are checklists to help guide lead developers and architects through the common challenges in understanding both existing and new applications. The content concentrates on practical guidance for Visual Studio 2010 Ultimate and is focused on the modelling tools. There is integration into Visual Studio, so all you need to do to access it is select "Architecture | Visual Studio ALM Rangers – Architecture Guidance". Figure: Accessing the Architecture guidance is easy. This brings up an inline version of the documentation and a kind of explorer that lets you pick the tasks you want to perform and takes you straight to that part of the Guidance. Figure: Access the Guidance from right within Visual Studio 2010. This is a big help when you just want to figure out how to do something and can't be bothered searching for and through the content in the provided Word documents. The Question and Answer section is full of useful content, and there are six Hands-On-Labs to sink your teeth into:

    - Creating extensions with the feature extension
    - Explore an Existing System Scenario
    - Extensibility Layer Diagrams
    - New Solution Scenario
    - Reusable Architecture Scenario
    - Validating an Architecture Scenario

    I'm sold! Where can I get my hands on this fantastic content? Download the Visual Studio 2010 Architecture Tooling Guidance, and if you like it don't forget to add a review to make the team that put it together in their spare time feel all the more loved.

    Read the article

  • Finally! Entity Framework working in fully disconnected N-tier web app

    - by oazabir
    Entity Framework was supposed to solve the problems of Linq to SQL, which requires endless hacks to make it work in an n-tier world. Not only did Entity Framework solve none of the L2S problems, it made them even harder to hack around in n-tier scenarios. It's somewhere halfway between a fully disconnected ORM and a fully connected ORM like Linq to SQL. Some useful features of Linq to SQL are gone – like automatic deferred loading. If you try to do a simple select with a join, insert, update, or delete in a disconnected architecture, you will realize that you not only need to make fundamental changes from the top layer to the very bottom layer, but also endless hacks in basic CRUD operations. I will show you in this article how I have added custom CRUD functions on top of EF's ObjectContext to make it finally work well in a fully disconnected N-tier web application (my open source Web 2.0 AJAX portal – Dropthings) and how I have produced a 100% unit-testable, fully n-tier-compliant data access layer following the repository pattern. http://www.codeproject.com/KB/linq/ef.aspx In .NET 4.0, most of the problems are solved, but not all, so you should read this article even if you are coding in .NET 4.0. Moreover, there's enough insight here to help you troubleshoot EF-related problems. You might think, "Why bother using EF when Linq to SQL is doing well enough for me?" Linq to SQL is not going to get any more innovation from Microsoft; Entity Framework is the future of the persistence layer in the .NET framework, and all the innovation is happening in the EF world only, which is frustrating. There's a big jump in EF 4.0, so you should plan to migrate your L2S projects to EF soon.
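    To give a flavour of the kind of hack the article describes, a disconnected update against EF 4's ObjectContext usually boils down to attaching the detached entity and forcing its state to Modified. This is a minimal sketch of the general pattern, not the article's actual code; the entity set name has to match your model:

        using System.Data;
        using System.Data.Objects;

        public static class DisconnectedCrud
        {
            // 'entity' was loaded by an earlier, now-disposed context and has
            // travelled across the tier boundary in a detached state.
            public static void SaveDetached(ObjectContext context, string entitySetName, object entity)
            {
                context.AttachTo(entitySetName, entity);  // attached as Unchanged
                context.ObjectStateManager.ChangeObjectState(entity, EntityState.Modified);
                context.SaveChanges();                    // issues the UPDATE
            }
        }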

    Read the article

  • Self Service Reporting With PowerPivot

    - by blakmk
    There are so many cool new features in SQL Server 2008 R2 that it was difficult for me to pick a topic for T-SQL Tuesday. But the one that I am now a secret fan of, I once resented for its creation. Let me explain: for years I have encountered reporting systems cobbled together in tools like Access and Excel, built by "database hobbyists" who had no formal training in database design or best practices. They would take their monstrosities as far as they could go before ultimately it stopped working or the person that wrote it left the company. At that point it would become the resident DBA's problem to support as a live application. So when I first heard of PowerPivot, a sense of déjà vu overtook me and I felt like the guy in the Austin Powers movie, knowing the inevitable is coming but somehow unsure how to get out of the way. But when I eventually saw it in action, I quickly realised that it is a very powerful tool. It has a much smaller "time to market" than traditional BI architectures, and combined with the new features of Excel, some pretty impressive dashboards can be produced. Of course PowerPivot is not a magic bullet: along with potential scalability issues there are the usual problems, such as master data management and data quality, that cannot be overcome easily with PowerPivot. As a tool though, it has potential. Traditional BI is expensive, both in terms of time and the amount of resources it takes to deliver the system. The time lag between an analyst or a commercial accountant requesting reports and the report being delivered can make a huge commercial difference. I have observed companies where empowered end users become extremely productive when allowed to plough into various disparate datasets. It may not be the correct way or the most sustainable, but it's cheap and quick. In these times when budgets are being slashed and we are forced to deliver more with less, why not empower the end user with a tool that is designed for exactly this task.... @blakmk

    Read the article

  • Docker vs ESXi for Startup Projects - Deploying Code for Dev Testing

    - by JasonG
    Why hello there little programmer dude! I have a question for you and all of your experience and knowledge. I have an ESXi whitebox that I built, an 8-core dude that sits in the corner. I made a mistake recently: I took the key that had ESXi on it, formatted it, and used it for something else. No big deal, because the last project I worked on had stalled out. I'm about to pick up another project, and now I need to spin up a whole bunch of stuff for CI, QA + DB, a ticket tracker, wikis, etc. I've been hearing a lot about Docker recently, and as this is just a consumer-grade machine, I'm wondering if it may make more sense for me to use Docker on CoreOS and put everything there: Bamboo or Hudson, JIRA, Confluence, Postgres for the tools to use, then a QA env. I can't really seem to find any documents that directly compare traditional VM infrastructure vs Docker solutions, and I'm wondering if it is fair to compare them. Is there any reason why CoreOS with containers would be a strictly worse solution? Or do you have any insight into why I may want to stick with ESXi? I've looked on multiple occasions and can't find a good reason not to switch. I'm not going to run a production env on the server, so I don't need HA for security or OS updates, for example, where ESXi would allow me to restart one VM at a time; I can just shut the thing down and bring it back up if I need a reboot, no problem. So what's up with this container stuff? Is it a fair replacement for ESXi? I'm guessing the Atlassian products would run much better and my RAM would go a lot farther using Docker. Probably the CPU would run much cooler too, and my expensive HDD space would be better utilized.

    Read the article

  • Applications: The mathematics of movement, Part 1

    - by TechTwaddle
    Before you continue reading this post, a suggestion: if you haven't read "Programming Windows Phone 7 Series" by Charles Petzold, go read it. Now. If you find 150+ pages a little too long, at least go through Chapter 5, Principles of Movement, especially the section "A Brief Review of Vectors". This post is largely inspired by that chapter. At this point I assume you know what vectors are, how they are represented using the pair (x, y), what a unit vector is, and, given a vector, how you would normalize it to get a unit vector. Our task in this post is simple: a marble is drawn at a point on the screen, the user clicks at a random point on the device, say (destX, destY), and our program makes the marble move towards that point and stop when it is reached. The tricky part of this task is the word "towards"; it adds a direction to our problem. Making a marble bounce around the screen is simple: all you have to do is keep incrementing the X and Y co-ordinates by a certain amount and handle the boundary conditions. Here, however, we need to find out exactly how to increment the X and Y values so that the marble appears to move towards the point where the user clicked. And this is where vectors can be so helpful. The code I'll show you here is not ideal; we'll be working with C# on Windows Mobile 6.x, so there is no built-in vector class that I can use, though I could have written one and done all the math inside the class. I think that is trivial to the actual problem we are trying to solve and can be done pretty easily once you know what's going on behind the scenes. In other words, this is an excuse for me being lazy. The first approach uses the function Atan2() to solve the "towards" part of the problem. Atan2() takes a point (x, y) as input, as Atan2(y, x) (note that y goes first), and returns an angle in radians. What angle, you ask? Imagine a line from the origin (0, 0) to the point (x, y). The angle which Atan2 returns is the angle the positive X-axis makes with that line, measured clockwise. The figure below makes it clear; wiki has good details about Atan2(), give it a read. The pair (x, y) also denotes a vector: a vector whose magnitude is the length of that line, which is Sqrt(x*x + y*y), and whose direction is θ, as measured from the positive X axis clockwise. If you've read that chapter from Charles Petzold's book, this much should be clear. Now the Sine and Cosine of the angle θ are special. Cosine(θ) divides x by the vector's length (adjacent by hypotenuse), thus giving us a unit vector along the X direction. And Sine(θ) divides y by the vector's length (opposite by hypotenuse), thus giving us a unit vector along the Y direction. Therefore the vector represented by the pair (cos(θ), sin(θ)) is the unit vector (or normalization) of the vector (x, y). This unit vector has a length of 1 (remember sin²(θ) + cos²(θ) = 1?) and a direction which is the same as vector (x, y). Now if I multiply this unit vector by some amount, then I will always get a point which is a certain distance away from the origin, but, more importantly, the point will always be on that line. For example, if I multiply the unit vector by the length of the line, I get the point (x, y). Thus, all we have to do to move the marble towards our destination point is to multiply the unit vector by a certain amount each time and draw the marble, and the marble will magically move towards the click point. Now time for some code.
    The application uses a timer-based frame-draw method to draw the marble on the screen. The timer is disabled initially, and whenever the user clicks on the screen, the timer is enabled. The callback function for the timer follows the standard Update and Draw cycle.

        private double totLenToTravelSqrd = 0;
        private double startPosX = 0, startPosY = 0;
        private double destX = 0, destY = 0;

        private void Form1_MouseUp(object sender, MouseEventArgs e)
        {
            destX = e.X;
            destY = e.Y;
            double x = marble1.x - destX;
            double y = marble1.y - destY;
            //calculate the total length to be travelled
            totLenToTravelSqrd = x * x + y * y;
            //store the start position of the marble
            startPosX = marble1.x;
            startPosY = marble1.y;
            timer1.Enabled = true;
        }

        private void timer1_Tick(object sender, EventArgs e)
        {
            UpdatePosition();
            DrawMarble();
        }

    The Form1_MouseUp() method is called whenever the user touches and releases the screen. In this function we save the click point in destX and destY; this is the destination point for the marble, and we also enable the timer. We store a few more values which we will use in the UpdatePosition() method to detect when the marble has reached the destination and stop the timer: the start position of the marble and the square of the total length to be travelled. I'll leave out the term 'sqrd' when speaking of lengths from now on. The time-out interval of the timer is set to 40ms, giving us a frame rate of about ~25fps. In the timer callback, we update the marble position and draw the marble. We know what DrawMarble() does, so here we'll only look at how UpdatePosition() is implemented:

        private void UpdatePosition()
        {
            //the vector (x, y)
            double x = destX - marble1.x;
            double y = destY - marble1.y;
            double incrX = 0, incrY = 0;
            double distanceSqrd = 0;
            double speed = 6;

            //distance between destination and current position, before updating marble position
            distanceSqrd = x * x + y * y;
            double angle = Math.Atan2(y, x);
            //Cos and Sin give us the unit vector, 6 is the value we use to magnify the unit vector along the same direction
            incrX = speed * Math.Cos(angle);
            incrY = speed * Math.Sin(angle);
            marble1.x += incrX;
            marble1.y += incrY;

            //check for bounds
            if ((int)marble1.x < MinX + marbleWidth / 2)
            {
                marble1.x = MinX + marbleWidth / 2;
            }
            else if ((int)marble1.x > (MaxX - marbleWidth / 2))
            {
                marble1.x = MaxX - marbleWidth / 2;
            }

            if ((int)marble1.y < MinY + marbleHeight / 2)
            {
                marble1.y = MinY + marbleHeight / 2;
            }
            else if ((int)marble1.y > (MaxY - marbleHeight / 2))
            {
                marble1.y = MaxY - marbleHeight / 2;
            }

            //distance between destination and current point, after updating marble position
            x = destX - marble1.x;
            y = destY - marble1.y;
            double newDistanceSqrd = x * x + y * y;

            //length from start point to current marble position
            x = startPosX - (marble1.x);
            y = startPosY - (marble1.y);
            double lenTraveledSqrd = x * x + y * y;

            //check for end conditions
            if ((int)lenTraveledSqrd >= (int)totLenToTravelSqrd)
            {
                System.Console.WriteLine("Stopping because destination reached");
                timer1.Enabled = false;
            }
            else if (Math.Abs((int)distanceSqrd - (int)newDistanceSqrd) < 4)
            {
                System.Console.WriteLine("Stopping because no change in Old and New position");
                timer1.Enabled = false;
            }
        }

    Ok, so in this function, first we subtract the current marble
    position from the destination point to give us a vector. The first three lines of the function construct this vector (x, y). The vector (x, y) has the same length as the line from (marble1.x, marble1.y) to (destX, destY) and points from (marble1.x, marble1.y) to (destX, destY). Note that marble1.x and marble1.y denote the center point of the marble. Then we use Atan2() to get the angle which this vector makes with the positive X axis, and use Cosine() and Sine() of that angle to get the unit vector along that same direction. We multiply this unit vector by 6 to get the values by which the position of the marble should be incremented. This variable, speed, can be experimented with and determines how fast the marble moves towards the destination. After this, we check for bounds to make sure that the marble stays within the screen limits, and finally we check for the end condition and stop the timer. The end condition has two parts to it. The first case is the normal case, where the user clicks well inside the screen. Here, we stop when the total length travelled by the marble is greater than or equal to the total length to be travelled. Simple enough. The second case is when the user clicks on the very corners of the screen. Like I said before, the values marble1.x and marble1.y denote the center point of the marble. When the user clicks on a corner, the marble moves towards the point, and after some time tries to go outside the screen; this is when the bounds checking comes into play and corrects the marble position so that the marble stays inside the screen. In this case the marble will never travel a distance of totLenToTravelSqrd, because of the correction in its position. So here we detect the end condition when there is not much change in the marble's position. I use the value 4 in the second condition above; after experimenting with a few values, 4 seemed to work okay. There is a small thing missing in the code above. In the normal case, case 1, when the update method runs for the last time, the marble's position overshoots the destination point. This happens because the position is incremented in steps which are not small enough, so in this case too we should correct the marble position, so that the center point of the marble sits exactly on top of the destination point. I'll add this later and update the post. This has been a pretty long post already, so I'll leave you with a video of how this program looks while running. Notice in the video that the marble moves like a bot, without any grace whatsoever. And that is because the speed of the marble is fixed at 6. In the next post we will see how to make the marble move a little more elegantly. And also, if Atan2(), Sine() and Cosine() are a little too much to digest, we'll see how to achieve the same effect without using them, in the post after next, maybe. Ciao!
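    A rough sketch of that missing overshoot correction, to run at the top of UpdatePosition() (my guess at the fix, not the post's eventual update): once less than one step remains, snap the marble onto the click point and stop.

        //if the remaining distance is less than one step (speed units),
        //snap the marble onto the destination and stop, instead of overshooting
        double remX = destX - marble1.x;
        double remY = destY - marble1.y;
        if (remX * remX + remY * remY <= 6 * 6)   //6 is the speed used above
        {
            marble1.x = destX;
            marble1.y = destY;
            timer1.Enabled = false;
            return;
        }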

    Read the article

  • Analyzing Memory Usage: Java vs C++ Negligible?

    - by Anthony
    How does the memory usage of an integer object written in Java compare/contrast with the memory usage of an integer object written in C++? Is the difference negligible? No difference? A big difference? I'm guessing it's the same because an int is an int regardless of the language (?) The reason I ask is because I was reading about the importance of knowing when a program's memory requirements will prevent the programmer from solving a given problem. What fascinated me is the amount of memory required for creating a single Java object. Take, for example, an integer object. Correct me if I'm wrong, but a Java integer object requires 24 bytes of memory:

    - 4 bytes for its int instance variable
    - 16 bytes of overhead (a reference to the object's class, garbage collection info, and synchronization info)
    - 4 bytes of padding

    As another example, a Java array (which is implemented as an object) requires 48+ bytes:

    - 24 bytes of header info
    - 16 bytes of object overhead
    - 4 bytes for the length
    - 4 bytes of padding
    - plus the memory needed to store the values

    How do these memory usages compare with the same code written in C++? I used to be oblivious to the memory usage of the C++ and Java programs I wrote, but now that I'm beginning to learn about algorithms, I'm having a greater appreciation for the computer's resources.

    Read the article

  • Strategies for managing the use of types in Python

    - by dave
    I'm a long-time programmer in C# but have been coding in Python for the past year. One of the big hurdles for me was the lack of type definitions for variables and parameters. While I totally get the idea of duck typing, I do find it frustrating that I can't tell the type of a variable just by looking at it. This is an issue when you look at someone else's code where they've used ambiguous names for method parameters (see the edit below). In a few cases, I've added asserts to ensure parameters comply with an expected type, but this goes against the whole duck typing thing. On some methods, I'll document the expected type of parameters (e.g., a list of user objects), but even this seems to go against the idea of just using an object and letting the runtime deal with exceptions. What strategies do you use to avoid typing problems in Python? Edit: An example of the parameter naming issues: in our code base we have a task object (an ORM object) and a task_obj object (a higher-level object that embeds a task). Needless to say, many methods accept a parameter named 'task'. The method might expect a task or a task_obj or some other construct such as a dictionary of task properties; it is not clear. It is then up to me to look at how that parameter is used in order to work out what the method expects.

    Read the article

  • Multi-Resolution Mobile Development

    - by user2186302
    I'm about to start development on my first game for mobile phones (I already have a completed Flash prototype, so it's just a matter of "porting" it to mobile and fixing up the code) and I hope to get the game working on iPhones and most Android devices. I am using Haxe along with OpenFL and HaxeFlixel for development. My question is: what resolution should I design the game at initially, and/or what is the best way to develop a game for multiple resolutions? I have found multiple different methods; the best, in my opinion, is strategy 3 on this page: http://wiki.starling-framework.org/manual/multi-resolution_development. However, I have some questions about this. First, what would the best base resolution be? The guide suggests 240*320, which seems alright to me, although if I choose to use pixel graphics, as I most probably will given I'm using HaxeFlixel, I'm not sure if they'll look too blocky on larger screens. I'm not even sure that's a problem, as it might still look alright. (Honestly, I'm not sure about that, and I'd love to see examples of games that use this method and look nice.) Finally, please feel free to share whatever methods you use and think are best. For example, HaxeFlixel has a scaling feature that scales the game to fit the exact screen size, but I'm afraid that would lead to blurry and improperly scaled graphics, since it would scale by non-integers. I'm not sure how noticeable a problem that may or may not be, although from experience I'm pretty sure it won't look nice, and currently I don't think I'm going to go for this option. So, I would really appreciate any help on this subject. Thank you in advance.
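    For what it's worth, the usual way to keep pixel art crisp under strategy 3 is to scale by the largest whole number that fits the device and letterbox the remainder. A sketch of the arithmetic in C# (the Haxe version is analogous; deviceW/deviceH stand in for the real screen size):

        int baseW = 240, baseH = 320;   // the design resolution discussed above
        // largest integer scale that still fits both dimensions
        int scale = Math.Max(1, Math.Min(deviceW / baseW, deviceH / baseH));
        int viewW = baseW * scale, viewH = baseH * scale;
        // center the scaled viewport; the leftover margins are letterboxed
        int offsetX = (deviceW - viewW) / 2;
        int offsetY = (deviceH - viewH) / 2;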

    Read the article

  • How to avoid throwing vexing exceptions?

    - by Mike
    Reading Eric Lippert's article on exceptions was definitely an eye-opener on how I should approach exceptions, both as the producer and as the consumer. However, I'm still struggling to define a guideline regarding how to avoid throwing vexing exceptions. Specifically: Suppose you have a Save method that can fail because a) somebody else modified the record before you, or b) the value you're trying to create already exists. These conditions are to be expected and not exceptional, so instead of throwing an exception you decide to create a Try version of your method, TrySave, which returns a boolean indicating whether the save succeeded. But if it fails, how will the consumer know what the problem was? Or would it be best to return an enum indicating the result, something like Ok/RecordAlreadyModified/ValueAlreadyExists? With integer.TryParse this problem doesn't exist, since there's only one reason the method can fail. Is the previous example really a vexing situation? Or would throwing an exception in this case be the preferred way? I know that's how it's done in most libraries and frameworks, including the Entity Framework. How do you decide when to create a Try version of your method vs. providing some way to test beforehand whether the method will work or not? I'm currently following these guidelines: If there is the chance of a race condition, then create a Try version; this prevents the need for the consumer to catch an exogenous exception (for example, in the Save method described before). If the method to test the condition would pretty much do all that the original method does, then create a Try version (for example, integer.TryParse()). In any other case, create a method to test the condition.
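    A sketch of what the enum-returning variant from the question might look like (a pattern illustration only; Record and store are hypothetical):

        public enum SaveResult { Ok, RecordAlreadyModified, ValueAlreadyExists }

        public SaveResult TrySave(Record record)
        {
            // Note: the checks and the write still need to be atomic (e.g. one
            // transaction), or the race conditions described above remain.
            if (store.HasNewerVersion(record))     // hypothetical data-access helper
                return SaveResult.RecordAlreadyModified;
            if (store.ValueExists(record.Value))   // hypothetical
                return SaveResult.ValueAlreadyExists;
            store.Write(record);
            return SaveResult.Ok;
        }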

    Read the article

  • The Silverlight 4 Training Kit and Green Eggs &amp; Ham

    - by Jim Duffy
    Microsoft has released the Silverlight 4 Training Kit, which steps you through the process of constructing Silverlight 4 business applications. "The Silverlight 4 Training Course includes a whitepaper explaining all of the new Silverlight 4 features, several hands-on labs that explain the features, and an 8-unit course for building business applications with Silverlight 4. The business applications course includes 8 modules with extensive hands-on labs as well as 25 accompanying videos that walk you through key aspects of building a business application with Silverlight. Key aspects in this course are working with numerous sandboxed and elevated out-of-browser features, the new RichTextBox control, implicit styling, webcam, drag and drop, multi-touch, validation, authentication, MEF, WCF RIA Services, right mouse click, and much more!" What I think is pretty cool is that there are two ways to access this content: online and offline. Obviously the online version is great when you're sitting at your desk and you're connected to the web. What about when you don't have a connection, like when you're located where you won't eat green eggs & ham, like on a train or on a plane perhaps? :-) You can download the offline version and hope that Sam I Am won't be too distracting while you try to watch the videos or work your way through the labs. :-) Have a day. :-|

    Read the article

  • links for 2011-03-01

    - by Bob Rhubart
    Oracle Technology Network Architect Day: Denver - March 23 This live one-day event will bring together architects from a broad range of disciplines and domains to share insights and expertise in the use of Oracle technologies to meet the challenges today's architects regularly face. (tags: oracle otn architect entarch)
    Java.net Reborn (Oracle Technology Network Blog (aka TechBlog)) "The migration was a huge effort. Over 1400 projects were migrated (and some 30 projects are left to go). A large part of the migration was a big cleanup of abandoned projects... The new java.net site is smaller, faster and now the percentage of good, current content is much higher." (tags: oracle otn java java.net)
    This Week: OTN Java Developer Day in Boston, Massachusetts (US) | Java.net "This Thursday, March 3, the Oracle Technology Network will be hosting an OTN Developer Day titled You are the future of Java in Boston, Massachusetts (US). The all-day event includes a keynote address ("Java, the Language of the Future") and four separate tracks..." (tags: oracle otn java event)
    A brief introduction to BRM and architecture (Red Adventure) Yani Miguel offers a primer on the architecture behind Billing and Revenue Management. (tags: oracle otn brm)
    SOA Suite Integration: Part 1: Building a Web Service (The Shorten Spot) Anthony Shorten's first post in a new series "will not feature SOA Suite at all, but will concentrate on the capability for the Oracle Utilities Application Framework to create Web Services you can use for integration." (tags: oracle otn SOA soasuite)
    Darwin-IT: VirtualBox on Windows XP Martien van den Akker shares a few tips. (tags: oracle otn virtualization virtualbox)
    Pas Apicella: Developing RESTful Web Services from JDeveloper 11g (11.1.1.4) Pas says: "In this example we use JDeveloper to create a basic JAX-RS Web Service from support provided within the code editor as no wizard support exists within JDeveloper 11g at this stage." (tags: oracle otn REST SOA)
    Alexander Buckley: Maintenance Review of the Java VM Specification The Java Virtual Machine Specification is the authoritative reference for the design of the Java virtual machine that underpins the Java SE platform. In an implementation-independent manner, the Specification describes the architecture, linking model, and instruction set of the Java virtual machine. (tags: oracle otn java virtualization jvm javase)

    Read the article

  • How best to keep bumbling, non-technical managers at bay and still deliver good work?

    - by Curious
    This question may be considered subjective (I got a warning) and be closed, but I will risk it, as I need some good advice/experience on this. I read the following at the 'About' page of Fog Creek Software, the company that Joel Spolsky founded and is CEO of: Back in the year 2000, the founders of Fog Creek, Joel Spolsky and Michael Pryor, were having trouble finding a place to work where programmers had decent working conditions and got an opportunity to do great work, without bumbling, non-technical managers getting in the way. Every high tech company claimed they wanted great programmers, but they wouldn’t put their money where their mouth was. It started with the physical environment (with dozens of cubicles jammed into a noisy, dark room, where the salespeople shouting on the phone make it impossible for developers to concentrate). But it went much deeper than that. Managers, terrified of change, treated any new idea as a bizarre virus to be quarantined. Napoleon-complex junior managers insisted that things be done exactly their way or you’re fired. Corporate Furniture Police writhed in agony when anyone taped up a movie poster in their cubicle. Disorganization was so rampant that even if the ideas were good, it would have been impossible to make a product out of them. Inexperienced managers practiced hit-and-run management, issuing stern orders on exactly how to do things without sticking around to see the farcical results of their fiats. And worst of all, the MBA-types in charge thought that coding was a support function, basically a fancy form of typing. A blunt truth about most of today's big software companies! Unfortunately not every developer is as gutsy (or lucky, may I say?) as Joel Spolsky! So my question is: How best to work with such managers, keep them at bay and still deliver great work?

    Read the article

  • Archbeat Link-O-Rama Top 10 Tweets for October 2013

    - by OTN ArchBeat
    What caught the attention of the 1,988 people who follow @OTNArchBeat last month? The answer is below, in the list of Top 10 Tweets for October 2013.

    RT @java: Which women in tech inspire you? Blog about them on Ada Lovelace Day! #ALD13 Oct 10, 2013 at 11:14 AM
    RT @ORCL_Linux: New blog post: Announcing Unbreakable Enterprise Kernel Release 3 for #Oracle #Linux Oct 21, 2013 at 07:11 PM
    RT @glassfish: Quick & Dirty How-to Guide: Install #GlassFish 4 on #RaspberryPi. Creating an #IoT infra via @MkHeck Oct 27, 2013 at 07:19 PM
    RT @java: Nighthacking with James Gosling, interview from Hawaii, watch live Oct. 23, 11am PT #java Oct 21, 2013 at 11:26 AM
    RT @ensode: "Oracle has posted blogs on how to migrate from #Spring to #JavaEE" I wrote the linked article Oct 07, 2013 at 10:53 AM
    SOA and User Interfaces - by @soacommunity @hajonormann @gschmutz @t_winterberg et al #industrialsoa Oct 03, 2013 at 01:17 PM
    RT @oracleace: Welcome and congrats to new #ACEDs @kevin_mcginley and Rene van Wijk @MiddlewareMagic Oct 25, 2013 at 12:59 PM
    SOA in Real Life: Mobile Solutions by @soacommunity @HajoNormann @gschmutz @t_winterberg et al #industrialsoa Oct 28, 2013 at 09:23 AM
    RT @OracleAnalytics: Curious to how big #oow13 was? Here’s an infographic to show you some of the stats. Oct 25, 2013 at 01:13 PM
    Free Poster: ACM in Practice >> thanks to @dschmeid @hajonormann @torsten_winterberg @tbmaier @gschmutz et al. Oct 16, 2013 at 09:56 AM

    Thought for the Day: "You can converge a toaster and a refrigerator, but those things are probably not going to be pleasing to the user." — Tim Cook, CEO of Apple Inc. (Born November 1, 1960) Source: brainyquote.com

    Read the article

  • Php profiling on production server or other options

    - by absentx
    Alright, I need some help here. I am commonly asked to speed up certain sections of some websites that I program for, but I have yet to figure out how to use a good PHP diagnosis/profiling tool. Some things to consider: The sites I am working on are already built, and getting a testing server set up locally is just a huge pain... I have to rewrite include paths and so many other things. This is a results-oriented deal, and spending days getting a site fully working on a testing platform so I can debug one page probably isn't an option. I can write tons of PHP, but I have no clue how to interact with or mess with servers. So every tutorial I read about setting up Xdebug or XHProf seems to involve getting something installed on a production server that I don't have access to, or have no clue how to work with. So are there any solutions out there that will show me where my PHP is slow without having to do all sorts of server stuff that I just don't know how to do? XHProf seems the closest to usable for me, but from what I can tell it still has to be installed on a server. If anyone can just point me in the right direction on this I would be very grateful. Maybe getting these things put on the server isn't a big deal... but I have never interacted with server command lines or anything like that. I suppose I should start sometime, but I really have no idea where to start. Plus, I realize that profiling on a live platform is not the greatest idea either, but I feel I am in a tough spot: I have speed issues to solve, and setting up a local environment, while a great idea, just doesn't seem very practical at the moment.

    Read the article

  • Application Scope v's Static - Not Quite the same

    - by Duncan Mills
    An interesting question came up today which, innocent as it sounded, needed a second or two to consider. What's the difference between storing, say, a Map of reference information as a static as opposed to storing the same map as an application-scoped variable in JSF? From the perspective of the web application itself there seems to be no functional difference: in both cases the information is confined to the current JVM and potentially visible to your app code (note that Application Scope is not magically propagated across a cluster; you would need a separate instance on each VM). To my mind the primary consideration here is a matter of leakage. A static will be (potentially) visible to everything running within the same VM (OK, this depends on which class-loader was used, but let's keep this simple), and this includes your model code and indeed other web applications running in the same container. An Application Scoped object, in JSF terms, is much more ring-fenced and is only visible to the web app itself, not other web apps running on the same server, and not directly to the business model layer if that is running in the same VM. So given that I'm a big fan of coding applications to say what I mean, using Application Scope appeals because it explicitly states how I expect the data to be used and provides a more explicit statement about visibility and indeed dependency, as I'd generally explicitly inject it where it is needed. Alternative viewpoints / thoughts are, as ever, welcomed...

    Read the article

  • Visual Studio 2010 Modeling and Architecture Tools

    - by MikeParks
    Jennifer Marsman (Microsoft Evangelist) and Cameron Skinner (Microsoft Visual Studio Product Unit Manager) recently stopped by our office while passing through Louisville on their tour, to give us a presentation on the new Visual Studio 2010 Modeling and Architecture Tools. I checked out these new features when the Visual Studio 2010 beta versions originally rolled out and have been really impressed with this stuff ever since, so it was pretty cool to actually learn some new techniques from Cameron himself, since he helped write the actual code behind some of those features. If you've upgraded to Visual Studio 2010 recently, I would highly recommend using the Architecture tools. They're awesome. There's even an SDK for it if you want to make improvements. There are plenty of blogs out there to show you how to use it. I've been waiting to find a tool like this that lets me really analyze the code in solutions and projects and see how everything ties together. It's really handy if you're asked to work on a new project and aren't familiar with how it works. Just run the tools, analyze the DLLs, learn how everything works, and then you'll be ready to implement new code! It's a great tool for learning new systems quickly and easily, and it's all housed within the Visual Studio IDE. I just wanted to write a blog to brag about it a little bit, so I figured I'd throw this up here. It's a must-have tool for developers/architects. Here are some screenshots from when I was using it earlier: Thanks everyone! - Mike

    Read the article

  • Copy-and-Pasted Test Code: How Bad is This?

    - by joshin4colours
    My current job is mostly writing GUI test code for various applications that we work on. However, I find that I tend to copy and paste a lot of code within tests. The reason for this is that the areas I'm testing tend to be similar enough to need repetition, but not quite similar enough to encapsulate code into methods or objects. I find that when I try to use classes or methods more extensively, tests become more cumbersome to maintain and sometimes outright difficult to write in the first place. Instead, I usually copy a big chunk of test code from one section, paste it into another, and make any minor changes I need. I don't use more structured ways of coding, such as applying more OO principles or functions. Do other coders feel this way when writing test code? Obviously I want to follow DRY and YAGNI principles, but I find that test code (automated test code for GUI testing, anyway) can make these principles tough to follow. Or do I just need more coding practice and a better overall system of doing things? EDIT: The tool I'm using is SilkTest, which uses a proprietary language called 4Test. These tests are mostly for Windows desktop applications, but I have also tested web apps using this setup.
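    One common middle ground between raw copy-paste and heavy OO structure is a parameterized helper that captures the repeated steps and takes the variations as arguments. A sketch in C# against an imaginary GUI-automation API (Window, TextField, etc. are made up; the 4Test equivalent would have the same shape):

        // FillAndSubmit captures the steps that were being copy-pasted; only the
        // data and the expected outcome vary from test to test.
        void FillAndSubmit(Window dialog, string name, string amount, bool expectError)
        {
            dialog.TextField("Name").SetText(name);
            dialog.TextField("Amount").SetText(amount);
            dialog.Button("OK").Click();
            Assert.AreEqual(expectError, dialog.ErrorBanner.IsVisible);
        }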

    Read the article

  • Google Image Search Quick Fix

    - by Asian Angel
    Are you tired of unneeded webpage loading and extra link clicking just to access an image found using Google Image Search? Now you can jump directly to the image itself with the clickGOOGLEview extension for Google Chrome. The Problem When you find an image that you like using Google Image Search you always have to go through extra hassle just to get to the image itself. First you have an entire webpage loading in your browser and then you have to click through that irritating “See full size image” link. All that you need is the image, right? Problem Fixed Once you have installed the clickGOOGLEview extension you will absolutely love the result. Find an image that you like, click the link, and there is your new image without any of the hassle or extra link clicking. Big or small having direct access to the image is how it should have been from the beginning. Conclusion The clickGOOGLEview extension does one thing and does it extremely well…it gets you to those images without the extra hassle or additional link clicking. Links Download the clickGOOGLEview extension (Google Chrome Extensions)

    Read the article

  • OpenGL setup on Windows

    - by kevin james
    I have been trying to use OpenGL for two days now. First on Mac, then on Windows. The problem with Mac is that it doesn't support the newer versions of OpenGL. I ran a tutorial that actually did get some things working, but it only works in Xcode (i.e., I can't create a new file, paste in the same code, and get it to work). Because of these issues, I moved to Windows. My Windows 7 machine has OpenGL 4.3, which is the same version used in a lot of other tutorials. However, not one of these tutorials gives any instruction on how to set it up for the first time. I have come across some vague posts saying that some libraries need to be linked. But WHAT libraries, and HOW do I link them? Please help. I am pretty desperate to set this up, as this project is due for work soon. I have actually used OpenGL before at my university, but the computers there already had everything set up. The project itself is very easy, but setting up OpenGL is not something I know how to do.

    Read the article

  • Developing Schema Compare for Oracle (Part 2): Dependencies

    - by Simon Cooper
    In developing Schema Compare for Oracle, one of the issues we came across was the size of the databases. As detailed in my last blog post, we had to allow schema pre-filtering due to the number of objects in a standard Oracle database. Unfortunately, this leads to some quite tricky situations regarding object dependencies. This post explains how we deal with these dependencies.

    1. Cross-schema dependencies

    Say, in the following database, you're populating SchemaA, and synchronizing SchemaA.Table1:

    SOURCE:
    CREATE TABLE SchemaA.Table1 ( Col1 NUMBER REFERENCES SchemaB.Table1(Col1));
    CREATE TABLE SchemaB.Table1 ( Col1 NUMBER PRIMARY KEY);

    TARGET:
    CREATE TABLE SchemaA.Table1 ( Col1 VARCHAR2(100) REFERENCES SchemaB.Table1(Col1));
    CREATE TABLE SchemaB.Table1 ( Col1 VARCHAR2(100) PRIMARY KEY);

    We need to do a rebuild of SchemaA.Table1 to change Col1 from a VARCHAR2(100) to a NUMBER. This consists of:

    1. Creating a table with the new schema
    2. Inserting data from the old table to the new table, with appropriate conversion functions (in this case, TO_NUMBER)
    3. Dropping the old table
    4. Renaming the new table to the same name as the old table

    Unfortunately, in this situation, the rebuild will fail at step 1, as we're trying to create a NUMBER column with a foreign key reference to a VARCHAR2(100) column. As we're only populating SchemaA, the naive implementation of the object population prefiltering (sticking a WHERE owner = 'SCHEMAA' on all the data dictionary queries) will generate an incorrect sync script. What we actually have to do is:

    1. Drop the foreign key constraint on SchemaA.Table1
    2. Rebuild SchemaB.Table1
    3. Rebuild SchemaA.Table1, adding the foreign key constraint to the new table

    This means that in order to generate a correct synchronization script for SchemaA.Table1 we have to know what SchemaB.Table1 is, and that it also needs to be rebuilt to successfully rebuild SchemaA.Table1. SchemaB isn't the schema that the user wants to synchronize, but we still have to load the table and column information for SchemaB.Table1 the same way as any table in SchemaA. Fortunately, Oracle provides (mostly) complete dependency information in the dictionary views. Before we actually read the information on all the tables and columns in the database, we can get dependency information on all the objects that are either pointed at by objects in the schemas we're populating, or point to objects in the schemas we're populating (think about what would happen if SchemaB was being explicitly populated instead), with a suitable query on all_constraints (for foreign key relationships) and all_dependencies (for most other types of dependencies, e.g. a function using another function). The extra objects found can then be included in the actual object population, and the sync wizard then has enough information to figure out the right thing to do when we get to actually synchronize the objects. Unfortunately, this isn't enough.

    2. Dependency chains

    The solution above will only get the immediate dependencies of objects in populated schemas. What if there's a chain of dependencies?

    A.tbl1 -> B.tbl1 -> C.tbl1 -> D.tbl1

    If we're only populating SchemaA, the implementation above will only include B.tbl1 in the dependent objects list, whereas we might need to know about C.tbl1 and D.tbl1 as well, in order to ensure a modification on A.tbl1 can succeed. What we actually need is a graph traversal on the dependency graph that all_dependencies represents.
    Fortunately, we don't have to read all the database dependency information from the server and run the graph traversal on the client computer, as Oracle provides a method of doing this in SQL: CONNECT BY. So, we can put all the dependencies we want to include together in a big bag with UNION ALL, then run a SELECT ... CONNECT BY on it, starting with objects in the schema we're populating. We should end up with all the objects that might be affected by modifications in the initial schema we're populating. Good solution? Well, no. For one thing, it's sloooooow. all_dependencies, on my test databases, has got over 110,000 rows in it, and the entire query, for which Oracle was creating a temporary table to hold the big bag of graph edges, was often taking upwards of two minutes. This is too long, and would only get worse for large databases. But it had some more fundamental problems than just performance.

    3. Comparison dependencies

    Consider the following schema:

    SOURCE:
    CREATE TABLE SchemaA.Table1 ( Col1 NUMBER REFERENCES SchemaB.Table1(col1));
    CREATE TABLE SchemaB.Table1 ( Col1 NUMBER PRIMARY KEY);

    TARGET:
    CREATE TABLE SchemaA.Table1 ( Col1 VARCHAR2(100));
    CREATE TABLE SchemaB.Table1 ( Col1 VARCHAR2(100));

    What will happen if we use the dependency algorithm above on the source and target databases? Well, SchemaA.Table1 has a foreign key reference to SchemaB.Table1, so that will be included in the source database population. On the target, SchemaA.Table1 has no such reference, therefore SchemaB.Table1 will not be included in the target database population. In the resulting comparison of the two object models, what you end up with is:

    SOURCE             TARGET
    SchemaA.Table1  -> SchemaA.Table1
    SchemaB.Table1  -> (no object exists)

    When this comparison is synchronized, we will see that SchemaB.Table1 does not exist on the target, so we will try the following sequence of actions:

    1. Create SchemaB.Table1
    2. Rebuild SchemaA.Table1, with the foreign key to SchemaB.Table1

    Oops. Because the dependencies are only followed within a single database, we've tried to create an object that already exists. To fix this we can include any objects found as dependencies in the source or target databases in the object population of both databases. SchemaB.Table1 will then be included in the target database population, and we won't try and create objects that already exist. All good? Well, consider the following schema (again, only explicitly populating SchemaA, and synchronizing SchemaA.Table1):

    SOURCE:
    CREATE TABLE SchemaA.Table1 ( Col1 NUMBER REFERENCES SchemaB.Table1(col1));
    CREATE TABLE SchemaB.Table1 ( Col1 NUMBER PRIMARY KEY);
    CREATE TABLE SchemaC.Table1 ( Col1 NUMBER);

    TARGET:
    CREATE TABLE SchemaA.Table1 ( Col1 VARCHAR2(100));
    CREATE TABLE SchemaB.Table1 ( Col1 VARCHAR2(100) PRIMARY KEY);
    CREATE TABLE SchemaC.Table1 ( Col1 VARCHAR2(100) REFERENCES SchemaB.Table1);

    Although we're now including SchemaB.Table1 on both sides of the comparison, there's a third table (SchemaC.Table1) that we don't know about that will cause the rebuild of SchemaB.Table1 to fail if we try to synchronize SchemaA.Table1. That's because we're only running the dependency query on the schemas we're explicitly populating; to solve this issue, we would have to run the dependency query again, but this time starting the graph traversal from the objects found in the other database.
    Furthermore, this dependency chain could be arbitrarily extended. This leads us to the following algorithm for finding all the dependencies of a comparison:

    1. Find the initial dependencies of the schemas the user has selected to compare, on the source and target
    2. Include these objects in both the source and target object populations
    3. Run the dependency query on the source, starting with the objects found as dependents on the target, and vice versa
    4. Repeat steps 2 & 3 until no more objects are found

    For the schema above, this will result in the following sequence of actions:

    1. Find initial dependencies:
       SchemaA.Table1 -> SchemaB.Table1 found on source
       No objects found on target
    2. Include objects in both source and target:
       SchemaB.Table1 included in source and target
    3. Run dependency query, starting with found objects:
       No objects to start with on source
       SchemaB.Table1 -> SchemaC.Table1 found on target
    4. Include objects in both source and target:
       SchemaC.Table1 included in source and target
    5. Run dependency query on found objects:
       No objects found on source
       No objects to start with on target
    6. Stop

    This will ensure that we include all the necessary objects to make any synchronization work. However, there is still the issue of query performance; the CONNECT BY on the entire database dependency graph is still too slow. After much sitting down and drawing complicated diagrams, we decided to move the graph traversal algorithm from the server onto the client (which turned out to run much faster on the client than on the server); and to ensure we don't read the entire dependency graph onto the client, we also pull the graph across in bits: we start off with dependency edges involving schemas selected for explicit population, and whenever the graph traversal comes across a dependency reference to a schema we don't yet know about, a thunk is hit that pulls in the dependency information for that schema from the database. We continue passing more dependent objects back and forth between the source and target until no more dependency references are found. This gives us the list of all the extra objects to populate in the source and target, and object population can then proceed.
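    To make the shape of that client-side walk concrete, here is a minimal sketch in C# (my illustration, not the actual Schema Compare code; fetchSchemaEdges stands in for the per-schema query against all_dependencies/all_constraints, and the source/target ping-pong is left out):

        using System;
        using System.Collections.Generic;

        class DependencyWalker
        {
            // edges["A.TBL1"] lists the objects A.TBL1 depends on
            readonly Dictionary<string, List<string>> edges = new Dictionary<string, List<string>>();
            readonly HashSet<string> loadedSchemas = new HashSet<string>();
            readonly Func<string, IEnumerable<KeyValuePair<string, string>>> fetchSchemaEdges;

            public DependencyWalker(Func<string, IEnumerable<KeyValuePair<string, string>>> fetchSchemaEdges)
            {
                this.fetchSchemaEdges = fetchSchemaEdges;   // hypothetical data access
            }

            void EnsureSchemaLoaded(string schema)
            {
                if (!loadedSchemas.Add(schema)) return;         // already pulled across
                foreach (var edge in fetchSchemaEdges(schema))  // the "thunk"
                {
                    List<string> targets;
                    if (!edges.TryGetValue(edge.Key, out targets))
                        edges[edge.Key] = targets = new List<string>();
                    targets.Add(edge.Value);
                }
            }

            // Breadth-first traversal from the explicitly populated objects.
            public HashSet<string> FindAllDependencies(IEnumerable<string> startObjects)
            {
                var included = new HashSet<string>(startObjects);
                var toVisit = new Queue<string>(included);
                while (toVisit.Count > 0)
                {
                    string obj = toVisit.Dequeue();
                    EnsureSchemaLoaded(obj.Split('.')[0]);  // lazy-load on first contact
                    List<string> deps;
                    if (!edges.TryGetValue(obj, out deps)) continue;
                    foreach (string dep in deps)
                        if (included.Add(dep))              // not seen before
                            toVisit.Enqueue(dep);
                }
                return included;
            }
        }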
    4. Object blacklists and fast dependencies

    When we tested this solution, we were puzzled that in some of our databases most of the system schemas (WMSYS, ORDSYS, EXFSYS, XDB, etc.) were being pulled in, and this was increasing the database registration and comparison time quite significantly. After debugging, we discovered that the culprits were database tables that used one of the Oracle PL/SQL types (e.g. the SDO_GEOMETRY spatial type). These were creating a dependency chain from the database tables we were populating to the system schemas, and hence pulling in most of the system objects in that schema. To solve this we introduced blacklists of objects we wouldn't follow any dependency chain through. As well as the Oracle-supplied PL/SQL types (MDSYS.SDO_GEOMETRY and ORDSYS.SI_COLOR, among others) we also decided to blacklist the entire PUBLIC and SYS schemas, as any references to those would likely lead to a blow-up in the dependency graph that would massively increase the database registration time, and could result in the client running out of memory. Even with these improvements, each dependency query was taking upwards of a minute. We discovered from Oracle execution plans that there were some columns, with dependency information we required, that were querying system tables with no indexes on them! To cut a long story short, running the following query:

    SELECT * FROM all_tab_cols WHERE data_type_owner = 'XDB';

    results in a full table scan of the SYS.COL$ system table! This single clause was responsible for over half the execution time of the dependency query. Hence, the 'Ignore slow dependencies' option was born: not querying this and a couple of similar clauses drastically speeds up the dependency query execution time, at the expense of producing incorrect sync scripts in rare edge cases. Needless to say, along with the sync script action ordering, the dependency code in the database registration is one of the most complicated and most rewritten parts of the Schema Compare for Oracle engine. The beta of Schema Compare for Oracle is out now; if you find a bug in it, please do tell us so we can get it fixed!

    Read the article

  • How do I copy a package from Debian to my PPA?

    - by Bernhard Reiter
    I'd like to add the latest gourmet package from Debian sid to our team's PPA, so Ubuntu users who would like to run an up-to-date version of Gourmet can add that PPA to their software sources. (Dependency-wise, that shouldn't be much of an issue, as pretty much all our current dependencies are already available in all currently supported Ubuntu versions.) I've downloaded the *.dsc file and the debian and orig tarballs, and even figured out I could use this for the package's source.changes file. I also downloaded the Debian maintainer's public key so dput can validate the package. I then tried to upload the package to our PPA using dput ppa:~gourmet/ppa gourmet_0.17.3-1_source.changes (I also tried without the tilde.) This seemed to succeed, but I didn't get a confirmation email, and no packages are now displayed at our PPA, which leads me to believe that the package was rejected because the Debian maintainer's key is obviously not among our team members' keys. So what's the easiest way to "copy" a package from Debian (sid) to a Launchpad PPA? Do I really need to rebuild the entire package locally before I can upload it?

    Read the article

  • AJAX driven "page complete" function? Am I doing it right?

    - by Julian H. Lam
    This one might get me slaughtered, since I'm pretty sure it's bad coding practice, so here goes: I have an AJAX-driven site which loads both content and JavaScript in one go using Mootools' Request.HTML. Since I have initialization scripts that need to be run to finish "setting up" the template, I include those in a function called pageComplete() on every page. Moving from one page to another causes the previous pageComplete() function to no longer apply, since a new one is defined. The JavaScript function that loads pages dynamically calls pageComplete() blindly when the AJAX call is completed and the result is loaded onto the page:

        function loadPage(page, params) { // page is a string, params is a javascript object
            if (pageRequest && pageRequest.isRunning) pageRequest.cancel();
            pageRequest = new Request.HTML({
                url: '<?=APPLICATION_LINK?>' + page,
                evalScripts: true,
                onSuccess: function(tree, elements, html) {
                    // Empty previous content and insert new content
                    $('content').empty();
                    $('content').innerHTML = html;
                    pageComplete();
                    pageRequest = null;
                }
            }).send('params=' + JSON.encode(params));
        }

    So yes, if pageComplete() is not defined in one of the pages, the old pageComplete() is called, which could potentially be disastrous. But as of now, every single page has pageComplete() defined, even if it is empty. Good idea, bad idea?

    Read the article

  • Calculating 3d rotation around random axis

    - by mitim
    This is actually a solved problem, but I want to understand why my original method didn't work (hoping someone with more knowledge can explain). (Keep in mind, I'm not very experienced in 3D programming, having only played with the very basics for a little bit... nor do I have a lot of mathematical experience in this area.) I wanted to animate a point rotating around another point at a random axis, say 45 degrees along the y axis (think of an electron around a nucleus). I know how to rotate using the transform matrix along the X, Y and Z axes, but not around an arbitrary (45 degree) axis. Eventually, after some research, I found a suggestion: rotate the point by -45 degrees around the Z so that it is aligned, then rotate by some increment along the Y axis, then rotate it back by +45 degrees, for every frame tick. While this certainly worked, I felt that it seemed to be more work than needed (too many method calls, math, etc.) and would probably be pretty slow at runtime with many points to deal with. I thought it might be possible to combine all the rotation matrices involved into one rotation matrix and use that as a single operation. Something like:

    [ cos(-45)  -sin(-45)  0 ]
    [ sin(-45)   cos(-45)  0 ]   (rotate by -45 along Z)
    [ 0          0         1 ]

    multiplied by

    [ cos(2)   0  -sin(2) ]
    [ 0        1   0      ]   (rotate by 2 degrees, my increment, along Y)
    [ sin(2)   0   cos(2) ]

    and then that result multiplied by (in that order)

    [ cos(45)  -sin(45)  0 ]
    [ sin(45)   cos(45)  0 ]   (rotate by 45 along Z)
    [ 0         0        1 ]

    I got a mess of a matrix of numbers (since I was working with unknowns and two angles), but I felt like it should work. It did not, and I found a solution on the wiki using a different matrix, but that is something else. I'm not sure if I made an error in multiplying, but my question is: is it actually viable to solve the problem this way, taking all the separate transformations, combining them by multiplication, and then using the result, or not?
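    Combining the three rotations into one matrix is viable in principle; the usual pitfall is the order of the factors. With column vectors the matrix nearest the vector acts first, so "undo the tilt, spin, redo the tilt" composes (reading right to left) as

    R(θ) = R_z(45°) R_y(θ) R_z(−45°)

    and multiplying in the reverse order gives a different, wrong rotation; mixing up row-vector and column-vector conventions is one plausible reason a hand-multiplied version fails. Expanding such a product for a general unit axis (x, y, z) gives the standard axis-angle matrix, presumably the "different matrix" found on the wiki:

    R = [ cosθ + x²(1−cosθ)     xy(1−cosθ) − z sinθ    xz(1−cosθ) + y sinθ ]
        [ yx(1−cosθ) + z sinθ   cosθ + y²(1−cosθ)      yz(1−cosθ) − x sinθ ]
        [ zx(1−cosθ) − y sinθ   zy(1−cosθ) + x sinθ    cosθ + z²(1−cosθ)   ]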

    Read the article
