Search Results

Search found 97822 results on 3913 pages for 'static code analysis'.


  • Google I/O 2010 - Moving beyond markers: Advanced Maps API customization

    Geo 301. Speakers: Jez Fletcher, David Day. With such a large number of Google Maps API sites online, it can be hard to make your site stand out from the crowd. This session covers ways in which you can enhance your Maps API application to truly differentiate it, including customizing your overlays, controls, and map. For all I/O 2010 sessions, please go to code.google.com. From: GoogleDevelopers | Views: 16 | 0 ratings | Time: 36:38 | More in Science & Technology

    Read the article

  • Is there an appropriate coding style for implementing an algorithm during an interview?

    - by GlenPeterson
    I failed an interview question in C years ago about converting hex to decimal, by not exploiting the ASCII table: if (inputDigitByte > 9) hex = inputDigitByte - 'a'. The rise of Unicode has made this question pretty silly, but the point was that the interviewer valued raw execution speed above readability and error handling. They tell you to review algorithms textbooks to prepare for these interviews, yet these same textbooks tend to favor the implementation with the fewest lines of code, even if it has to rely on magic numbers (like "infinity") and a slower, more memory-intensive implementation (like a linked list instead of an array) to do that. I don't know what is right. Coding an algorithm within the space of an interview has at least 3 constraints: time to code, elegance/readability, and efficiency of execution. What trade-offs are appropriate for interview code? How much do you follow the textbook definition of an algorithm?

    - Is it better to eliminate recursion, unroll loops, and use arrays for efficiency? Or is it better to use recursion and special values like "infinity" or Integer.MAX_VALUE to reduce the number of lines of code needed to write the algorithm?
    - Interface: make a very self-contained, bullet-proof interface, or sloppy and fast? On one extreme, the array to be sorted might be a public static variable. On the other extreme, it might need to be passed to each method, allowing methods to be called individually from different threads for different purposes.
    - Is it appropriate to use a linked-list data structure for items that are traversed in one direction, vs. using arrays and doubling the size when the array is full? Implementing a singly-linked list during the interview is often much faster to code and easier to remember for recursive algorithms like MergeSort.
    - Thread safety: just document that it's unsafe, or say so verbally? How much should the interviewee be looking for opportunities for parallel processing?
    - Is bit shifting appropriate? x / 2 or x >> 1?
    - Polymorphism, type safety, and generics? Comments?
    - Variable and method names: qs(a, p, q, r) vs. quickSort(theArray, minIdx, partIdx, maxIdx)?
    - How much should you use existing APIs? Obviously you can't use a java.util.HashMap to implement a hash table, but what about using a java.util.List to accumulate your sorted results?

    Are there any guiding principles that would answer these and other questions, or is the guiding principle to ask the interviewer? Or maybe this should be the basis of a discussion while writing the code? If an interviewer can't or won't answer one of these questions, are there any tips for coaxing the information out of them?
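    For reference, here is a minimal Java sketch (hypothetical code, assuming ASCII/Latin-1 input) of the two extremes I have in mind for the hex-digit conversion; the terse version is roughly what that interviewer seemed to want, the readable one is what I would write on the job:

        public class HexDigit {

            // Terse "interview-speed" version: character arithmetic, no validation.
            static int terse(char c) {
                return c <= '9' ? c - '0' : (c | 0x20) - 'a' + 10;
            }

            // Readable, defensive version: explicit ranges and a clear failure mode.
            static int readable(char digit) {
                char lower = Character.toLowerCase(digit);
                if (lower >= '0' && lower <= '9') {
                    return lower - '0';
                }
                if (lower >= 'a' && lower <= 'f') {
                    return lower - 'a' + 10;
                }
                throw new IllegalArgumentException("Not a hex digit: " + digit);
            }

            public static void main(String[] args) {
                System.out.println(terse('B'));    // prints 11
                System.out.println(readable('B')); // prints 11
            }
        }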

    Read the article

  • Google I/O 2010: Google TV Keynote, Day 2 - CEO Partner Panel

    Due to licensing and permissions issues, we are unable to show the full Google TV demonstration from the Day 2 keynote at Google I/O. Until we are able to get these permissions, please check out these clips. For Google I/O session videos, presentations, developer interviews and more, go to: code.google.com/io. From: GoogleDevelopers | Views: 7 | 0 ratings | Time: 22:43 | More in Science & Technology

    Read the article

  • First Minecraft mod not working: make a new sword

    - by yamikoWebs
    I am making my first mod and cannot see what is wrong with it. I am using MCP and ModLoader. For my first mod I was going to make swords. I started by adding new EnumToolMaterials entries:

        WOOD(0, 59, 2.0F, 0, 15),
        STONE(1, 131, 4.0F, 1, 5),
        IRON(2, 250, 6.0F, 2, 14),
        LAPIS(3, 750, 7.0F, 2, 14),
        OBSIDIAN(3, 1000, 7.5F, 3, 12),
        EMERALD(3, 1561, 8.0F, 3, 10), // diamond
        GREEN(3, 2000, 9.0F, 4, 10),   // emerald
        GOLD(0, 200, 12.0F, 0, 22);

    Then here is the mod class:

        public class _Mod_Yamiko extends BaseMod {
            /* mod items */
            public static final Item swordLapis =
                (new ItemSword(600, EnumToolMaterial.LAPIS)).setItemName("swordLapis");
            public static final Item swordObsidian =
                (new ItemSword(601, EnumToolMaterial.OBSIDIAN)).setItemName("swordObsidian");
            public static final Item swordGreen =
                (new ItemSword(602, EnumToolMaterial.GREEN)).setItemName("swordGreen");

            public void load() {
                // set images
                swordLapis.iconIndex = ModLoader.addOverride("/gui/items.png", "/gui/swordLapis.png");
                ModLoader.addName(swordLapis, "Lapis Sword");
                // craft
                ModLoader.addRecipe(new ItemStack(_Mod_Yamiko.swordLapis, 1), new Object[]{
                    " X ", " X ", " Y ", 'X', Block.dirt, 'Y', Item.stick });
            }

            public String getVersion() {
                return "0.1";
            }
        }

    Then I made a 16×16 .png image. I was not sure where to save it, so I recompiled and reobfuscated, put the mod files into my local Minecraft install, and added the image where it should be. There are no problems when playing, but I cannot make the new sword.

    Read the article

  • Some New .NET Downloads and Resources

    - by Kevin Grossnicklaus
    Last week I was fortunate enough to spend time in Redmond on Microsoft’s campus for the 2011 Microsoft MVP Summit.  It was great to hang out with a number of old friends and get the opportunity to talk tech with the various product teams up at Microsoft.  The weather wasn’t exactly sunny, but Microsoft always does a great job with the Summit and everyone had a blast (heck, I even got to run the bases at SafeCo field). While much of what we saw is covered under NDA, there are a ton of great things in the pipeline from Microsoft and many things that are already available (or just became so) that I wasn’t necessarily aware of.  The purpose of this post is to share some of the info I learned on resources and tools available to .NET developers today.  Please let me know if you have any questions (or if you know of something else cool which might benefit others). Enjoy!

    Visual Studio 2010 SP1
    Microsoft has issued the RTM release of Visual Studio 2010 SP1.  You can download the full SP1 on MSDN as of today (March 10th to the general public) and take advantage of such things as:
    - Silverlight 4 is included in the box (as opposed to a separate install)
    - Silverlight 4 profiling
    - WCF RIA Services SP1
    - IntelliTrace for 64-bit and SharePoint
    - ASP.NET now easily supports IIS Express and SQL CE
    Want a description of all that’s new beyond the above biased list (which arguably only contains items I think are important)?  Check out this KB article.

    Portable Library Tools CTP
    Without much fanfare, Microsoft has released a CTP of a new add-in to Visual Studio 2010 which simplifies code sharing between projects targeting different runtimes (i.e. Silverlight, WPF, Win7 Phone, XBox).  With this add-in installed you can add a new project of type “Portable Library” and specify which platforms you wish to target.  Once that is done, any code added to this library will be limited to use only features which are common to all selected frameworks.  Other projects can now reference this portable library and be provided assemblies custom built to their environment.  This greatly simplifies the current process of sharing linked files between platforms like WPF and Silverlight.  You can find out more about this CTP and how it works on this great blog post.

    Visual Studio Async CTP
    Microsoft has also released a CTP of a set of language and framework enhancements to provide a much more powerful asynchronous programming model.  Due to the focus on async programming on all types of platforms (and it being the ONLY option in Silverlight and Win7 Phone), a move towards a simpler and more understandable model is always a good thing. This CTP (called the Visual Studio Async CTP) can be downloaded here.  You can read more about it on this blog post.

    MSDN Code Samples Gallery
    Microsoft has also launched a new code samples gallery on their MSDN site: http://code.msdn.microsoft.com/.  This site allows you to easily search for small samples of code related to a particular technology or platform.  If a sample of code you are looking for is not found, you can request one via the site, and other developers can see your request and provide a sample to the site to suit your needs.  You can also peruse requested samples and, if you find a scenario where you can provide value, upload your own sample for the benefit of others.  Samples are packaged into the VS .vsix format and include any necessary references/dependencies.
    Because .vsix is the deployment mechanism, samples installed from the site are kept in your Visual Studio 2010 Samples Gallery for future reference. If you get a chance, check out the site and see how it is done.  Although a somewhat simple concept, I was very impressed with their implementation and the way they went about trying to suit a need.  I’ll definitely be looking there in the future as I need something or want to share something.

    MSDN Search Capabilities
    Another item I learned recently and was not aware of (and that might seem trivial to some) is the power of the MSDN site’s search capabilities.  Between the Code Samples Gallery described above and the search enhancements on MSDN, Microsoft is definitely investing in their platform to help provide developers of all skill levels the tools and resources they need to be successful. What do I mean by the MSDN search capability, and why should you care? If you go to the MSDN home page (http://msdn.microsoft.com) and use the “Search MSDN with Bing” box at the very top of the page, you will see some very interesting results.  First, the search doesn’t just search the MSDN Library; it searches:
    - MSDN Library
    - All Microsoft Blogs
    - CodePlex
    - StackOverflow
    - Downloads
    - MSDN Magazine
    - Support Knowledgebase
    (I’m not sure it even ends there, but the above are all I know of.) Beyond just searching all the above locations, the results are formatted very nicely to give some contextual information based on where each result came from.  For example, if a keyword search returns results from CodePlex, each row in the search results screen includes a large amount of information specific to CodePlex, such as page views and CodePlex ratings.  All in all, knowing that this much information is indexed and available from a single search location will lead me to utilize this as one of my initial searches for development information.

    Read the article

  • Why you need to learn async in .NET

    - by PSteele
    I had an opportunity to teach a quick class yesterday about what’s new in .NET 4.0.  One of the topics was the TPL (Task Parallel Library) and how it can make async programming easier.  I also stressed that this is the direction Microsoft is going with C# 5.0, and that learning the TPL will greatly benefit their understanding of the new async stuff.  We had a little time left over and I was able to show some code that uses the Async CTP to accomplish some stuff, but it wasn’t a simple demo that you could jump into and understand, so I thought I’d throw one together and put it in a blog post. The entire solution file with all of the sample projects is located here.

    A Simple Example
    Let’s start with a super-simple example (WindowsApplication01 in the solution). I’ve got a form that displays a label and a button.  When the user clicks the button, I want to start displaying the current time for 15 seconds and then stop. What I’d like to write is this:

        lblTime.ForeColor = Color.Red;
        for (var x = 0; x < 15; x++)
        {
            lblTime.Text = DateTime.Now.ToString("HH:mm:ss");
            Thread.Sleep(1000);
        }
        lblTime.ForeColor = SystemColors.ControlText;

    (Note that I also changed the label’s color while counting – not quite an ILM-level effect, but it adds something to the demo!) As I’m sure most of my readers are aware, you can’t write WinForms code this way.  WinForms apps, by default, only have one thread running, and its main job is to process messages from the Windows message pump (for a more thorough explanation, see my Visual Studio Magazine article on multithreading in WinForms).  If you put a Thread.Sleep in the middle of that code, your UI will be locked up and unresponsive for those 15 seconds.  Not a good UX and something that needs to be fixed.  Sure, I could throw an “Application.DoEvents()” in there, but that’s hacky.

    The Windows Timer
    Then I think, “I can solve that.  I’ll use the Windows Timer to handle the timing in the background and simply notify me when the time has changed.”  Let’s see how I could accomplish this with a Windows timer (WindowsApplication02 in the solution):

        public partial class Form1 : Form
        {
            private readonly Timer clockTimer;
            private int counter;

            public Form1()
            {
                InitializeComponent();
                clockTimer = new Timer { Interval = 1000 };
                clockTimer.Tick += UpdateLabel;
            }

            private void UpdateLabel(object sender, EventArgs e)
            {
                lblTime.Text = DateTime.Now.ToString("HH:mm:ss");
                counter++;
                if (counter == 15)
                {
                    clockTimer.Enabled = false;
                    lblTime.ForeColor = SystemColors.ControlText;
                }
            }

            private void cmdStart_Click(object sender, EventArgs e)
            {
                lblTime.ForeColor = Color.Red;
                counter = 0;
                clockTimer.Start();
            }
        }

    Holy cow – things got pretty complicated here.  I use the timer to fire off a Tick event every second.  Inside there, I can update the label.  Granted, I can’t use a simple for loop and have to maintain a global counter for the number of iterations.  And my “end” code (for when the loop is finished) is now buried at the bottom of the Tick event (inside an “if” statement).  I do, however, get a responsive application that doesn’t hang or stop repainting while the 15 seconds are ticking away. But doesn’t .NET have something that makes background processing easier?

    The BackgroundWorker
    Next I try .NET’s BackgroundWorker component – it’s specifically designed to do processing in a background thread (leaving the UI thread free to process the Windows message pump) and allows updates to be performed on the main UI thread (WindowsApplication03 in the solution):

        public partial class Form1 : Form
        {
            private readonly BackgroundWorker worker;

            public Form1()
            {
                InitializeComponent();
                worker = new BackgroundWorker { WorkerReportsProgress = true };
                worker.DoWork += StartUpdating;
                worker.ProgressChanged += UpdateLabel;
                worker.RunWorkerCompleted += ResetLabelColor;
            }

            private void StartUpdating(object sender, DoWorkEventArgs e)
            {
                var workerObject = (BackgroundWorker) sender;
                for (int x = 0; x < 15; x++)
                {
                    workerObject.ReportProgress(0);
                    Thread.Sleep(1000);
                }
            }

            private void UpdateLabel(object sender, ProgressChangedEventArgs e)
            {
                lblTime.Text = DateTime.Now.ToString("HH:mm:ss");
            }

            private void ResetLabelColor(object sender, RunWorkerCompletedEventArgs e)
            {
                lblTime.ForeColor = SystemColors.ControlText;
            }

            private void cmdStart_Click(object sender, EventArgs e)
            {
                lblTime.ForeColor = Color.Red;
                worker.RunWorkerAsync();
            }
        }

    Well, this got a little better (I think).  At least I now have my simple for/next loop back.  Unfortunately, I’m still dealing with event handlers spread throughout my code to coordinate all of this stuff in the right order. Time to look into the future.

    The async way
    Using the Async CTP, I can go back to much simpler code (WindowsApplication04 in the solution):

        private async void cmdStart_Click(object sender, EventArgs e)
        {
            lblTime.ForeColor = Color.Red;
            for (var x = 0; x < 15; x++)
            {
                lblTime.Text = DateTime.Now.ToString("HH:mm:ss");
                await TaskEx.Delay(1000);
            }
            lblTime.ForeColor = SystemColors.ControlText;
        }

    This code will run just like the Timer or BackgroundWorker versions – fully responsive during the updates – yet is way easier to implement.  In fact, it’s almost a line-for-line copy of the original version of this code.  All of the async plumbing is handled by the compiler and the framework.  My code goes back to representing the “what” of what I want to do, not the “how”. I urge you to download the Async CTP.  All you need is .NET 4.0 and Visual Studio 2010 SP1 – no need to set up a virtual machine with the VS2011 beta (unless, of course, you want to dive right into the C# 5.0 stuff!).  Start playing around with this today and see how much easier it will be in the future to write async-enabled applications.

    Read the article

  • Bullet Physics: Transform body after adding

    - by Mathias Hölzl
    I would like to transform a rigid body after adding it to the btDiscreteDynamicsWorld. When I use the CF_KINEMATIC_OBJECT flag I am able to transform it, but it is static (no collision response/gravity). When I don't use the CF_KINEMATIC_OBJECT flag, the transform doesn't get applied. So how do I transform non-static objects in Bullet? Demo code:

        btBoxShape* colShape = new btBoxShape(btVector3(SCALING*1, SCALING*1, SCALING*1));

        /// Create Dynamic Objects
        btTransform startTransform;
        startTransform.setIdentity();

        btScalar mass(1.f);

        // rigidbody is dynamic if and only if mass is non zero, otherwise static
        bool isDynamic = (mass != 0.f);

        btVector3 localInertia(0,0,0);
        if (isDynamic)
            colShape->calculateLocalInertia(mass, localInertia);

        btDefaultMotionState* myMotionState = new btDefaultMotionState();
        btRigidBody::btRigidBodyConstructionInfo rbInfo(mass, myMotionState, colShape, localInertia);
        btRigidBody* body = new btRigidBody(rbInfo);
        body->setCollisionFlags(body->getCollisionFlags() | btCollisionObject::CF_KINEMATIC_OBJECT);
        body->setActivationState(DISABLE_DEACTIVATION);

        m_dynamicsWorld->addRigidBody(body);

        startTransform.setOrigin(SCALING*btVector3(btScalar(0), btScalar(20), btScalar(0)));
        body->getMotionState()->setWorldTransform(startTransform);

    Read the article

  • First Foray – About timeout

    - by SQLMonger
    It has been quite a while since I signed up for this blog site and high time that something was posted.  I have a list of topics that I will be working through and posting.  Some I am sure will have been posted by others, but I will be sticking to the technical problems and challenges that I’ve recently faced, and the solutions that worked for me.  My motto when learning something new has always been “My kingdom for an example!”, and I plan on delivering useful examples here so others can learn from my efforts, failures and successes.

    A bit of background about me… My name is Clayton Groom. I am a founding partner of a consulting firm in St. Louis, Missouri, Covenant Technology Partners, LLC, and focus on SQL Server Data Warehouse design, Analysis Services and Enterprise Reporting solutions.  I have been working with SQL Server since the early nineties, when it still only ran on OS/2. I love solving puzzles and technical challenges.  Enough about me… On to a real problem…

    SSIS Connection Time outs versus Command Time outs
    Last week, I was working on automating the processing for a large Analysis Services cube.  I had reworked an SSIS package and script task originally posted by Vidas Matelis that automates the process of adding new and dropping old partitions to/from an Analysis Services cube.  I had the package working great, tested, and ready for deployment.  It basically performs a query against the source system to determine if there is new data in the warehouse that will require a new partition to be added to the cube, and it checks the cube to see if there are any partitions present that are no longer needed in a rolling 60 month window. My client uses Tivoli for running all their production jobs, and not SQL Agent, so I had to build a command line file for Tivoli to use to run the package. Everything was going great. I had tested the command file from my development workstation using an XML configuration file to pass server-specific parameters into the package when executed using the DTExec utility. With all the pieces ready, I updated the dtsconfig file to point to the UAT environment and started working with the Tivoli developer to test the job.  On the first run, the job failed, and from what I could see in the SSIS log, it had failed because of a timeout. Other errors in the log made me think that perhaps the connection string had not been passed into the package correctly. We bumped the Connection Manager timeout values from 20 seconds to 120 seconds and tried again. The job still failed. After changing the command line to use the /SET option instead of the /CONFIGFILE option, we tested again, and again failure. After a number of further failed attempts, and getting the Teradata DBA involved to monitor and see if we were connecting and failing or just failing to connect, we determined that the job was indeed connecting to the server and then disconnecting itself after 30 seconds.  This seemed odd, as we had the timeout values for the connection manager set to 180 seconds by then.  At this point one of the DBAs found a post on the Teradata forum that had the clues to the puzzle: there is a separate “CommandTimeout” custom property on the Data Source object that may need to be adjusted for longer running queries.  I opened up the SSIS package, opened the data flow task that generated the partition list table and right-clicked on the data source. From the context menu, I selected “Show Advanced Editor” and found the property. Sure enough, it was set to 30 seconds.
    The CommandTimeout property can also be edited in the SSIS Properties sheet. In order to determine how long the timeout needed to be, I ran the query from the task in the development environment and received a response in a matter of seconds.  I then tried the same query against the production database and waited several minutes for a response. This did not seem to be a reasonable response time for the query involved, and indeed it wasn’t. The Teradata DBAs adjusted the query governor settings for the service account I was testing with, and we were able to get the response back down under a minute.  Still, I set the CommandTimeout property to a much higher value in case the job was ever started during a time of high demand on the production server. With this change in place, the job finally completed successfully.  The lesson learned for me was two-fold:
    - Always compare query execution times between development and production environments, and don’t assume that production will always be faster.  With higher user demands, query governors, and a whole lot more data, the execution time of even what might seem to be simple queries can vary greatly.
    - SSIS connection timeout settings do not affect command timeouts.  Connection timeouts control how long the package will wait for a response from the server before assuming the server is not available or is not responding. Command timeouts control how long a task will wait for results to start being returned before deciding that the server is not responding.
    Both lessons seem pretty straightforward, and I felt pretty sheepish once I finally figured out what the issue was.  To be fair, though, in the 5+ years that I have been working with SSIS, I could only recall one other time where I had to set the CommandTimeout property, and that memory only resurfaced while I was penning this post.

    Read the article

  • MVVM Properties with ReSharper

    - by George Evjen
    I read this early this morning, and it is simple since we have all probably put together a code snippet at some point. With the projects that we do at ArchitectNow we write a lot of new custom views and view models, which results in having to write repetitive property code. We changed the context of the code a bit to suit our infrastructure, but the idea is to have these properties created quickly. Thanks to Sparky Dasrath for reminding us how easy this is to do: sdasrath.blogspot.com/2011/02/20110221-resharper-c-snippet-for-mvvm.html

    Read the article

  • Google I/O 2010 - Chrome Extensions - how to

    Chrome 101. Speaker: Brian Kennish. Google Chrome shipped an extensions API in version 4.0. Since last year, new capabilities have been added to the extensions framework, and many people have already written powerful extensions with minimal effort. Find out how to write an extension, and what's coming next in Chrome Extensions. For all I/O 2010 sessions, please go to code.google.com. From: GoogleDevelopers | Views: 4 | 0 ratings | Time: 59:35 | More in Science & Technology

    Read the article

  • Google I/O 2010: Google TV Keynote - Under The Hood

    Due to licensing and permissions issues, we are unable to show the full Google TV demonstration from the Day 2 keynote at Google I/O. Until we are able to get these permissions, please check out these clips. For Google I/O session videos, presentations, developer interviews and more, go to: code.google.com/io. From: GoogleDevelopers | Views: 3 | 0 ratings | Time: 02:02 | More in Science & Technology

    Read the article

  • Calculating collision force with AfterCollision/NormalImpulse is unreliable when IgnoreCCD = false?

    - by Michael
    I'm using Farseer Physics Engine 3.3.1 in a very simple XNA 4 test game. (Note: I'm also tagging this Box2D, because Farseer is a direct port of Box2D and I will happily accept Box2D answers that solve this problem.) In this game, I'm creating two bodies. The first body is created using BodyFactory.CreateCircle and BodyType.Dynamic. This body can be moved around using the keyboard (which sets Body.LinearVelocity). The second body is created using BodyFactory.CreateRectangle and BodyType.Static. This body is static and never moves. Then I'm using this code to calculate the force of collision when the two bodies collide:

        staticBody.FixtureList[0].AfterCollision += new AfterCollisionEventHandler(AfterCollision);

        protected void AfterCollision(Fixture fixtureA, Fixture fixtureB, Contact contact)
        {
            float maxImpulse = 0f;
            for (int i = 0; i < contact.Manifold.PointCount; i++)
                maxImpulse = Math.Max(maxImpulse, contact.Manifold.Points[i].NormalImpulse);
            // maxImpulse should contain the force of the collision
        }

    This code works great if both of these bodies are set to IgnoreCCD=true. I can calculate the force of collision between them 100% reliably. Perfect. But here's the problem: If I set the bodies to IgnoreCCD=false, that code becomes wildly unpredictable. AfterCollision is called reliably, but for some reason the NormalImpulse is 0 about 75% of the time, so only about one in four collisions is registered. Worse still, the NormalImpulse seems to be zero for completely random reasons. The dynamic body can collide with the static body 10 times in a row in virtually exactly the same way, and only 2 or 3 of the hits will register with a NormalImpulse greater than zero. Setting IgnoreCCD=true on both bodies instantly solves the problem, but then I lose continuous physics detection. Why is this happening and how can I fix it? Here's a link to a simple XNA 4 solution that demonstrates this problem in action: http://www.mediafire.com/?a1w242q9sna54j4

    Read the article

  • Library to fake intermittent failures according to tester-defined policy?

    - by crosstalk
    I'm looking for a library that I can use to help mock a program component that works only intermittently - usually, it works fine, but sometimes it fails. For example, suppose I need to read data from a file, and my program has to avoid crashing or hanging when a read fails due to a disk head crash. I'd like to model that by having a mock data reader function that returns mock data 90% of the time, but hangs or returns garbage otherwise. Or, if I'm stress-testing my full program, I could turn on debugging code in my real data reader module to make it return real data 90% of the time and hang otherwise. Now, obviously, in this particular example I could just code up my mock manually to test against a random() routine. However, I was looking for a system that allows implementing any failure policy I want, including:
    - Fail randomly 10% of the time
    - Succeed 10 times, fail 4 times, repeat
    - Fail semi-randomly, such that one failure tends to be followed by a burst of more failures
    - Any policy the tester wants to define
    Furthermore, I'd like to be able to change the failure policy at runtime, using either code internal to the program under test, or external knobs or switches (though the latter can be implemented with the former). In pig-Java, I'd envision a FailureFaker interface like so:

        interface FailureFaker {
            /** Return true if and only if the mocked operation succeeded.
                Implementors should override this method with versions
                consistent with their failure policy. */
            public boolean attempt();
        }

    Each failure policy would then be a class implementing FailureFaker; for example, there would be a PatternFailureFaker that would succeed N times, then fail M times, then repeat, and an AlwaysFailFailureFaker that I'd use temporarily when I need to simulate, say, someone removing the external hard drive my data was on. The policy could then be used (and changed) in my mock object code like so:

        class MyMockComponent {
            FailureFaker faker;

            public void doSomething() {
                if (faker.attempt()) {
                    // ...
                } else {
                    throw new RuntimeException();
                }
            }

            void setFailurePolicy(FailureFaker policy) {
                this.faker = policy;
            }
        }

    Now, this seems like something that would be part of a mocking library, so I wouldn't be surprised if it's been done before. (In fact, I got the idea from Steve Maguire's Writing Solid Code, where he discusses this exact idea on pages 228-231, saying that such facilities were common in Microsoft code of that early-90's era.) However, I'm only familiar with EasyMock and jMockit for Java, and neither, AFAIK, has this function or something similar with different syntax. Hence, the question: Do such libraries as I've described above exist? If they do, where have you found them useful? If you haven't found them useful, why not?
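    For illustration, here is a minimal hand-rolled sketch (hypothetical class names, built on the FailureFaker interface above) of two such policies; what I'm hoping for is a library that already ships with these and lets me swap them at runtime:

        import java.util.Random;

        // Repeated here so the sketch compiles on its own.
        interface FailureFaker {
            boolean attempt();
        }

        /** Succeeds N times, then fails M times, then repeats. */
        class PatternFailureFaker implements FailureFaker {
            private final int succeedCount;
            private final int failCount;
            private int position = 0;

            PatternFailureFaker(int succeedCount, int failCount) {
                this.succeedCount = succeedCount;
                this.failCount = failCount;
            }

            public boolean attempt() {
                boolean success = position < succeedCount;
                position = (position + 1) % (succeedCount + failCount);
                return success;
            }
        }

        /** Fails randomly with a fixed probability, e.g. 0.10 for 10% of calls. */
        class RandomFailureFaker implements FailureFaker {
            private final double failureRate;
            private final Random random = new Random();

            RandomFailureFaker(double failureRate) {
                this.failureRate = failureRate;
            }

            public boolean attempt() {
                return random.nextDouble() >= failureRate;
            }
        }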

    Read the article

  • Google I/O 2010: Google TV Keynote - Push Android Apps From Web To TV

    Due to licensing and permissions issues, we are unable to show the full Google TV demonstration from the Day 2 keynote at Google I/O. Until we are able to get these permissions, please check out these clips. For Google I/O session videos, presentations, developer interviews and more, go to: code.google.com/io. From: GoogleDevelopers | Views: 1 | 0 ratings | Time: 02:09 | More in Science & Technology

    Read the article

  • Why does glGetString return a NULL string

    - by snape
    I am trying my hand at the GLFW library. I have written a basic program to get the OpenGL renderer and vendor string. Here is the code:

        #include <GL/glew.h>
        #include <GL/glfw.h>
        #include <cstdio>
        #include <cstdlib>
        #include <string>

        using namespace std;

        void shutDown(int returnCode)
        {
            printf("There was an error in running the code with error %d\n", returnCode);
            GLenum res = glGetError();
            const GLubyte *errString = gluErrorString(res);
            printf("Error is %s\n", errString);
            glfwTerminate();
            exit(returnCode);
        }

        int main()
        {
            // start GL context and O/S window using GLFW helper library
            if (glfwInit() != GL_TRUE)
                shutDown(1);
            if (glfwOpenWindow(0, 0, 0, 0, 0, 0, 0, 0, GLFW_WINDOW) != GL_TRUE)
                shutDown(2);

            // start GLEW extension handler
            glewInit();

            // get version info
            const GLubyte* renderer = glGetString(GL_RENDERER); // get renderer string
            const GLubyte* version = glGetString(GL_VERSION);   // version as a string
            printf("Renderer: %s\n", renderer);
            printf("OpenGL version supported %s\n", version);

            // close GL context and any other GLFW resources
            glfwTerminate();
            return 0;
        }

    I googled this error and found out that we have to initialize the OpenGL context before calling glGetString(). Although I have initialized the OpenGL context using glfwInit(), the function still returns a NULL string. Any ideas?

    Edit: I have updated the code with error checking mechanisms. Running this code outputs the following:

        There was an error in running the code with error 2
        Error is no error

    Read the article

  • Google Python Class Day 1 Part 2

    Google Python Class Day 1 Part 2: Lists, Sorting, and Tuples. By Nick Parlante. Support materials and exercises: code.google.com. From: GoogleDevelopers | Views: 13 | 0 ratings | Time: 35:12 | More in Science & Technology

    Read the article

  • Use Oracle Product Hub Business Events to Integrate Additional Logic into Your Business Flows

    - by ToddAC-Oracle
    Business events provide a mechanism to plug in and integrate additional business processes or custom code into standard business flows.  You could send a notification to a business user, write to advanced queues, or perform some custom processing. In-built business events are available specifically for each flow, such as Item Creation, Item Update, User-Defined Attribute Changes, Change Order Creation, Change Order Status Changes and others. To get a list of business events, refer to the PIM Implementation Guide or Using Business Events in PLM and PIM Data Librarian (Doc ID 372814.1). If you are planning to use business events, Doc ID 1074754.1 walks you through setup with examples: How to Subscribe and Use Product Hub (PIM / APC) Business Events [Video]? (Doc ID 1074754.1). Review the 'Presentation' section of Doc ID 1074754.1 for complete information and best practices to follow while implementing code for subscriptions. Learn things you might want to avoid, like commit statements, for instance. Doc ID 1074754.1 also provides sample code for testing, and it can be used to troubleshoot missing setups or frequently experienced issues. Take advantage and run a test ahead of time with the sample code to isolate any issues from within business-specific subscription code. Get more out of Oracle Product Hub by using business events!

    Read the article

  • Google I/O 2010 - Google Analytics APIs: End to end

    Google APIs 201. Speaker: Nick Mihailovski. Google Analytics measures performance of your website. Learn advanced techniques on how to use our tracking, processing and data export APIs as we walk you through an example of creating a most visited pages web element for your website. For all I/O 2010 sessions, please go to code.google.com. From: GoogleDevelopers | Views: 6 | 0 ratings | Time: 55:42 | More in Science & Technology

    Read the article

  • Unit Testing with NUnit and Moles Redux

    - by João Angelo
    Almost two years ago, when Moles was still being packaged alongside Pex, I wrote a post on how to run NUnit tests supporting moled types. A lot has changed since then: Moles is now being distributed independently of Pex, while maintaining support for integration with NUnit and other testing frameworks. For NUnit, the support is provided by an addin class library (Microsoft.Moles.NUnit.dll) that you need to reference in your test project so that you can decorate your tests with the MoledAttribute. The addin DLL must also be placed in the addins folder inside the NUnit installation directory. There is, however, a downside: since Moles and NUnit follow different release cycles and the addin DLL must be built against a specific NUnit version, you may find that the release included with the latest version of Moles does not work with your version of NUnit. Fortunately, the code for building the NUnit addin is supplied in the archive (moles.samples.zip) that you can find in the Documentation folder inside the Moles installation directory. By rebuilding the addin against your specific version of NUnit, you are able to support any version. Also note that in Moles 0.94.51023.0 the addin code did not support the use of TestCaseAttribute in your moled tests. However, if you need this support, you need to make just a couple of changes. Change the ITestDecorator.Decorate method in the MolesAddin class:

        Test ITestDecorator.Decorate(Test test, MemberInfo member)
        {
            SafeDebug.AssumeNotNull(test, "test");
            SafeDebug.AssumeNotNull(member, "member");

            bool isTestFixture = true;
            isTestFixture &= test.IsSuite;
            isTestFixture &= test.FixtureType != null;

            bool hasMoledAttribute = true;
            hasMoledAttribute &= !SafeArray.IsNullOrEmpty(
                member.GetCustomAttributes(typeof(MoledAttribute), false));

            if (!isTestFixture && hasMoledAttribute)
            {
                return new MoledTest(test);
            }

            return test;
        }

    Change the Tests property in the MoledTest class:

        public override System.Collections.IList Tests
        {
            get
            {
                if (this.test.Tests == null)
                {
                    return null;
                }

                var moled = new List<Test>(this.test.Tests.Count);
                foreach (var test in this.test.Tests)
                {
                    moled.Add(new MoledTest((Test)test));
                }

                return moled;
            }
        }

    Disclaimer: I only tested this implementation against the NUnit 2.5.10.11092 version. Finally, you just need to run the NUnit console runner through the Moles runner. A quick example follows:

        moles.runner.exe [Tests.dll] /r:nunit-console.exe /x86 /args:[NUnitArgument1] /args:[NUnitArgument2]

    Read the article

  • Are there web frameworks/tools that optimize for speed of development?

    - by Ahmet Yildirim
    I've been a PHP web developer for about 2 and a half years now. A while ago I started using the CodeIgniter framework to shorten the development process, and I have developed 4 websites using CodeIgniter. It has been really tiring and boring due to code repetition. Code repetition was worst in the form-handling functions in controllers, so in my last project I developed a general form input handling function. This led to the realisation that development could get even faster with more automation. What I think I lack in my development is using CRUD and code generation tools. But I am wondering if there are any other utilities that shorten the development process. Which web development language or framework is more inclined towards code generation utilities?

    Read the article

  • How to save a GtkTextBuffer's content to a file

    - by user1565593
    I tried to save a GtkTextBuffer's content to a file. My code seems to work, but some characters in the output file are unreadable. My code:

        def on_save_clicked(self, widget, data=None):
            start = self.textbuffer.get_start_iter()
            end = self.textbuffer.get_end_iter()
            this = self.textbuffer.get_text(start, end, False)
            format = self.textbuffer.register_serialize_tagset(this)
            data = self.textbuffer.serialize(self.textbuffer, format, start, end)
            outfile = open("/home/christophe/toto.txt", "w")
            outfile.write(data)
            outfile.close()

    What is wrong in my code? Thanks for your help.

    Read the article

  • Displaying Exceptions Thrown or Caught in Managed Beans

    - by Frank Nimphius
    I just came across a sample written by Steve Muench which, somewhere deep in its implementation details, uses the following code to route exceptions to the ADF binding layer to be handled by the ADF model error handler (which can be customized by overriding the DCErrorHandlerImpl class and configuring the custom class in the DataBindings.cpx file). To route an exception to the ADFm error handler, Steve used the following code:

        ((DCBindingContainer)BindingContext.getCurrent().getCurrentBindingsEntry()).reportException(ex);

    The same code, however, can be used in managed beans as well to enforce consistent error handling in ADF. As an example, let's assume a managed bean method hits an exception. To simulate this, let's use the following code:

        public void onToolBarButtonAction(ActionEvent actionEvent) {
            throw new JboException("Just to tease you !!!!!");
        }

    The exception shows at runtime as displayed in the following image. Assuming a try-catch block is used to intercept the exception caused by a managed bean action, you can route the error message display to the ADF model error handler. Again, let's simulate the code that would need to go into a try-catch block:

        public void onToolBarButtonAction(ActionEvent actionEvent) {
            JboException ex = new JboException("Just to tease you !!!!!");
            BindingContext bctx = BindingContext.getCurrent();
            ((DCBindingContainer)bctx.getCurrentBindingsEntry()).reportException(ex);
        }

    The error now displays as shown in the image below. As you can see, the error is now handled by the ADFm error handler, which - as mentioned before - could be a custom error handler. Using the ADF model error handling for displaying exceptions thrown in managed beans requires the current ADF Faces page to have an associated PageDef file (which is the case if the page or view contains ADF bound components). Note that to invoke methods exposed on the business service, it is recommended to always work through the binding layer (method binding) so that in case of an error the ADF model error handler is automatically used.
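    To make the try-catch variant described above explicit, here is a minimal sketch (the called helper method is hypothetical; the routing calls are the same ones shown earlier) of how a caught exception could be forwarded to the ADFm error handler from a managed bean action:

        public void onToolBarButtonAction(ActionEvent actionEvent) {
            try {
                // hypothetical call that may fail, e.g. work delegated to the business service
                someOperationThatMayThrow();
            } catch (JboException ex) {
                // route the caught exception to the ADF model error handler, as above
                BindingContext bctx = BindingContext.getCurrent();
                ((DCBindingContainer) bctx.getCurrentBindingsEntry()).reportException(ex);
            }
        }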

    Read the article

  • Bug in Delphi XE RegularExpressions Unit

    - by Jan Goyvaerts
    Using the new RegularExpressions unit in Delphi XE, you can iterate over all the matches that a regex finds in a string like this:

        procedure TForm1.Button1Click(Sender: TObject);
        var
          RegEx: TRegEx;
          Match: TMatch;
        begin
          RegEx := TRegex.Create('\w+');
          Match := RegEx.Match('One two three four');
          while Match.Success do begin
            Memo1.Lines.Add(Match.Value);
            Match := Match.NextMatch;
          end
        end;

    Or you could save yourself two lines of code by using the static TRegEx.Match call:

        procedure TForm1.Button2Click(Sender: TObject);
        var
          Match: TMatch;
        begin
          Match := TRegEx.Match('One two three four', '\w+');
          while Match.Success do begin
            Memo1.Lines.Add(Match.Value);
            Match := Match.NextMatch;
          end
        end;

    Unfortunately, due to a bug in the RegularExpressions unit, the static call doesn’t work. Depending on your exact code, you may get fewer matches or blank matches than you should, or your application may crash with an access violation. The RegularExpressions unit defines TRegEx and TMatch as records. That way you don’t have to explicitly create and destroy them. Internally, TRegEx uses TPerlRegEx to do the heavy lifting. TPerlRegEx is a class that needs to be created and destroyed like any other class. If you look at the TRegEx source code, you’ll notice that it uses an interface to destroy the TPerlRegEx instance when TRegEx goes out of scope. Interfaces are reference counted in Delphi, making them usable for automatic memory management. The bug is that TMatch and TGroupCollection also need the TPerlRegEx instance to do their work. TRegEx passes its TPerlRegEx instance to TMatch and TGroupCollection, but it does not pass the instance of the interface that is responsible for destroying TPerlRegEx. This is not a problem in our first code sample. TRegEx stays in scope until we’re done with TMatch. The interface is destroyed when Button1Click exits. In the second code sample, the static TRegEx.Match call creates a local variable of type TRegEx. This local variable goes out of scope when TRegEx.Match returns. Thus the reference count on the interface reaches zero and TPerlRegEx is destroyed when TRegEx.Match returns. When we call NextMatch, the TMatch record tries to use a TPerlRegEx instance that has already been destroyed. To fix this bug, delete or rename the two RegularExpressions.dcu files and copy RegularExpressions.pas into your source code folder. Make these changes to both the TMatch and TGroupCollection records in this unit:
    - Declare FNotifier: IInterface; in the private section.
    - Add the parameter ANotifier: IInterface; to the Create constructor.
    - Assign FNotifier := ANotifier; in the constructor’s implementation.
    You also need to add the ANotifier: IInterface; parameter to the TMatchCollection.Create constructor. Now try to compile some code that uses the RegularExpressions unit. The compiler will flag all calls to TMatch.Create, TGroupCollection.Create and TMatchCollection.Create. Fix them by adding the ANotifier or FNotifier parameter, depending on whether ARegEx or FRegEx is being passed. With these fixes, the TPerlRegEx instance won’t be destroyed until the last TRegEx, TMatch, or TGroupCollection that uses it goes out of scope or is used with a different regular expression.

    Read the article

  • Google I/O 2010 - WebM Open Video Playback in HTML5

    Chrome 101. Speakers: Kevin Carle, Jim Bankoski, David Mendels (Brightcove), Bob Mason (Brightcove). The new open VP8 codec and WebM file format present exciting opportunities for innovation in HTML5 video. In this session, you'll see WebM playback in action while YouTube and Brightcove engineers show you how to support the format in your own HTML5 site. For all I/O 2010 sessions, please go to code.google.com. From: GoogleDevelopers | Views: 4 | 0 ratings | Time: 40:02 | More in Science & Technology

    Read the article

  • Org-mode lags in highlighting source

    - by quanticle
    I'm using org-mode to maintain my programming notes. This means I have lots of source code blocks, as follows:

        #+begin_src <language name>
        <code>
        #+end_src

    One thing I've noticed is that when I write the #+end_src, Emacs doesn't color the source code as such. Yet, if I quit Emacs and reopen the notes file (or force a refresh with the Org -> Refresh/Reload -> Refresh setup current buffer menu entry), the source is colored grey if I'm using the GUI, or green if I'm using Emacs in the terminal. Is this an inherent limitation of Emacs, or am I doing something wrong in setting up my code blocks that's preventing Emacs from going back and recoloring the source code that I've entered?

    Read the article
