Search Results

Search found 4893 results on 196 pages for 'expect'.


  • SSDT gotcha – Moving a file erases code analysis suppressions

    - by jamiet
    I discovered a little wrinkle in SSDT today that is worth knowing about if you are managing your database schemas using SSDT. In short, if a file is moved to a different folder in the project then any code analysis suppressions that reference that file will disappear from the suppression file. This makes sense if you think about it because the paths stored in the suppression file are no longer valid, but you probably won’t be aware of it until it happens to you. If you don’t know what code analysis is or you don’t know what the suppression file is then you can probably stop reading now, otherwise read on for a simple short demo. Let’s create a new project and add a stored procedure to it called sp_dummy. Naming stored procedures with an sp_ prefix is generally frowned upon and hence SSDT static code analysis will look for occurrences of this and flag them. So, the next thing we need to do is turn on static code analysis in the project properties: A subsequent build causes a code analysis warning as we might expect: Let’s suppose we actually don’t mind stored procedures with sp_ prefixes; we can just right-click on the message to suppress it and get rid of it: That causes a suppression file to get created in our project: Notice that the suppression file contains a relative path to the file that has had the suppression placed upon it. Now if we simply move the file within our project to a new folder, notice that the suppression that we just created gets removed from the suppression file: As I alluded to above, this behaviour is intuitive because the path originally stored in the suppression file is no longer relevant, but you’re probably not going to be aware of it until it happens to you and messages that you thought you had suppressed start appearing again. Definitely one to be aware of. @Jamiet
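
    For reference, a minimal repro of the offending object would look something like the following (a sketch only; the procedure body is a made-up placeholder, since it is the sp_ prefix on the name that triggers the code analysis warning):

        -- The sp_ prefix on the name is what SSDT static code analysis flags;
        -- the body is irrelevant and is just a placeholder here.
        CREATE PROCEDURE [dbo].[sp_dummy]
        AS
        BEGIN
            SELECT 1 AS placeholder;
        END

    Renaming the procedure (for example to [dbo].[usp_dummy]) would make the warning go away without any suppression at all, but the point of the demo is to suppress it and then watch what happens to the suppression file when the file moves.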

    Read the article

  • Free Team Foundation Service a Boon for Play Projects

    - by Ken Cox [MVP]
    My ‘jump in and sink or swim’ learning style leads me to create dozens of unbillable ‘play’ projects in Visual Studio 2012.  For example, I’m currently learning to customize the powerful auction/fixed price/classifieds software called AuctionWorx Enterprise.  I make stupid mistakes as I grasp a new API and configure projects. It’s frustrating to go down a rat hole and discover the VS IDE’s Undo doesn’t reach back to a working build from the previous weekend. Enter Visual Studio’s Free Team Foundation Service.  Put your play projects into the free source control and check in (or shelve) a snapshot before embarking on something risky. (I already use Discount ASP.NET’s Team Foundation Server hosting for client projects so there’s zero TFS/VS learning curve for me and first class GUI support.) TFS is free right now for teams of five or fewer. I’m not naive enough to expect ‘free’ to last forever. So, it’ll be interesting to see how much Microsoft intends to charge in 2013 for TFS. In the meantime, why not grab some free source code storage? BTW, it’s weird to realize that Microsoft is backing up my junky little playtime code on three physically-distinct servers every day - and taking incremental backups every hour.

    Read the article

  • SSIS Dashboard v0.4

    - by Davide Mauri
    Following the post on the SSISDB script on Gist, I’ve been working on an HTML5 SSIS Dashboard, in order to have a nice looking, user friendly and, most of all, useful SSIS Dashboard. Since this is a “spare-time” project, I’ve decided to develop it using Python since it’s THE data language (R aside); it’s beautiful and powerful, well established, well documented, and has a rich ecosystem around it. Plus it has full support in Visual Studio through the amazing Python Tools for Visual Studio plugin. I also decided to use Flask, a very good micro-framework for creating websites, and the SB Admin 2.0 Bootstrap admin template, since I’m anything but a Web Designer. The result is here: https://github.com/yorek/ssis-dashboard and I can say I’m pretty satisfied with the work done so far (I’ve worked on it for probably less than 24 hours). Though there are some features I’d like to add in the future (historical execution times, some charts, a connection with AzureML to do predictions on expected execution times), it’s already usable. Of course I’ve tested it only on my development machine, so check twice before putting it in production but, given the fact that, virtually, there is no installation needed (you only need to install Python), and that all queries are based on standard SSISDB objects, I expect no big problems. If you want to test, contribute and/or give feedback please feel free to do it…I would really love to see this little project become a community project! Enjoy!
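
    To make the architecture concrete, here is a stripped-down sketch of the kind of Flask endpoint such a dashboard could expose. This is not code from the actual project: the connection string, ODBC driver, and the particular columns rendered are assumptions; only catalog.executions itself is a standard SSISDB view.

        # Hypothetical sketch of a dashboard endpoint; not taken from the actual project.
        # Connection details and the columns returned are assumptions.
        import pyodbc
        from flask import Flask, jsonify

        app = Flask(__name__)
        CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
                    "SERVER=localhost;DATABASE=SSISDB;Trusted_Connection=yes")

        @app.route("/api/executions")
        def latest_executions():
            # catalog.executions is the standard SSISDB view with one row per package execution
            query = """
                SELECT TOP 20 execution_id, folder_name, project_name,
                       package_name, status, start_time
                FROM catalog.executions
                ORDER BY execution_id DESC
            """
            with pyodbc.connect(CONN_STR) as conn:
                rows = conn.cursor().execute(query).fetchall()
            return jsonify([
                {"execution_id": r.execution_id, "package": r.package_name,
                 "status": r.status, "start_time": str(r.start_time)}
                for r in rows
            ])

        if __name__ == "__main__":
            app.run(debug=True)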

    Read the article

  • Privacy Protection in Oracle IRM 11g

    - by martin.abrahams
    Another innovation in Oracle IRM 11g is an in-built privacy policy challenge. By design, one of the many things that Oracle IRM does, of course, is collect audit information about how and where sealed documents are being used - user names, machine identifiers and so on. Many customers consider that this has privacy implications that the user should be invited to accept as a condition of service use - for the protection of both the user and the service from avoidable controversy. So, in 11g IRM, when a new user connects to a server for the first time, they can expect to see the following privacy policy dialog. The dialog provides a configurable URL that the customer can use to publish the privacy policy for their IRM service. The policy might clarify what data is being collected and stored, what use that data might be put to, and so on as required by the service owner's legal advisers. In previous releases, you could construct an equivalent capability, and some customers did, but this innovation makes it much easier to do - you simply write a privacy policy and publish it as a web page for which the dialog automatically provides a link. This is another example of how Oracle IRM anticipates not just the security requirements of a customer, but also the broader requirements of service provisioning.

    Read the article

  • Coming Soon: Development and Extensibility Handbook from Oracle Press

    - by Oliver Steinmeier
    I had hoped to get my hands on a copy at OpenWorld, but it wasn't available yet from the printers.  But it's coming soon: The Oracle Fusion Applications Development and Extensibility Handbook. This book is promising to be a great resource for anyone interested in learning about our favorite topic.  And while I haven't read it yet, a look at the cover page image tells me that it's going to be a high-quality book.  That's because I have known one of the authors, Dhaval Mehta, for many years.  He recently left Oracle development for new challenges, but until then he was widely known as one of the most knowledgeable Fusion Applications engineers.  And his co-authors have equally strong and complementary backgrounds. Here's what the book covers: exploring Oracle Fusion Applications components and architecture; planning, developing, debugging, and deploying customizations; extending out-of-the-box functionality with Oracle JDeveloper; modifying web applications using Oracle Composer; incorporating Oracle SOA Suite 11g composites; validating code through sandboxes and test environments; securing data using authorization, authentication, and encryption; designing and distributing personalized BI reports; automating jobs with Oracle Enterprise Scheduler; and changing the appearance and branding of your applications with the Oracle ADF Skin Editor. Expect a more detailed review of the book when it hits your local bookseller's shelves (or Amazon).

    Read the article

  • Are you a GPGPU developer? Participate in our UX study

    - by Daniel Moth
    You know that I work on the parallel debugger in Visual Studio and I've talked about GPGPU before and I have also mentioned UX. Below is a request from my UX colleagues that pulls all of it together. If you write and debug parallel code that uses GPUs for non-graphical, computationally intensive operations, keep reading. The Microsoft Visual Studio Parallel Computing team is seeking developers for a 90-minute research study. The study will take place via LiveMeeting or at a usability lab in Redmond, depending on your preference. We will walk you through an example of debugging GPGPU code in Visual Studio with you giving us step-by-step feedback. ("Is this what you would expect?", "Are we showing you the things that would help you?", "How would you improve this?") The walkthrough utilizes a “paper” version of our current design. After the walkthrough, we would then show you some additional design ideas and seek your input on various design tradeoffs. Are you interested, or do you know someone who might be a good fit? Let us know at this address: [email protected]. Those who participate (and those who referred them) will receive a gratuity item from a list of current Microsoft products. Comments about this post welcome at the original blog.

    Read the article

  • [MINI HOW-TO] Repair Missing External Hard Drive Database Error in WHS

    - by Mysticgeek
    If you’re using external hard drives with your Windows Home Server, they might get unplugged and create an error. Here we look at running the Repair Wizard to quickly fix the issue. If an external drive that is included in your drive pool becomes unplugged or loses power, you might see the following error under Home Network Health when opening the WHS Console. To fix the issue verify the drive has power and is plugged in correctly and click Repair. The wizard launches and you’ll need to agree that you may lose data during the repair of the backup database. In this example it was a simple problem where an external drive became unplugged from the server…so you can close out of the wizard. If you look under Server Storage you can see the drive is missing…To fix the issue verify the drive has power and is plugged in correctly. WHS will add the drive back into the pool and when finished you’ll see it listed as healthy and good to go. Using external drives that are part of your storage pool may not be the best way to have your home built WHS setup, but if you do, expect occasional errors such as this.

    Read the article

  • Memory concerns while plotting escape from DLL Hell in Delphi

    - by Peter Turner
    I work on a program with about 50 DLLs that are loaded from one executable; it's an old organically grown program where the only rationale for creating a new DLL is that one previously didn't exist to fill a given need. (And namespaces didn't exist in Delphi, so it never crossed our mind to make dll1.main.pas, dll2.main.pas or something even more unique.) What we want to do is consolidate all these DLLs into one executable; since none of them are used outside the program, there shouldn't be much of a problem. The concern my boss has is that if we did this, the memory overhead for terminal server clients would go through the roof. So, I've stepped through enough initialization code to know that lots of stuff is done every time a DLL is loaded into memory, but say I've got a project with about 4000 files, and 50 DLLs, 10 of which are probably utilized by any one user in any one session of the program. The 50 DLLs are about 2/3rds form files, if not more, but beyond that there's not a lot of other resources being loaded (only a few embedded pictures, icons, cursors, etc.). If I loaded all these files into memory, how much memory is used per unit? How much is used per class? How do I keep the overhead down? And what is the biggest project one can reasonably expect to build with Delphi? This tidbit won't help with answering, but I think it might clarify what my boss is worried about: we currently start our program at about 18 megs, normal working conditions are usually less than 40 megs, and he thinks it could climb as high as 120 megs.

    Read the article

  • LASTDATE dates arguments and upcoming events #dax #tabular #powerpivot

    - by Marco Russo (SQLBI)
    Recently I had to write a DAX formula containing a LASTDATE within the logical condition of a FILTER: I found that its behavior was not the one I expected, so I investigated further. In the end, I wrote up my findings in this article on SQLBI, which can be applied to any Time Intelligence function with a <dates> argument. The key point is that when you write LASTDATE( table[column] ) in reality you obtain something like LASTDATE( CALCULATETABLE( VALUES( table[column] ) ) ) which converts an existing row context into a filter context. Thus, if you have something like FILTER( table, table[column] = LASTDATE( table[column] ) ) the FILTER will return all the rows of table, whereas you probably want to use FILTER( table, table[column] = LASTDATE( VALUES( table[column] ) ) ) so that the existing filter context before executing FILTER is used to get the result from VALUES( table[column] ), avoiding the automatic expansion that would include a CALCULATETABLE that would hide the existing filter context. If after reading the article you want to get more insights, read Jeffrey Wang's post here. These days I'm speaking at SQLRally Nordic 2012 in Copenhagen and I will be in Cologne (Germany) next week for an SSAS Tabular Workshop, whereas Alberto will teach the same workshop in Amsterdam one week later. Both workshops still have seats available and the Amsterdam one is still in early bird discount until October 3rd! Then, in November I expect to meet many blog readers at PASS Summit 2012 in Seattle and I hope to find the time to write other articles on interesting things in Tabular and PowerPivot. Stay tuned!

    Read the article

  • How to manage a developer who has poor communication skills

    - by djcredo
    I manage a small team of developers on an application which is in the mid-point of its lifecycle, within a big firm. This unfortunately means there is commonly a 30/70 split of programming tasks to "other technical work". This work includes: working with the DBA / Unix / Network / Load balancer teams on various tasks; placing and managing orders for hardware or infrastructure in different regions; running tests that have not yet been migrated to CI; analysis; and support / investigation. It's fair to say that the developers would all prefer to be coding, rather than doing these more mundane tasks, so I try to hand out the fun programming jobs evenly amongst the team. Most of the team was hired because, though they may not have the elite programming skills to write their own compiler / game engine / high-frequency trading system etc., they are good communicators who "can get stuff done", work with other teams, and somewhat navigate the complex bureaucracy here. They are good developers, but they are also good all-round technical staff. However, one member of the team probably has above-average coding skills, but below-average communication skills. Traditionally, the previous Development Manager tended to give him the programming tasks and not the more mundane tasks listed above. However, I don't feel that this is fair to the rest of the team, who have shown an aptitude for developing a well-rounded skillset that is commonly required in a big-business IT department. What should I do in this situation? If I continue to give him more programming work, I know that it will be done faster (and conversely, I would expect him to complete the other work more slowly). But it goes against my principles, and promotes the idea that you can carve out a "comfortable niche" for yourself simply by being bad at the tasks you don't like.

    Read the article

  • Why do programmers write applications and then make them free?

    - by Ken
    As an entrepreneur/programmer who makes a good living from writing and selling software, I'm dumbfounded as to why developers write applications and then put them up on the Internet for free. You've found yourself in one of the most lucrative fields in the world. A business with a 99% profit margin, where you have no physical product but can name your price; a business where you can ship a buggy product and the customer will still buy it. Occasionally some of our software will get a free competitor, and I think, this guy is crazy. He could be making a good living off of this but instead chose to make it free. Do you not like giant piles of money? Are you not confident that people would pay for it? Are you afraid of having to support it? It's bad for the business of programming because now customers expect to be able to find a free solution to every problem. (I see tweets like "Is there any good FREE software for XYZ, or do I need to pay $20 for that?") It's also bad for customers because the free solutions eventually break (because of a new OS or what have you) and since it's free, the developer has no reason to fix it. Customers end up with free but stale software that no longer works and never gets updated. The customer cries. The developer, still working a day job, cries in their cubicle. What gives? PS: I'm not looking to start an open-source / software-should-be-free kind of debate. I'm talking about when developers make a closed-source application and make it free.

    Read the article

  • How do I work around sudo 'segmentation fault' on basic bash commands?

    - by sage
    I am sure the answers are out there, but alas there are too many answers (here and elsewhere) to other questions stopping me from finding them. I just encountered something substantially similar to what is described at the closed SO question, sudo : “segmentation fault” Ubuntu maverick [closed]. My team is using Ubuntu 11.04 on VMWare Workstation 8.0.4. We are doing development using C++, Xenomai, Qt, and Qt Creator. When we simulate our application on the virtual machine, we currently need to launch Qt Creator with sudo. My colleague mentioned today that he has been having issues where his workstation locks up and he needs to restart, and that occasionally he has the issue that all sudo bash commands return "segmentation fault". I just ran our application in simulation mode. I was running Qt Creator under sudo and Qt Creator received the signal abort (if I recall). Afterward, every command executed with sudo, from sudo qtcreator to sudo ls, resulted in the message Segmentation fault. I clicked on the power widget to see if I could log out, but the system shut down straightaway without prompting. My understanding is that we run sudo because of a permissions issue with Xenomai and the VM as currently configured, but my colleague has a workaround for this. I expect that not running Qt Creator under sudo -- something that has always made me nervous -- will help contain this issue, but I find it troubling that this could happen and manifest as it does. Does anyone know what is happening? Any recommendations on how to work around this issue? This is happening often, so I am trying to lobby for VM changes to be able to run the process without sudo.

    Read the article

  • cannot get upstart to run user job

    - by dre
    I am trying to get upstart to start a user job during the boot of my machine. I have my conky.conf upstart config file in my $HOME/.init directory. When I run "start conky" I get this error:

        dre@dre-laptop:~$ start conky
        start: Rejected send message, 1 matched rules; type="method_call", sender=":1.76" (uid=1000 pid=2843 comm="start conky ") interface="com.ubuntu.Upstart0_6.Job" member="Start" error name="(unset)" requested_reply="0" destination="com.ubuntu.Upstart" (uid=0 pid=1 comm="/sbin/init")
        dre@dre-laptop:~$

    I (think I) know that this error has to do with the D-Bus system (authentication). I also read (http://upstart.ubuntu.com/cookbook/#id96) that Ubuntu 12.10 already has the right configuration in the D-Bus config file "/etc/dbus-1/system.d/Upstart.conf" to allow normal users to use upstart.

        dre@dre-laptop:~/.init$ cat conky.conf
        description "conky, a system monitor applet"

        start on lightdm
        stop on shutdown

        # Automatically restart process if crashed
        respawn

        # Essentially lets upstart know the process will detach itself to the background
        #expect fork

        # Start conky
        exec /usr/bin/conky

    So, who knows what I am doing wrong? Greetings, Andre. No one??? Please... give me your best shot.

    Read the article

  • Rhino Mocks, AssertWasCalled with Arg Constraint on array parameter

    - by Etienne Giust
    Today, I had a hard time unit testing a function to make sure a method with some array parameters was called. The method to be called:

        void AddUsersToRoles(string[] usernames, string[] roleNames);

    I had previously used Arg<T>.Matches on complex types in other unit tests, but for some reason I was unable to find out how to apply the same logic with an array of strings. It is actually quite simple to do: T really is a string[], so we use Arg<string[]>. As for the matching part, a ToList() allows us to leverage the lambda expression.

        sut.PermissionServices.AssertWasCalled(
            l => l.AddUsersToRoles(
                Arg<string[]>.Matches(a => a.ToList().First() == UserId.ToString()),
                Arg<string[]>.Matches(a => a.ToList().First() == expectedRole1 && a.ToList()[1] == expectedRole2)
            )
        );

    Of course, if we expect an array with 2 or more values, the match would be something like: a => a.ToList()[0] == value1 && a.ToList()[1] == value2 … etc.

    Read the article

  • Questions for Oracle GlassFish and Middleware Executives

    - by arungupta
    The GlassFish Community Event is planned, as part of JavaOne, on Sep 30, 2012. If you are involved in the GlassFish community, this is a perfect opportunity to engage with the Oracle GlassFish Team. Agenda:

        11:00 - 11:05: Introduction
        11:05 - 11:30: Roadmap and Community Updates
        11:30 - 12:15: Q&A with Executive Speaker Panel from Oracle and the GlassFish Team
        12:15 - 01:00: Customer Testimonials

    Location: Moscone West, Room 2005. One of the highlights of the event is a speaker panel with executives from Oracle GlassFish and Middleware. This will be your chance to ask tough questions and expect an honest and frank answer from them. If you are attending JavaOne, then you can register for the Community Event and ask the questions in person. However, if you are not attending the conference then we would still like to give you an option to ask your questions. Are you excited, nervous, curious, confused, thrilled about the future of Java EE, GlassFish, and in general about middleware at Oracle? This is your chance to leave a comment on this blog with your question. We'll pick some of the questions and ask them for you. And then post a response after the conference. Have you registered for JavaOne?

    Read the article

  • Best approach to accessing multiple data sources in a web application

    - by ced
    I have a base web application developed with .NET technologies (ASP.NET) used on our LAN by 30 users simultaneously. From this web application I've developed two verticalizations used by online users. In the future I expect hundreds of users simultaneously. Our company has different locations, and each site uses its own database. The web application needs to retrieve information from all existing databases. Currently there are 3 databases, but the expansion to new offices in the future is not excluded. My question then is: what is the best strategy for a web application to retrieve information from different databases (which have the same schema), where the main objectives are data access performance and high fault tolerance? Are there case studies in the literature that I can take as an example? Do you know some good documents to study? Do you have any tips to implement this task efficiently? Intuitively I would say that two possible strategies are: perform queries against the different sources in real time and aggregate the data on the fly; or create a repository that contains the union of the entities of interest and perform queries directly against the repository.

    Read the article

  • Why unhandled exceptions are useful

    - by Simon Cooper
    It’s the bane of most programmers’ lives – an unhandled exception causes your application or webapp to crash, an ugly dialog gets displayed to the user, and they come complaining to you. Then, somehow, you need to figure out what went wrong. Hopefully, you’ve got a log file, or some other way of reporting unhandled exceptions (obligatory employer plug: SmartAssembly reports an application’s unhandled exceptions straight to you, along with the entire state of the stack and variables at that point). If not, you have to try and replicate it yourself, or do some psychic debugging to try and figure out what’s wrong. However, it’s good that the program crashed. Or, more precisely, it is correct behaviour. An unhandled exception in your application means that, somewhere in your code, there is an assumption that you made that is actually invalid. Coding assumptions: Let me explain a bit more. Every method, every line of code you write, depends on implicit assumptions that you have made. Take the following simple method, that copies a collection to an array and includes an item if it isn’t in the collection already, using a supplied IEqualityComparer:

        public static T[] ToArrayWithItem<T>(
            ICollection<T> coll, T obj, IEqualityComparer<T> comparer)
        {
            // check if the object is in collection already
            // using the supplied comparer
            foreach (var item in coll)
            {
                if (comparer.Equals(item, obj))
                {
                    // it's in the collection already
                    // simply copy the collection to an array
                    // and return it
                    T[] array = new T[coll.Count];
                    coll.CopyTo(array, 0);
                    return array;
                }
            }
            // not in the collection
            // copy coll to an array, and add obj to it
            // then return it
            T[] array = new T[coll.Count + 1];
            coll.CopyTo(array, 0);
            array[array.Length - 1] = obj;
            return array;
        }

    What are all the assumptions made by this fairly simple bit of code? coll is never null; comparer is never null; coll.CopyTo(array, 0) will copy all the items in the collection into the array, in the order defined for the collection, starting at the first item in the array; the enumerator for coll returns all the items in the collection, in the order defined for the collection; comparer.Equals returns true if the items are equal (for whatever definition of ‘equal’ the comparer uses), false otherwise; comparer.Equals, coll.CopyTo, and the coll enumerator will never throw an exception or hang for any possible input and any possible values of T; coll will have less than 4 billion items in it (this is a built-in limit of the CLR); array won’t be more than 2GB, both on 32 and 64-bit systems, for any possible values of T (again, a limit of the CLR); there are no threads that will modify coll while this method is running; and, more esoterically: the C# compiler will compile this code to IL according to the C# specification; the CLR and JIT compiler will produce machine code to execute the IL on the user’s computer; and the computer will execute the machine code correctly. That’s a lot of assumptions. Now, it could be that all these assumptions are valid for the situations in which this method is called. But if this does crash out with an exception, or crash later on, then that shows one of the assumptions has been invalidated somehow. An unhandled exception shows that your code is running in a situation which you did not anticipate, and there is something about how your code runs that you do not understand. Debugging the problem is the process of learning more about the new situation and how your code interacts with it. When you understand the problem, the solution is (usually) obvious.
    The solution may be a one-line fix, the rewrite of a method or class, or a large-scale refactoring of the codebase, but whatever it is, the fix for the crash will incorporate the new information you’ve gained about your own code, along with the modified assumptions. When code is running with an assumption or invariant it depended on broken, then the result is ‘undefined behaviour’. Anything can happen, up to and including formatting the entire disk or making the user’s computer sentient and starting to do a good impression of Skynet. You might think that those can’t happen, but at Halting problem levels of generality, as soon as an assumption the code depended on is broken, the program can do anything. That is why it’s important to fail-fast and stop the program as soon as an invariant is broken, to minimise the damage that is done. What does this mean in practice? To start with, document and check your assumptions. As with most things, there is a level of judgement required. How you check and document your assumptions depends on how the code is used (that’s some more assumptions you’ve made), how likely it is a method will be passed invalid arguments or called in an invalid state, how likely it is the assumptions will be broken, how expensive it is to check the assumptions, and how bad things are likely to get if the assumptions are broken. Now, some assumptions you can assume unless proven otherwise. You can safely assume the C# compiler, CLR, and computer all run the method correctly, unless you have evidence of a compiler, CLR or processor bug. You can also assume that interface implementations work the way you expect them to; implementing an interface is more than simply declaring methods with certain signatures in your type. The behaviour of those methods, and how they work, is part of the interface contract as well. For example, for members of a public API, it is very important to document your assumptions and check your state before running the bulk of the method, throwing ArgumentException, ArgumentNullException, InvalidOperationException, or another exception type as appropriate if the input or state is wrong. For internal and private methods, it is less important. If a private method expects collection items in a certain order, then you don’t necessarily need to explicitly check it in code, but you can add comments or documentation specifying what state you expect the collection to be in at a certain point. That way, anyone debugging your code can immediately see what’s wrong if this does ever become an issue. You can also use DEBUG preprocessor blocks and Debug.Assert to document and check your assumptions without incurring a performance hit in release builds (a short sketch of this follows at the end of this post). On my coding soapbox… A few pet peeves of mine around assumptions. Firstly, catch-all try blocks:

        try { ... } catch { }

    A catch-all hides exceptions generated by broken assumptions, and lets the program carry on in an unknown state. Later, an exception is likely to be generated due to further broken assumptions due to the unknown state, causing difficulties when debugging as the catch-all has hidden the original problem. It’s much better to let the program crash straight away, so you know where the problem is. You should only use a catch-all if you are sure that any exception generated in the try block is safe to ignore. That’s a pretty big ask! Secondly, using as when you should be casting. Doing this:

        (obj as IFoo).Method();

    or this:

        IFoo foo = obj as IFoo;
        ...
        foo.Method();

    when you should be doing this:

        ((IFoo)obj).Method();

    or this:

        IFoo foo = (IFoo)obj;
        ...
        foo.Method();

    There’s an assumption here that obj will always implement IFoo. If it doesn’t, then by using as instead of a cast you’ve turned an obvious InvalidCastException at the point of the cast that will probably tell you what type obj actually is, into a non-obvious NullReferenceException at some later point that gives you no information at all. If you believe obj is always an IFoo, then say so in code! Let it fail-fast if not; then it’s far easier to figure out what’s wrong. Thirdly, document your assumptions. If an algorithm depends on a non-trivial relationship between several objects or variables, then say so. A single-line comment will do. Don’t leave it up to whoever’s debugging your code after you to figure it out. Conclusion: It’s better to crash out and fail-fast when an assumption is broken. If it doesn’t, then there’s likely to be further crashes along the way that hide the original problem. Or, even worse, your program will be running in an undefined state, where anything can happen. Unhandled exceptions aren’t good per se, but they give you some very useful information about your code that you didn’t know before. And that can only be a good thing.
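
    As a small illustration of the Debug.Assert suggestion above, here is a hedged sketch (not code from the original article; the method is a stand-in written for this excerpt):

        // A sketch of the guard style described above, not the article's original method.
        using System.Collections.Generic;
        using System.Diagnostics;
        using System.Linq;

        public static class Assumptions
        {
            public static T[] ToArrayWithItemChecked<T>(ICollection<T> coll, T obj, IEqualityComparer<T> comparer)
            {
                // Cheap checks: document the assumptions and fail fast in debug builds.
                Debug.Assert(coll != null, "coll is assumed to be non-null");
                Debug.Assert(comparer != null, "comparer is assumed to be non-null");

                var result = new List<T>(coll);
                if (!result.Contains(obj, comparer))
                    result.Add(obj);

        #if DEBUG
                // A costlier invariant confined to debug builds: obj must now be present.
                Debug.Assert(result.Contains(obj, comparer), "obj expected to be present after the add");
        #endif
                return result.ToArray();
            }
        }

    For a public API, the same assumptions would instead be enforced by throwing ArgumentNullException or InvalidOperationException, as the article notes, so that release builds fail fast too.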

    Read the article

  • Data indexing frameworks fit for large E-Commerce applications

    - by Dabu
    We wrote and still maintain a large E-Commerce application. Our feature list resembles what you would expect from most shops. We'd like to improve some of our features, and now the search/suggestion list functionality (enter some letters, a JScripted suggestion list appears) has caught our eye. Currently, we use http://xapian.org/. It has some drawbacks. Firstly, it's not actually the right solution. It has been created to index documents, not ever-changing data at the granularity that an E-Commerce application would need. Secondly, the load on the database is significant when we reindex all data every night. We'd like a framework that has been designed for indexing database data, which can add to the index easily and without much load, and which can quickly supply data changes made in the back office to the frontend without much load or delay. I'm aware of the fact that Xapian is Open Source and even Free Software, so we could adapt it to our needs if we decided to invest the time and manpower. But taking a quick look around for a more suitable solution seems fair, right? Oh, and commercial applications are fine, too. FOSS is not required. Thanks a bunch.

    Read the article

  • The Stub Proto: Not Just For Stub Objects Anymore

    - by user9154181
    One of the great pleasures of programming is to invent something for a narrow purpose, and then to realize that it is a general solution to a broader problem. In hindsight, these things seem perfectly natural and obvious. The stub proto area used to build the core Solaris consolidation has turned out to be one of those things. As discussed in an earlier article, the stub proto area was invented as part of the effort to use stub objects to build the core ON consolidation. Its purpose was merely as a place to hold stub objects. However, we keep finding other uses for it. It turns out that the stub proto should be more properly thought of as an auxiliary place to put things that we would like to put into the proto to help us build the product, but which we do not wish to package or deliver to the end user. Stub objects are one example, but private lint libraries, header files, archives, and relocatable objects, are all examples of things that might profitably go into the stub proto. Without a stub proto, these items were handled in a variety of ad hoc ways: If one part of the workspace needed private header files, libraries, or other such items, it might modify its Makefile to reach up and over to the place in the workspace where those things live and use them from there. There are several problems with this: Each component invents its own approach, meaning that programmers maintaining the system have to invest extra effort to understand what things mean. In the past, this has created makefile ghettos in which only the person who wrote the makefiles feels confident to modify them, while everyone else ignores them. This causes many difficulties and benefits no one. These interdependencies are not obvious to the make utility, and can lead to races. They are not obvious to the human reader, who may therefore not realize that they exist, and break them. Our policy in ON is not to deliver files into the proto unless those files are intended to be packaged and delivered to the end user. However, sometimes non-shipping files were copied into the proto anyway, causing a different set of problems: It requires a long list of exceptions to silence our normal unused proto item error checking. In the past, we have accidentally shipped files that we did not intend to deliver to the end user. Mixing cruft with valuable items makes it hard to discern which is which. The stub proto area offers a convenient and robust solution. Files needed to build the workspace that are not delivered to the end user can instead be installed into the stub proto. No special exceptions or custom make rules are needed, and the intent is always clear. We are already accessing some private lint libraries and compilation symlinks in this manner. Ultimately, I'd like to see all of the files in the proto that have a packaging exception delivered to the stub proto instead, and for the elimination of all existing special case makefile rules. This would include shared objects, header files, and lint libraries. I don't expect this to happen overnight — it will be a long term case by case project, but the overall trend is clear. The Stub Proto, -z assert-deflib, And The End Of Accidental System Object Linking We recently used the stub proto to solve an annoying build issue that goes back to the earliest days of Solaris: How to ensure that we're linking to the OS bits we're building instead of to those from the running system. 
The Solaris product is made up of objects and files from a number of different consolidations, each of which is built separately from the others from an independent code base called a gate. The core Solaris OS consolidation is ON, which stands for "Operating System and Networking". You will frequently also see ON called the OSnet. There are consolidations for X11 graphics, the desktop environment, open source utilities, compilers and development tools, and many others. The collection of consolidations that make up Solaris is known as the "Wad Of Stuff", usually referred to simply as the WOS. None of these consolidations is self contained. Even the core ON consolidation has some dependencies on libraries that come from other consolidations. The build server used to build the OSnet must be running a relatively recent version of Solaris, which means that its objects will be very similar to the new ones being built. However, it is necessarily true that the build system objects will always be a little behind, and that incompatible differences may exist. The objects built by the OSnet link to other objects. Some of these dependencies come from the OSnet, while others come from other consolidations. The objects from other consolidations are provided by the standard library directories on the build system (/lib, /usr/lib). The objects from the OSnet itself are supposed to come from the proto areas in the workspace, and not from the build server. In order to achieve this, we make use of the -L command line option to the link-editor. The link-editor finds dependencies by looking in the directories specified by the caller using the -L command line option. If the desired dependency is not found in one of these locations, ld will then fall back to looking at the default locations (/lib, /usr/lib). In order to use OSnet objects from the workspace instead of the system, while still accessing non-OSnet objects from the system, our Makefiles set -L link-editor options that point at the workspace proto areas. In general, this works well and dependencies are found in the right places. However, there have always been failures: Building objects in the wrong order might mean that an OSnet dependency hasn't been built before an object that needs it. If so, the dependency will not be seen in the proto, and the link-editor will silently fall back to the one on the build server. Errors in the makefiles can wipe out the -L options that our top level makefiles establish to cause ld to look at the workspace proto first. In this case, all objects will be found on the build server. These failures were rarely if ever caught. As I mentioned earlier, the objects on the build server are generally quite close to the objects built in the workspace. If they offer compatible linking interfaces, then the objects that link to them will behave properly, and no issue will ever be seen. However, if they do not offer compatible linking interfaces, the failure modes can be puzzling and hard to pin down. Either way, there won't be a compile-time warning or error. The advent of the stub proto eliminated the first type of failure. With stub objects, there is no dependency ordering, and the necessary stub object dependency will always be in place for any OSnet object that needs it. However, makefile errors do still occur, and so, the second form of error was still possible. 
    While working on the stub object project, we realized that the stub proto was also the key to solving the second form of failure caused by makefile errors: Due to the way we set the -L options to point at our workspace proto areas, any valid object from the OSnet should be found via a path specified by -L, and not from the default locations (/lib, /usr/lib). Any OSnet object found via the default locations means that we've linked to the build server, which is an error we'd like to catch. Non-OSnet objects don't exist in the proto areas, and so are found via the default paths. However, if we were to create a symlink in the stub proto pointing at each non-OSnet dependency that we require, then the non-OSnet objects would also be found via the paths specified by -L, and not from the link-editor defaults. Given the above, we should not find any dependency objects from the link-editor defaults. Any dependency found via the link-editor defaults means that we have a Makefile error, and that we are linking to the build server inappropriately. All we need to make use of this fact is a linker option to produce a warning when it happens. Although warnings are nice, we in the OSnet have a zero tolerance policy for build noise. The -z fatal-warnings option that was recently introduced with -z guidance can be used to turn the warnings into fatal build errors, forcing the programmer to fix them. This was too easy to resist. I integrated 7021198 ld option to warn when link accesses a library via default path PSARC/2011/068 ld -z assert-deflib option into snv_161 (February 2011), shortly after the stub proto was introduced into ON. This putback introduced the -z assert-deflib option to the link-editor: -z assert-deflib=[libname] Enables warning messages for libraries specified with the -l command line option that are found by examining the default search paths provided by the link-editor. If a libname value is provided, the default library warning feature is enabled, and the specified library is added to a list of libraries for which no warnings will be issued. Multiple -z assert-deflib options can be specified in order to specify multiple libraries for which warnings should not be issued. The libname value should be the name of the library file, as found by the link-editor, without any path components. For example, the following enables default library warnings, and excludes the standard C library:

        ld ... -z assert-deflib=libc.so ...

    -z assert-deflib is a specialized option, primarily of interest in build environments where multiple objects with the same name exist and tight control over the library used is required. It is not intended for general use. Note that the definition of -z assert-deflib allows for exceptions to be specified as arguments to the option. In general, the idea of using a symlink from the stub proto is superior because it does not clutter up the link command with a long list of objects. When building the OSnet, we usually use the plain form of -z assert-deflib, and make symlinks for the non-OSnet dependencies. The exceptions to this are dependencies supplied by the compiler itself, which are usually found at whatever arbitrary location the compiler happens to be installed at. To handle these special cases, the command line version works better. Following the integration of the link-editor change, I made use of -z assert-deflib in OSnet builds with 7021896 Prevent OSnet from accidentally linking to build system which integrated into snv_162 (March 2011). 
    Turning on -z assert-deflib exposed between 10 and 20 existing errors in our Makefiles, which were all fixed in the same putback. The errors we found in our Makefiles underscore how difficult they can be to prevent without an automatic system in place to catch them. Conclusions The stub proto is proving to be a generally useful construct for ON builds that goes beyond serving as a place to hold stub objects. Although invented to hold stub objects, it has already allowed us to simplify a number of previously difficult situations in our makefiles and builds. I expect that we'll find uses for it beyond those described here as we go forward.

    Read the article

  • Porting Ruby/NCurses Rogue-Like to .NET and FlatRedBall

    - by ashes999
    I created an awesome rogue-like game in Ruby. For the GUI, I used NCurses. Since I'm using FlatRedBall as my engine of choice for Silverlight game development, I want to port this game over. What is the best way to do this efficiently, and what are the pitfalls I should expect? For example, Ruby is object-oriented, like C#, and I should be able to just convert (rewrite) classes one by one. However, I will run into issues like: NCurses API. I need to possibly create my own notions of a "Window", or else rewrite GUI code. It's one class, but it's BIG. Mix-Ins. These are essentially aspect-oriented development. There are a couple of solutions in .NET, like dynamic classes. What else? Also, I should mention that I want to create a C# application out of this. As much as possible, I'll dump reusable and helper code, algorithms, etc. into separate projects and generate reusable DLLs.

    Read the article

  • Did 12.04 just add multi-touch gesture support mid-release?

    - by adempewolff
    I was reviewing the updates I was about to download today and I noticed that a lot of them had to do with gesture support, and noticed that many of these were new installs rather than upgrades. Has 12.04 just added multi-touch gesture support mid-release? If so, what are the capabilities that this adds? Which applications already support these capabilities, and can I expect others to add support in the near future? Here are the packages that were installed:

        Install: libframe6:amd64 (2.2.4-0ubuntu0.12.04.1), libgeis1:amd64 (2.2.9.2-0ubuntu1), libgrail5:amd64 (3.0.6-0ubuntu0.12.04.01, automatic)

    And here are those that were upgraded (also including many with touch support):

        Upgrade: libgrip0:amd64 (0.3.4-0ubuntu2~ubuntu12.04.1, 0.3.5-0ubuntu1~12.04.1), eog:amd64 (3.4.2-0ubuntu1, 3.4.2-0ubuntu1.1), ginn:amd64 (0.2.4-0ubuntu1, 0.2.4.1-0ubuntu1)

    The descriptions for the new installs are: libgeis1 (Gesture engine interface support) - a common API for clients of a systemwide gesture recognition and propagation engine. libframe6 (Touch Frame Library) - this library handles the buildup and synchronization of a set of simultaneous touches; the library is input agnostic, with bindings for mtdev, frame and XI2.1. libgrail5 (Gesture Recognition And Instantiation Library) - this library consists of an interface and tools for handling gesture recognition and gesture instantiation; applications can use the grail callbacks to receive gesture primitives and raw input events from the underlying kernel device. And the descriptions for the upgraded packages are: libgrip0 (provides multitouch gestures to GTK+ apps) - Libgrip hooks gesture recognition into GTK+ applications. ginn (Gesture Injector: No-GEIS, No-Toolkits) - a daemon with jinn-like wish-granting capabilities: it gives applications the ability to support a subset of multi-touch gestures without having to integrate GEIS or multi-touch GTK/Qt libs. Adding in a ton of new libraries and upgrading the existing components makes me wonder if 12.04 is meant to start natively supporting gestures other than two-finger scroll in the near future. I expected these capabilities to be introduced soon, but I thought that they would only be rolled out in a new release, not as upgrades for an existing release. Anyone have any info about this?

    Read the article

  • String patterns that can be used to filter and group files

    - by Louis Rhys
    One of our applications filters files in a certain directory, extracts some data from each and exports a document from the extracted data. The algorithm for extracting the data depends on the file, and so far we use a regex to select the algorithm to be used; for example .*\.txt will be processed by algorithm A, foo[0-5]\.xml will be processed by algorithm B, etc. However, now we need some files to be processed together. For example, in one case we need two files, foo.*\.xml and bar.*\.xml. Part of the information to be extracted exists in the foo file, and the other part in the bar file. Moreover, we need to make sure the wildcard portions are compatible. For example, if there are 6 files foo1.xml foo23.xml bar1.xml bar9.xml bar23.xml foo4.xml I would expect foo1 and bar1 to be identified as a group, and foo23 and bar23 as another group. bar9 and foo4 have no pair, so they will not be processed. Now, since the filter is configured by the user, we need to have a pattern that can express the above requirement. I don't think you can express a meaning like the above in standard regex. (foo|bar).*\.xml will match all 6 files above, and we can't identify which file is paired with a particular file. Is there any standard pattern that can express it? Or any idea how to modify regex to support this, in a way that can be implemented easily?
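
    For illustration, here is one way to make the pairing requirement concrete in code (a sketch in C# only; it groups on a captured key rather than expressing the pairing in a single standard regex pattern, which is the part that remains the open question):

        // Sketch: pair foo*/bar* files by the part of the name the two patterns share.
        using System;
        using System.Linq;
        using System.Text.RegularExpressions;

        class PairingSketch
        {
            static void Main()
            {
                var files = new[] { "foo1.xml", "foo23.xml", "bar1.xml", "bar9.xml", "bar23.xml", "foo4.xml" };

                // The named group "key" is the portion both patterns must agree on.
                var fooPattern = new Regex(@"^foo(?<key>.*)\.xml$");
                var barPattern = new Regex(@"^bar(?<key>.*)\.xml$");

                var fooByKey = files.Select(f => fooPattern.Match(f)).Where(m => m.Success)
                                    .ToDictionary(m => m.Groups["key"].Value, m => m.Value);
                var barByKey = files.Select(f => barPattern.Match(f)).Where(m => m.Success)
                                    .ToDictionary(m => m.Groups["key"].Value, m => m.Value);

                // Only keys present in both sets form a group; unmatched files are ignored.
                foreach (var key in fooByKey.Keys.Intersect(barByKey.Keys))
                    Console.WriteLine($"group {key}: {fooByKey[key]} + {barByKey[key]}");
                // Prints: group 1: foo1.xml + bar1.xml, and group 23: foo23.xml + bar23.xml
            }
        }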

    Read the article

  • Looking for 2D Cross platform suggestions based on requirements specified

    - by MannyG
    I am an intermediate developer with minor experience on enterprise mobile applications for iphone, android and blackberry looking to build my first ever mobile game. I did a google search for some game dev forums and this popped up so I thought I would try posting here as I lack luck elsewhere. If you have ever heard of the game for the iphone and android platform entitled avatar fight then you will have an idea of the graphic capabilities I require. Basically the battles which are automated one sprite attacking another doing cool animations but all in 2d. My buddy and I have two motivations, one is to jump into mobile Dev as my experience is limited as is his so we would like some trending knowledge (html5 would be nice to learn) . The other is to make some money on the side, don't expect much but polishing the game and putting our all will hopefully reward us a bit. We have looked into corona engine, however a lot of people are saying it is limited in the graphics department, we are open to learning new languages like lua, c++, python etc. Others we have looked at include phonegap, rhomobile, unity, and the list goes on. I really have no idea what the pros and cons of these are but for a basic battle sequence and some mini games we want to chose the right one. Some more things that we will be doing include things like card games, side scrolling flying object based games, maybe fishing stuff. We want to start small with these minigames and work our way up to the idea we would like to implement in the future. We only want to work in 2D. So with these requirements please help me chose a platform to work on (cross platform is what we are ideally leaning towards). Please feel free to throw in some pieces of advice you may have for newbie game developers like myself too. Thank you for reading!

    Read the article

  • Is it relatively safer to install a kernel from the "Canonical Kernel Team PPA" than from "Mainline"?

    - by tijybba
    I have already referred to most of the questions about upgrading from mainline builds or compiling from the latest source or a PPA, and also concluded that it can cause breakage to a currently stable installed system. My question is regarding the kernel builds from the Canonical Kernel Team PPA, which I have subscribed to in Ubuntu 12.04 64-bit, and whose description states: This is the core kernel team as hired by Canonical. Do not use this team, use ubuntu-kernel-team instead. My current stable kernel is 3.2.0-27.42 from the Ubuntu repository, and I also consider the Canonical Kernel Team to be official; it is currently urging me to upgrade to 3.2.0-27.43, which, from the odd number and from the PPA description, is categorized as unstable. From this, it can be said the next stable release would be 3.2.0-27.44. Is upgrading to the .43 version stable enough to continue with, since .44 will be provided by Ubuntu itself based on the .43 version, and so on? Though I can't expect a lot of changes, does it provide new improvements or just bug fixes, since it is just a preceding release? Also, apart from the Ubuntu mainline kernel, is the Canonical Kernel Team different? If so, in what development or contribution terms? Is the Ubuntu kernel developed by two different teams or the same team? P.S.: I just noticed that sudo apt-get update && sudo apt-get upgrade offers me the upgrade to the .43 kernel, which would normally require sudo apt-get dist-upgrade to upgrade to a newer kernel and would otherwise show a message like "The following packages were not upgraded...". Is this an error, or an exception for this Canonical Kernel PPA?

    Read the article

  • How do I get OBDII software working?

    - by NoBugs
    I have an OBDII USB cable for vehicle diagnostics; unfortunately I haven't been able to get it working on Ubuntu 12.04. The closest I've come is using the VAG-COM software with Wine, using the ln -s /dev/ttyUSB0 ~/.wine/dosdevices/com1 trick and running stty -F /dev/ttyUSB0 speed 9600 repeatedly. It will connect and show the vehicle is OBDII, but none of the useful features seem to be working. I tried:

        Scantool - says it's connecting to the /dev/device in the terminal, but doesn't.
        obdgpslogger - times out all the time.
        pyobd - This seems to be the most up-to-date source I could find; I had to adjust the code a bit to work (see here for changes). Still, in the obd_io.py interpret_result function, it says it's looking for 4 space-separated numbers, whereas the USB serial is receiving the bogus code "0100" instead.

    The device shows up in lsusb as: ID 0403:6001 Future Technology Devices International, Ltd FT232 USB-Serial (UART) IC. Is the problem that these native tools don't expect a USB serial, or a serial of this type? Or are these apps too old to recognize the OBD2 protocol of this vehicle?

    Read the article
