Search Results

Search found 8219 results on 329 pages for 'less'.


  • Why does my name resolution hit the DNS even with a hosts file entry?

    - by Volomike
    I'm running Ubuntu 10.04.2 LTS Desktop. Being a web developer, I naturally created a "me.com" entry in my /etc/hosts file. Unfortunately, my name resolution goes out to the DNS before first checking my local hosts entry, and I can't figure out why. The end result is that if my /etc/resolv.conf has "nameserver 127.0.0.1" as its first entry, then I get a response back in my web browser from me.com (local) in less than a second. But if I don't have that entry, then my response sometimes takes as much as 5 seconds if my ISP is a little slow. The problem was so troublesome that I actually had to file a question here (which someone resolved) about how to automatically insert that entry into /etc/resolv.conf. But one of the users here (@shellaholic) highly recommended (and commented back and forth with me about it) that I file this question. Do you know why my workstation's name resolution hits the DNS server before my /etc/hosts file entry? For now, I'm using the resolv.conf trick (see link above).
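
    A first thing to check (my assumption, since the post doesn't show it) is the lookup order on the "hosts:" line in /etc/nsswitch.conf; "files" must come before "dns" for /etc/hosts to win:

        $ grep '^hosts:' /etc/nsswitch.conf
        hosts:          files mdns4_minimal [NOTFOUND=return] dns

    If "dns" appears before "files" there, it is the lookup order, not the hosts file itself, that is the culprit.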

    Read the article

  • My New Job

    - by Stuart Brierley
    Last year I started a new job with a logistics company in the North of England, where I was responsible for the management, design and development of IT integration strategies, architectures and solutions using BizTalk Server 2009.  This included the design and implementation of the BizTalk Server 2009 infrastructure, the definition of development standards, mentoring a fellow developer in the ways of BizTalk and migrating a number of existing solutions from Softshare over to BizTalk 2009. Unfortunately I then realised that, following this initial set-up, there didn't actually seem to be that much BizTalk work for me to get stuck into, and reluctantly I have now moved on to a very similar role with the country's largest office supplies company.  Based in Sheffield, we distribute office supplies UK-wide and computer supplies across Europe. The situation here is slightly different from when I first joined my previous employer.  Whereas that was a greenfield installation with no previous BizTalk solutions in place, my new employer currently has a number of live BizTalk 2000 (!) and BizTalk 2006 solutions in place.  Unfortunately the infrastructure around these is less than ideal, with no clear distinction between development and test environments and no source control whatsoever! We are currently building a proposal for a new BizTalk Server 2010 implementation, where I am hopeful of being able to implement fully independent development, test and pseudo-live environments, alongside an enterprise-level live installation.  We should also be introducing Team Foundation Server to the development process, thereby giving us some much-needed source control capabilities. Following this is likely to be a period of migration for the existing BizTalk solutions, along with the onward development of new projects and initiatives - I'm hoping to be a busy man for the foreseeable future :o)

    Read the article

  • Google Rolls Out a New and Compact Navigation Bar

    - by Jason Fitzpatrick
    Earlier this spring Google introduced the black navigation bar; now they've updated the bar to take up less space and be more useful. Although the black bar is useful insofar as it gives you quick access to Google services (useful, of course, only if you use those services), the new navigation bar (seen in the video above) includes an improved layout. Rather than using the bar space to spread out links the user may or may not use, the service links are now tucked into a mouse-over menu accessed by hovering over the Google logo. The majority of the space previously taken up by links and the black bar itself is now a search box. If you don't already see the new interface, look for it to appear in your Google account within the next few days. Hit up the link below to read the official announcement. The Next Stage In Our Redesign [The Official Google Blog]

    Read the article

  • How to diagnose and fix Kernel Panic Fatal Machine Check error?

    - by 0x4a6f4672
    I have got a new Samsung Series 7 laptop with a dual-boot setup for Windows 8 and Ubuntu 12.10. A fine machine, comparable to a MacBook Pro. The Ubuntu installation was quite a hassle, but with the help of Boot Repair it finally seemed to work. Or so I thought. Windows 8 starts fine, but when I try to start Ubuntu the following Machine Check Exception error regularly occurs, quite similar to this one:

        [Hardware Error] CPU 1: Machine Check Exception: 5 Bank 6
        [Hardware Error] RIP !inexact! 33 <00007fab2074598a>
        [Hardware Error] TSC 95b623464c ADDR fe400 MISC 3880000086
        .. [similar messages for CPU 2, 3 and 0] ..
        [Hardware Error] Machine Check: Processor context corrupt
        Kernel panic - not syncing: Fatal Machine Check
        Rebooting in 30 seconds

    Kernel panic does not sound good. Then it starts to reboot, and the second boot attempt often works. Is it a kernel or driver problem? The laptop has an Intel Core i7 processor. I already deactivated Hyper-Threading in the BIOS, but it does not seem to help :-( I also disabled the Execute Disable Bit (EDB) flag in the BIOS. EDB is an Intel hardware-based security feature that can help reduce system exposure to viruses and malicious code. Since I disabled it, the error has occurred less frequently, but it still appears occasionally :-( It seems to be the same error as described here and here. Maybe a Samsung-specific kernel problem? A similar error also happens on a Samsung Ultrabook Series 9 (which seems to be kernel bugs 49161 and 47121). On my Samsung Series 7, it still occurs, for instance, during booting on battery after "Checking battery state". Perhaps anyone else has an idea? These kernel panic errors are really annoying..

    Read the article

  • Need help understanding XNA 4.0 BoundingBox vs BoundingSphere Intersection

    - by nerdherd
    I am new to both game programming and XNA, so I apologize if I'm missing a simple concept or something. I have created a simple 3D game with a player and a crate, and I'm working on getting my collision detection working properly. Right now I am using a BoundingSphere for my player, and a BoundingBox for the crate. For some reason, XNA only detects a collision when my player's sphere touches the front face of the crate. I'm rendering all the BoundingSpheres and BoundingBoxes as wireframes so I can see what's going on, and everything visually appears to be correct, but I can't figure out this behavior. I have tried these checks:

        playerSphere.Intersects(crate.getBoundingBox())
        playerSphere.Contains(crate.getBoundingBox(), ContainmentType.Intersects)
        playerSphere.Contains(crate.getBoundingBox()) != ContainmentType.Disjoint

    But they all seem to produce the same behavior (in other words, they are only true when I hit the front face of the crate). The interesting thing is that when I use a BoundingSphere for my crate the collision is detected as I would expect, but of course this makes the edges less accurate. Any thoughts or ideas? Have I missed something about how BoundingSpheres and BoundingBoxes compute their intersections? I'd be happy to post more code or screenshots to clarify if needed. Thanks!
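
    One classic cause of exactly this symptom (an assumption on my part, not something the post confirms) is a BoundingBox built in model space that is never updated as the crate moves, so only the region near the origin ever overlaps the player. A sketch of keeping the box in world space using standard XNA 4.0 calls, where crateLocalBox and crateWorldMatrix are hypothetical names for the crate's model-space box and current world transform:

        // using Microsoft.Xna.Framework;
        // Rebuild the crate's box from its transformed corners each frame
        // so the intersection test happens in world space.
        Vector3[] corners = crateLocalBox.GetCorners();
        for (int i = 0; i < corners.Length; i++)
            corners[i] = Vector3.Transform(corners[i], crateWorldMatrix);
        BoundingBox crateWorldBox = BoundingBox.CreateFromPoints(corners);

        bool hit = playerSphere.Intersects(crateWorldBox); // world-space test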

    Read the article

  • .NET WPF Charting Control

    - by Randy Minder
    We're very close to wrapping up a WPF dashboard application using SSRS (.RDLC files) and the Microsoft Report Viewer. For a number of reasons, this combination has turned out to be less than we had hoped. One of the biggest problems is that the Microsoft Report Viewer is not a WPF control. We've had other problems as well. Our app consists of at least 5 tabs, and each tab has at least 4-5 charts on it. All the charts update on their own timed schedules (every 15-30 minutes or so). For the next version I'd like to explore other .NET charting tools for WPF. Performance is absolutely critical, as is resource usage. The tool must support WPF and as many chart types as possible. Can anyone recommend (or not recommend) charting tools they have experience with? We own Telerik and I've dabbled with their charting control. At the 30,000-foot level, it seems quite nice.

    Read the article

  • Searching Your PL/SQL Source with Oracle SQL Developer

    - by thatjeffsmith
    Version 3.2.1 included a few tweaks along with several hundred bug fixes. One of those tweaks was the addition of 'ALL_SOURCE' as a selection for the Type drop-down in the Find Database Object panel (scroll ALL the way down to the bottom of the list). Searching the database for your code or objects can be expensive. The ALL_SOURCE view comes in pretty handy when I want to demo how to cancel long-running queries or the Task Progress panel - did you know you can manage all of your long-running queries there? Yeah, don't run this: I pretty much hosed our demo pod at Open World because I ran that same query but added an ORDER BY b.TEXT DESC to it, which blew up the TEMP space and filled the primary partition on the image. Fun stuff. Anyways, where was I going with this? Oh yeah, searching ALL_SOURCE can be expensive. So we took it out of the product for a while. And now it's back in. If you select the 'ALL' field, it doesn't actually search EVERYTHING, because that would probably be less than helpful. So if you want to search your PL/SQL objects for a scrap or bit of code, use the 'ALL_SOURCE' option in v3.2.1. Double-click on the search results to go to the code you're looking for. Be careful what you search for. Just like any query, it could take a while.
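
    For reference, a search along these lines boils down to a query against the data dictionary, roughly like this sketch (my own approximation against the standard ALL_SOURCE view, not SQL Developer's exact statement; the search string is just an example):

        SELECT owner, name, type, line, text
          FROM all_source
         WHERE UPPER(text) LIKE '%DBMS_OUTPUT%'
         ORDER BY owner, name, type, line;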

    Read the article

  • Platform for Efficiency: Boeing Defense, Space & Security integrates supply chain processes using Oracle Business Process Management solutions. by Fred Sandsmark

    - by JuergenKress
    Like most companies, aerospace giant Boeing has its jargon - words and phrases that uniquely define its products and processes. Take the word platform. It is used at Boeing to mean a family of aircraft - the F/A-18 fighter, for example, or the 777 jetliner. Since August 2009, however, employees in the Global Services & Support (GS&S) division of Boeing Defense, Space & Security have been talking about a different sort of platform: a supply chain technology platform, based on Oracle Business Process Management (Oracle BPM) solutions and Oracle SOA Suite. That platform, built with the assistance of Oracle Diamond Partner Capgemini, is serving as a jumping-off point for Boeing's GS&S staff to deploy radically improved business processes supported by Oracle Fusion Applications to build a high-visibility, end-to-end supply chain. This business process-driven technology platform has ambitious goals: to help GS&S respond more quickly and accurately to its customers' needs, to make business processes at all GS&S sites more consistent and less expensive, and to create a foundation for further improvement and efficiency. Read the full article here. Want to publish your BPM11g success story - request a partner/customer reference? The BPM Center of Excellence and First 100 Days of BPM documents (MWD_bpm_si_Centre_of_Excellence_0811.pdf, First 100 Days of BPM whitepaper.pdf) have been added to our SOA Community Workspace (SOA Community membership required). For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • Single Responsibility Principle - How Can I Avoid Code Fragmentation?

    - by Dean Chalk
    I'm working on a team where the team leader is a virulent advocate of SOLID development principles. However, he lacks a lot of experience in getting complex software out of the door. We have a situation where he has applied SRP to what was already quite a complex code base, which has now become very highly fragmented and difficult to understand and debug. We now have a problem not only with code fragmentation, but also with encapsulation, as methods within a class that may have been private or protected have been judged to represent a 'reason to change' and have been extracted to public or internal classes and interfaces, which is not in keeping with the encapsulation goals of the application. We have some class constructors which take over 20 interface parameters, so our IoC registration and resolution is becoming a monster in its own right. I want to know if there is any 'refactor away from SRP' approach we could use to help fix some of these issues. I have read that it doesn't violate SOLID if I create a number of empty coarser-grained classes that 'wrap' a number of closely related classes to provide a single point of access to the sum of their functionality (i.e. mimicking a less overly-SRP'd class implementation), as sketched below. Apart from that, I cannot think of a solution which will allow us to pragmatically continue with our development efforts, while keeping everyone happy. Any suggestions?
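
    A minimal sketch of that wrapping idea in C# (the interface, class and method names here are hypothetical stand-ins for the extracted fragments):

        // Hypothetical facade: callers once again see one cohesive service,
        // while the SRP-extracted pieces live on behind it.
        public class Order { /* hypothetical domain object */ }

        public interface IOrderValidator { void Validate(Order order); }
        public interface IOrderPricer    { void Price(Order order); }
        public interface IOrderRepository { void Save(Order order); }

        public class OrderService
        {
            private readonly IOrderValidator _validator;
            private readonly IOrderPricer _pricer;
            private readonly IOrderRepository _repository;

            public OrderService(IOrderValidator validator,
                                IOrderPricer pricer,
                                IOrderRepository repository)
            {
                _validator = validator;
                _pricer = pricer;
                _repository = repository;
            }

            public void Place(Order order)
            {
                _validator.Validate(order);  // each step delegates to one fragment
                _pricer.Price(order);
                _repository.Save(order);
            }
        }

    Callers (and IoC registration for them) then deal with one OrderService instead of twenty fragments, while the fragments themselves stay single-purpose.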

    Read the article

  • Highly scalable and dynamic "rule-based" applications?

    - by Prof Plum
    For a large enterprise app, everyone knows that being able to adjust to change is one of the most important aspects of design. I use a rule-based approach a lot of the time to deal with changing business logic, with each rule being stored in a DB. This allows easy changes to be made without diving into nasty details. Now, since C# cannot Eval("foo(bar);"), this is accomplished by using formatted strings stored in rows that are then processed in JavaScript at runtime. This works fine; however, it is less than elegant, and would not be the most enjoyable thing for anyone else to pick up once it becomes legacy. Is there a more elegant solution to this? When you get into thousands of rules that change fairly frequently it becomes a real bear, but this cannot be such an uncommon problem that someone has not thought of a better way to do it. Any suggestions? Is this current method defensible? What are the alternatives? Edit: Just to clarify, this is a large enterprise app, so no matter which solution works, there will be plenty of people constantly maintaining its rules and data (around 10). Also, the data changes frequently enough to say that some sort of centralized server system is basically a must.
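
    One common alternative shape (a sketch under my own assumptions, using C# 9 record syntax; not a drop-in replacement for the string/JavaScript setup) is to keep only rule names and parameters in the DB and bind them to compiled delegates at runtime:

        using System;
        using System.Collections.Generic;

        // Hypothetical: each DB row names a rule and carries a parameter;
        // the names bind to compiled predicates instead of eval'd strings.
        public record RuleRow(string Name, decimal Threshold);

        public static class RuleCatalog
        {
            private static readonly Dictionary<string, Func<decimal, RuleRow, bool>> Rules =
                new()
                {
                    ["MaxOrderTotal"] = (total, row) => total <= row.Threshold,
                    ["MinOrderTotal"] = (total, row) => total >= row.Threshold,
                };

            public static bool Evaluate(RuleRow row, decimal orderTotal) =>
                Rules[row.Name](orderTotal, row);
        }

    The data stays editable in the DB, but the logic is compiled, testable C# rather than strings interpreted elsewhere.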

    Read the article

  • Partner Webcast: Implementing on SOA - A Hands-On Technology Demonstration

    - by Thanos
    Service Oriented Architecture enables organizations to operate more efficiently and react faster to opportunities. How? By helping you create a flexible application architecture that supports greater business agility. You decide how quickly you want to move. You can start by implementing an application integration platform. Then, you can evolve your environment gradually by introducing business process management, business rules, governance and event processing. This unified but flexible approach also allows you to maximize the long-term cost reduction benefits of SOA and cloud-based applications. In this session, you dive into SOA Suite and see some of its advanced features in use. The topics covered range from adapters, automatic and custom business process correlation through service routing, rule-based and manual decisions, to error handling, compensations and extending SOA Suite with your own Java code. Agenda:

    - Service Oriented Architecture
    - The Auctions Scenario
    - Live Demo of the Oracle SOA Suite Features
    - Connecting to non-service-enabled technologies with adapters (Database and File adapter)
    - Orchestrating services with BPEL processes
    - Correlating processes with correlation sets
    - Mediating services
    - Service Component Architecture
    - Event Handling
    - User Notification
    - Human Workflow
    - Business Rules
    - Fault Handling patterns
    - Developing custom components with Spring and using them in SOA Suite composites

    Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. Duration: 1 hour. Register Now. For all your questions and support requests to adopt and implement the latest Oracle technologies please contact us at [email protected]

    Read the article

  • Do we really need a thousand Linux distributions?

    - by nebukadnezzar
    Pointed from an answer to a (possibly related) question, I came across this graphic, and I'm shocked by how many Linux distributions currently exist. However, it seems that most of these distributions are forks of already popular distributions with minimal changes, usually limited to themes, wallpapers and buttons - the kind of stuff most people probably wouldn't see as a reason to fork a Linux distribution. Of course, someone will always say "Open source is also about the freedom of choice", and while I wholeheartedly agree, I do not believe that this is a valid reason to fork an already perfectly working distribution into a new one, which might possibly result in less security/stability due to a smaller group of developers. There's another problem: those who want to switch to Linux are confronted with a never-ending list of Linux distributions, and rightfully wonder which they're supposed to choose (in fact, I was facing that problem before I discovered Ubuntu). There might be (very few) valid reasons to fork a distribution:

    - Specializing in a particular topic (FOSS-only, a work-related topic (e.g., for a hospital), etc.)
    - An exceptional architecture that requires a special set of software
    - Use of non-FOSS, proprietary technology, and such

    But even with these points in mind, it would still seem easier to create a subdistribution with the required changes, such as Xubuntu with XFCE4, Kubuntu with KDE4, Fluxbuntu with Fluxbox, etc. So, why exactly do we need so many distributions?

    Read the article

  • In the Aggregate: How Will We Maintain Legacy Systems?

    - by Jim G.
    NEW YORK - With a blast that made skyscrapers tremble, an 83-year-old steam pipe sent a powerful message that the miles of tubes, wires and iron beneath New York and other U.S. cities are getting older and could become dangerously unstable. (July 2007 story about a burst steam pipe in Manhattan.) We've heard about software rot and technical debt. And we've heard from the likes of:

    - "Uncle Bob" Martin - who warned us about "the consequences of making a mess".
    - Michael C. Feathers - who gave us guidance for 'Working Effectively With Legacy Code'.

    So certainly the software engineering community is aware of these issues. But I feel like our aggregate society does not appreciate how these issues can plague working systems and applications. As Steve McConnell notes: "...Unlike financial debt, technical debt is much less visible, and so people have an easier time ignoring it." If this is true, and I believe that it is, then I fear that governments and businesses may defer regular maintenance and fortification against hackers until it is too late. [Much like NYC and the steam pipes.] My question: Do you share my concern? And if so, is there a way that we can avoid the software equivalent of NYC and the steam pipes?

    Read the article

  • Architecture design with MyBatis mappers

    - by Wolf
    I am creating a REST web service for providing data. I am using Spring MVC for handling REST requests, and MyBatis for data access. The application should be designed so that it is easy to change the data access implementation (for example to Hibernate or something else), and it has to be fast (so I am trying to avoid unnecessary overcomplication of the design). Now my question is about the general design of the layers. I would normally use a DAO interface and then different implementations for different data access strategies, but MyBatis uses interfaces to access the data. So I can think of 2 possible models, but I am not sure which one is better or if there is any other nice way:

    1. Controller layer uses Service layer interfaces; services are then implemented for each data access strategy - for example, for MyBatis: the service implementation uses Mapper classes to access data, does whatever it needs to do with the data, and sends it to the controller layer.
    2. Controller layer uses Service layer; the service layer uses DAO interfaces; DAOs are then implemented for each data access strategy - for example, for MyBatis: the DAO class uses a mapper interface to access data and sends it to the service layer, which does whatever it needs to do with the data and sends it to the controller layer (sketched below).

    I prefer the first strategy as it seems to be less complicated, but then I would have to write all of the service code again for another data access strategy. What do you think? Thank you
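
    For the second model, the MyBatis-specific code stays hidden behind the DAO interface; a minimal sketch in Java (UserDao, UserMapper, User and the method names are hypothetical stand-ins):

        // Hypothetical DAO abstraction; the rest of the app sees only this.
        public interface UserDao {
            User findById(long id);
        }

        // Hypothetical MyBatis mapper interface, bound to SQL in XML or
        // annotations; MyBatis generates the implementing proxy.
        public interface UserMapper {
            User selectUserById(long id);
        }

        // MyBatis-specific implementation: the mapper is a detail here,
        // so swapping to Hibernate only means writing another UserDao.
        public class MyBatisUserDao implements UserDao {
            private final UserMapper mapper;

            public MyBatisUserDao(UserMapper mapper) {
                this.mapper = mapper;
            }

            @Override
            public User findById(long id) {
                return mapper.selectUserById(id);
            }
        }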

    Read the article

  • Problem with WCF-SQL Adapter

    - by Paul Petrov
    When using the WCF receive adapter with the SQL binding in polling mode, please be aware of the following problem.

    Problem: At some regular but seemingly random intervals the application stops processing new requests, places a lock on the database and prevents other applications from accessing it. Initially it looked like a DTC issue, as it was a distributed transaction that stalled most of the time.

    Symptoms: Orchestration instances in Dehydrated state, receive location not picking up new messages, exclusive locks on database tables, errors in the DTC trace.

    Cause: Microsoft has confirmed that there is a bug in the WCF-SQL adapter. In the receive adapter binding configuration there's a receiveTimeout property, set to 10 minutes by default. If during this period no data is found in the table, the adapter starts a new thread and allocates more memory without releasing the old resources. Thus, if there's no new data in the table for a long time, a new thread is created in the host instance every 10 minutes until the thread count reaches the threshold (1000); then there are no threads left for this host instance and it can't start or complete any tasks. At that point the host instance won't be able to do anything, and any other artifacts hosted in the instance will suffer the consequences as well.

    Solution:
    - Set receiveTimeout to the maximum time, 24.20:31:23.6470000 (see the sketch below).
    - Place WCF-SQL receive locations in a separate host to give them their own thread pool and eliminate the impact on other processes.
    - Ensure WCF-SQL dedicated host instances are restarted at an interval less than or equal to receiveTimeout, to flush threads and memory.
    - Monitor the performance counters Process/Thread Count/BTSNTSvc{n} for the thread count trend and respond to alerts by restarting the host instance if it grows.

    If you use the WCF-SQL adapter in Notification mode, make sure to remove sqlAdapterInboundTransactionBehavior, otherwise the location will exhibit the same issue. In that case, though, setting receiveTimeout doesn't help, and new threads will be created at the default interval (10 min), ignoring the maximum setting.
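
    In binding-configuration terms, that first workaround looks roughly like this (a sketch only; the binding name and surrounding elements will differ in your generated config):

        <bindings>
          <sqlBinding>
            <!-- Sketch: push receiveTimeout to its maximum so idle polling
                 does not spawn a new thread every 10 minutes. -->
            <binding name="SqlAdapterBinding"
                     receiveTimeout="24.20:31:23.6470000" />
          </sqlBinding>
        </bindings>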

    Read the article

  • How to Export Flash Animation Data

    - by charliep
    I'd love for my partner, the artist, to be able to animate using Flash movieclips and timelines. Then I, the programmer, would like to read the raw Flash info and re-program it into my engine of choice (which happens to be Torque2D). The data I'd want is:

    - the bitmap images that were used in Flash, like the head and body
    - the links between the images, like where the head connects to the body
    - the motion data from the Flash animation, like move, rotate (at what speed), shear, etc. for the head or arms or whatever

    Is there any way to get this data? Here's what I know so far. There are tools like SWFSheet and Spriteloq that convert the entire Flash animation into a frame-by-frame sprite animation (in a sprite sheet). This would take too much space in my case, so I'd like to avoid that. Re-animating on the fly would take much less texture memory. There is a PDF that describes the SWF file format but NOT the individual components like the movieclips. So does anyone know of a library I can use, or how I can learn more about the movieclip components and whatnot? (more better tags: transform, export, convert)

    Read the article

  • Is there any reason not to go directly from client-side Javascript to a database?

    - by Chris Smith
    So, let's say I'm going to build a Stack Exchange clone and I decide to use something like CouchDB as my backend store. If I use its built-in authentication and database-level authorization, is there any reason not to allow the client-side Javascript to write directly to the publicly available CouchDB server? Since this is basically a CRUD application and the business logic consists of "Only the author can edit their post", I don't see much of a need for a layer between the client-side stuff and the database. I would simply use validation on the CouchDB side to make sure someone isn't putting in garbage data (see the sketch below), and make sure that permissions are set properly so that users can only read their own _user data. The rendering would be done client-side by something like AngularJS. In essence you could just have a CouchDB server and a bunch of "static" pages and you're good to go. You wouldn't need any kind of server-side processing, just something that could serve up the HTML pages. Opening my database up to the world seems wrong, but in this scenario I can't think of why, as long as permissions are set properly. It goes against my instinct as a web developer, but I can't think of a good reason. So, why is this a bad idea? EDIT: Looks like there is a similar discussion here: Writing Web "server less" applications EDIT: Awesome discussion so far, and I appreciate everyone's feedback! I feel like I should add a few generic assumptions instead of calling out CouchDB and AngularJS specifically. So let's assume that:

    - The database can authenticate users directly from its hidden store
    - All database communication would happen over SSL
    - Data validation can (but maybe shouldn't?) be handled by the database
    - The only authorization we care about other than admin functions is someone only being allowed to edit their own post
    - We're perfectly fine with everyone being able to read all data (EXCEPT user records which may contain password hashes)
    - Administrative functions would be restricted by database authorization
    - No one can add themselves to an administrator role
    - The database is relatively easy to scale
    - There is little to no true business logic; this is a basic CRUD app
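
    In CouchDB specifically, that one rule can live in a design document's validate_doc_update function, which CouchDB runs on every write. A sketch (the author field is my assumption about the post schema):

        // Sketch of a CouchDB validate_doc_update function enforcing
        // "only the author can edit their post"; runs inside CouchDB.
        function (newDoc, oldDoc, userCtx) {
          var isAdmin = userCtx.roles.indexOf('_admin') !== -1;
          if (oldDoc && oldDoc.author !== userCtx.name && !isAdmin) {
            throw({ forbidden: 'Only the author can edit this post.' });
          }
          if (!oldDoc && newDoc.author !== userCtx.name && !isAdmin) {
            throw({ forbidden: 'Posts must be created under your own name.' });
          }
        }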

    Read the article

  • Running Non-profit Web Applications on Cloud/Dedicated Hosting [closed]

    - by cillosis
    Possible Duplicate: How to find web hosting that meets my requirements? I often build web applications purely because I enjoy it. I like building useful tools or open source applications that don't come with a price tag. That being said, many of these applications can be quite complex, requiring services beyond shared hosting (ex. specific PHP extensions). This leaves me with two options:

    1. Make the web application less complex and run it on shared hosting.
    2. Fork out money for cloud or dedicated/VPS hosting.

    Considering the applications are free (I don't make money off of them intentionally), the money for hosting comes out of my own pocket. I know I am not alone in this sticky situation. So the question is: what are the hosting options that provide more advanced features, such as shell access via SSH and the ability to install specific software/extensions (ex. if I wish to use a NoSQL DB such as Redis, MongoDB, or Cassandra), at a free or low price point? I know free usually equates to bad/unreliable hosting - but it's not always the case. There are a couple of providers with free plans I know of:

    - Amazon EC2 - free micro-instance for 1 year
    - AppHarbor - cloud-based .NET web application hosting with a free plan

    What else is available for hosting of non-profit applications?

    Read the article

  • Table Variables: an empirical approach.

    - by Phil Factor
    It isn't entirely a pleasant experience to publish an article only to have it described on Twitter as 'Horrible', and to have it criticized on the MVP forum. When this happened to me in the aftermath of publishing my article on temporary tables recently, I was taken aback, because these critics were experts whose views I respect. What was my crime? It was, I think, to suggest that, despite the obvious quirks, it was best to use Table Variables as a first choice, and to use local Temporary Tables if you hit problems due to these quirks, or if you were doing complex joins using a large number of rows. What are these quirks? Well, table variables have advantages if they are used sensibly, but this requires some awareness by the developer about the potential hazards and how to avoid them. You can be hit by a badly-performing join involving a table variable. Table Variables are a compromise, and this compromise doesn't always work out well. Explicit indexes aren't allowed on Table Variables, so one cannot use covering indexes or non-unique indexes. The query optimizer has to make assumptions about the data rather than using column distribution statistics when a table variable is involved in a join, because there aren't any column-based distribution statistics on a table variable. It assumes a reasonably even distribution of data, and is likely to have little idea of the number of rows in the table variables that are involved in queries. However complex the heuristics used in determining the best way of executing a SQL query might be, and they most certainly are complex, the Query Optimizer is likely to fail occasionally with table variables, under certain circumstances, and produce a Query Execution Plan that is frightful. The experienced developer or DBA will be on the lookout for this sort of problem. In this blog, I'll be expanding on some of the tests I used when writing my article to illustrate the quirks, and include a subsequent example supplied by Kevin Boles.

    A simplified example. We'll start out by illustrating a simple example that shows some of these characteristics. We'll create two tables filled with random numbers and then see how many matches we get between the two tables. We'll forget indexes altogether for this example, and use heaps. We'll try the same join with two table variables, with two table variables plus OPTION (RECOMPILE) in the JOIN clause, and with two temporary tables. The timings are all a bit jerky because of the granularity of the timer, which isn't actually accurate at the millisecond level (I used DATETIME). However, you'll see that the table variable outperforms the local temporary table up to 10,000 rows. Actually, even without the OPTION (RECOMPILE) hint, it does well. What happens when your table size increases? From around 30,000 rows, the table variable is locked into a very bad execution plan unless you use OPTION (RECOMPILE) to provide the Query Optimizer with a decent estimate of the size of the table. However, if it has the OPTION (RECOMPILE), then it is smokin'. Well, up to 120,000 rows, at least. It performs better than a temporary table, and in a good linear fashion. What about mixed table joins, where you are joining a temporary table to a table variable? You'd probably expect the query optimizer to throw up its hands and produce a bad execution plan, as if both were table variables. After all, it knows nothing about the statistics in one of the tables, so how could it do any better?
    Well, it behaves as if it were doing a recompile. And an explicit recompile adds no value at all. (We just go up to 45,000 rows, since we know the bigger picture now.) Now, if you were new to this, you might be tempted to start drawing conclusions. Beware! We're dealing with a very complex beast: the Query Optimizer. It can come up with surprises. What if we change the query very slightly to insert the results into a Table Variable? We change nothing else and just measure the execution time of the statement as before. Suddenly, the table variable isn't looking so much better, even taking into account the time involved in doing the table insert. OK, if you haven't used OPTION (RECOMPILE) then you're toast. Otherwise, there isn't much in it between the table variable and the temporary table. The table variable is faster up to 8,000 rows, and then there is not much in it up to 100,000 rows. Past the 8,000-row mark, we've lost the advantage of the table variable's speed. Any general rule you may be formulating has just gone for a walk. What we can conclude from this experiment is that if you join two table variables, and can't use constraints, you're going to need that OPTION (RECOMPILE) hint.

    Count Dracula and the Horror Join. These tables of integers provide a rather unreal example, so let's try a rather different example, and get stuck into some implicit indexing, by using constraints. What unusual words are contained in the book 'Dracula' by Bram Stoker? Here we get a table of all the common words in the English language (60,387 of them) and put them in a table. We put them in a Table Variable with the word as a primary key, a Table Variable heap and a Table Variable with a primary key. We then take all the distinct words used in the book 'Dracula' (7,558 of them). We then create a table variable and insert into it all those uncommon words that are in 'Dracula', i.e. all the words in Dracula that aren't matched in the list of common words. To do this we use a left outer join, where the right-hand value is null. The results show a huge variation, between the sublime and the gorblimey:

    - If both tables contain a primary key on the columns we join on, and both are Table Variables, it took 33 ms.
    - If one table contains a primary key, the other is a heap, and both are Table Variables, it took 46 ms.
    - If both Table Variables use a unique constraint, the query takes 36 ms.
    - If neither table contains a primary key and both are Table Variables, it took 116,383 ms. Yes, nearly two minutes!!
    - If both tables contain a primary key, one is a Table Variable and the other is a temporary table, it took 113 ms.
    - If one table contains a primary key, and both are temporary tables, it took 56 ms.
    - If both tables are temporary tables and both have primary keys, it took 46 ms.

    Here we see table variables which are joined on their primary key again enjoying a slight performance advantage over temporary tables. Where both tables are table variables and both are heaps, the query suddenly takes nearly two minutes! So what if you have two heaps and you use OPTION (RECOMPILE)? If you take the rogue query and add the hint, then suddenly its time drops to 76 ms. If you add unique indexes, then you've done even better, down to half that time. Here are the text execution plans. So where have we got to? Without drilling down into the minutiae of the execution plans, we can begin to create a hypothesis.
    If you are using table variables, and your tables are relatively small, they are faster than temporary tables, but as the number of rows increases you need to do one of two things: either have a primary key on the column you are joining on, or use OPTION (RECOMPILE). If you try to execute a query that is a join, and both tables are table variable heaps, you are asking for trouble - well, slow queries - unless you give the hint once the number of rows has risen past a point (30,000 in our first example, but this varies considerably according to context).

    Kevin's Skew. In describing the table size, I used the term 'relatively small'. Kevin Boles produced an interesting case where a single-row table variable produces a very poor execution plan when joined to a very, very skewed table. In the original, pasted into my article as a comment, a column consisted of 100,000 rows in which the key column was one number (1). To this were added eight rows with sequential numbers up to 9. When this was joined to a single-row Table Variable with a key of 2, it produced a bad plan. This problem is unlikely to occur in real usage, and the Query Optimizer team probably never set up a test for it. Actually, the skew can be slightly less extreme than Kevin made it. The following test showed that once the table had 54 sequential rows, it adopted exactly the same execution plan as for the temporary table, and then all was well. Undeniably, real data does occasionally cause problems for the performance of joins in Table Variables due to the extreme skew of the distribution. We've all experienced Perfectly Poisonous Table Variables in real live data. As in Kevin's example, indexes merely make matters worse, and the OPTION (RECOMPILE) trick does nothing to help. In this case, there is no option but to use a temporary table. However, one has to note that once the slight de-skew had taken place, the plans were identical across a huge range.

    Conclusions. Where you need to hold intermediate results as part of a process, Table Variables offer a good alternative to temporary tables when used wisely. They can perform faster than a temporary table when the number of rows is not great. For some processing with huge tables, they can perform well when only a clustered index is required, and when the nature of the processing makes an index seek very effective. Table Variables are scoped to the batch or procedure and are unlikely to hang about in TempDB when they are no longer required. They require no explicit cleanup. Where the number of rows in the table is moderate, you can even use them in joins as 'heaps', unindexed. Beware, however, since, as the number of rows increases, joins on Table Variable heaps can easily become saddled with very poor execution plans, and this must be cured either by adding constraints (UNIQUE or PRIMARY KEY) or by adding the OPTION (RECOMPILE) hint if this is impossible. Occasionally, the way that the data is distributed prevents the efficient use of Table Variables, and this will require using a temporary table instead. Table Variables require some awareness by the developer about the potential hazards and how to avoid them. If you are not prepared to do any performance monitoring of your code or fine-tuning, and just want to pummel out stuff that 'just runs' without considering namby-pamby stuff such as indexes, then stick to Temporary tables.
    If you are likely to slosh about large numbers of rows in temporary tables without considering the niceties of processing just what is required and no more, then temporary tables provide a safer and less fragile means to an end for you.
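
    For readers who want to reproduce the first experiment, here is a simplified T-SQL sketch of the join under test (my approximation, not the article's exact harness):

        -- Two table-variable heaps filled with random integers, joined
        -- with OPTION (RECOMPILE) so the optimizer compiles with the
        -- real row counts instead of its usual one-row guess.
        DECLARE @a TABLE (n INT);
        DECLARE @b TABLE (n INT);

        INSERT INTO @a (n)
        SELECT TOP (10000) ABS(CHECKSUM(NEWID())) % 100000
        FROM sys.all_columns c1 CROSS JOIN sys.all_columns c2;

        INSERT INTO @b (n)
        SELECT TOP (10000) ABS(CHECKSUM(NEWID())) % 100000
        FROM sys.all_columns c1 CROSS JOIN sys.all_columns c2;

        -- Count the matches; drop the hint to see the bad-plan behaviour.
        SELECT COUNT(*)
        FROM @a AS a JOIN @b AS b ON a.n = b.n
        OPTION (RECOMPILE);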

    Read the article

  • Cannot convert parameter 1 from 'short *' to 'int *' [closed]

    - by Torben Carrington
    I'm trying to learn pointers, and since I recently learned that a short int takes up less memory (2 bytes, as opposed to the long int's 4, which is the default for int), I wanted to create a pointer that uses the memory address of a short integer. I'm following a tutorial in my book about pointers, and it uses the Swap function. The problem is I receive this error the moment I change everything from int to short int:

        error C2664: 'Swap' : cannot convert parameter 1 from 'short *' to 'int *'
        Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or function-style cast

    Since my code is so small, here is the whole thing:

        #include <iostream>

        void Swap(short int *sipX, short int *sipY)
        {
            short int siTemp = *sipX;
            *sipX = *sipY;
            *sipY = siTemp;
        }

        int main()
        {
            short int siBig = 100;
            short int siSmall = 1;

            std::cout << "Pre-Swap: " << siBig << " " << siSmall << std::endl;
            Swap(&siBig, &siSmall);
            std::cout << "Post-Swap: " << siBig << " " << siSmall << std::endl;

            return 0;
        }

    Read the article

  • Smartphone Apps, music, licenses and fees .. nightmare

    - by mm24
    I have recently asked a question about music in games like Guitar Hero. I have found that in Europe (at least), if I want to use a track composed by a musician who is a member of a royalty-collecting society, I need to pay a flat fee to the society and not only to the member. So a "one-to-one" agreement is not valid, and the society can come to me and ask me for money for each download. Even if the app is FREE! This is the fee sheet of the UK agency: for fees, see "Permanent download services". It is about 1,200 GBP for less than 22,000 copies, and they DON'T specify anything more; they told me on the phone that I need to wait and see how many downloads I get before knowing the price. This is kind of crazy, as if I give away the app for free I will have to PAY 1,200 GBP!! I am shocked and I feel very bad. One agency suggested that I use a fake name for the artist, but that way is not fair to my collaborators, as what they hope is that the app gets lots of downloads so that other people will get to know about them and hopefully commission them more work. The other solution is to work only with non-registered musicians. The question here to you is: has anyone found a legal way to use music from registered authors in a game?

    Read the article

  • How can a non-technical person learn to write a spec for small projects?

    - by Joseph Turian
    How can a non-technical person learn to write specs for small projects? A friend of mine is trying to outsource some development on a statistics project. In particular, he does a lot of work in Excel, and wants to outsource the creation of scripts to do what he now does by hand. However, my friend is extremely non-technical. He is poor at writing technical specs. When he does write a spec, it is written the way you would describe doing something in Excel (go to this cell and then copy the value to that cell). It is also overly verbose, and repeats examples several times. I'm not sure if he properly describes corner cases. The first project he outsourced was a failure. I think he overdescribed some details, but underdescribed corner cases. That, and/or the coder he hired didn't think through the corner cases and ask appropriate questions. I'm not sure. I got on IM with him and it took me half an hour to dig out a description that should have taken five minutes or less to convey. I wrote the scripts for him in the end, but didn't examine why his process with the coder failed. He has asked me for help. However, I refuse to get involved, because taking his spec and translating it into clear requirements is 10x more work than executing on a clearly written spec. What is the right way for him to learn? Are there resources he could use? Are there ways he can learn from small, low-pressure practice projects with coders? [edit: Most of his scripts are statistical and data-processing oriented, e.g. take this column and run an average over it; remove these rows under these conditions. So the challenge is different from spec'ing a web app.]

    Read the article

  • Apple Giving Away $10,000 On The Occasion of 10 Billion App Store Downloads

    - by Gopinath
    Apple has got yet another reason to brag about its App Store success - nearly 10 billion apps downloaded around the world. To show this off, they put a big banner over here. To celebrate the occasion, Apple will give a $10,000 iTunes Gift Card to the person who downloads the 10 billionth app, whether free or paid. Rules and regulations of the contest are available here. Apple's App Store is a huge success, and it is at the centre of its smart-device ecosystem - iPhone, iPad and iPod. In April 2009 the store marked its 1 billionth download, in September 2009 its 2 billionth, and in January 2010 its 3 billionth. In less than a year the number of downloads saw threefold growth, and it is now all set to reach 10 billion.

    Read the article

  • How Facebook's Ad Bid System Works

    - by pnongrata
    When you are creating an ad on Facebook, you are provided with a "suggested bid" range (e.g., $0.90 - $2.15 USD). According to this page: "The suggested bid range is there to help you pick a maximum bid so your ad will be successful. It's based on how many other advertisers are competing to show their ad to the same audience as you are." I'm interested in understanding what's actually going on (technically) under the hood here. Say a user logs into Facebook. On the server side, the HTTP request that the user's browser sent (as part of the login) is handled, and the server needs to figure out which ad to display back to the user. I assume this is where the "bidding" system comes into play? Say that, based on this user's demographics and the audience targeting that several competing advertisers designed their campaigns with, Facebook sees a pool of 20 different ads it could return. How does this bidding system help Facebook determine which of the 20 ads it returns to the client side? I'm guessing that advertisers who "bid more" get prioritized over those who "bid less". But when does this bidding take place? How often does an advertiser need to re-bid? How long is a bid binding for? Once I understand these usage-related concepts behind ads, it will probably be obvious which of the following "selection strategies" the backend is using:

    - Round robin
    - Prioritized round robin
    - Randomized (doubtful)
    - History-based
    - MVP-based

    Thanks to anyone who can help point me in the right direction and explain what these suggested bid systems are and how they work.
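
    Purely as a thought experiment (this is my own hypothetical sketch and reflects nothing about Facebook's actual internals), the "bid more gets prioritized" idea could look like a bid-weighted scoring pass over the candidate pool:

        // Hypothetical ad candidate: maxBid in dollars, relevance in [0, 1].
        interface AdCandidate {
          id: string;
          maxBid: number;
          relevance: number;
        }

        // Score each candidate and return the winner. A real system would
        // also pace budgets, apply second-price charging, cap frequency, etc.
        function pickAd(candidates: AdCandidate[]): AdCandidate | undefined {
          return candidates
            .map(ad => ({ ad, score: ad.maxBid * ad.relevance }))
            .sort((a, b) => b.score - a.score)[0]?.ad;
        }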

    Read the article

  • Functional Methods on Collections

    - by GlenPeterson
    I'm learning Scala and am a little bewildered by all the methods (higher-order functions) available on the collections. Which ones produce more results than the original collection, which ones produce fewer, and which are most appropriate for a given problem? Though I'm studying Scala, I think this would pertain to most modern functional languages (Clojure, Haskell) and also to Java 8, which introduces these methods on Java collections. Specifically, right now I'm wondering about map with filter vs. fold/reduce. I was delighted to find that foldRight() can yield the same result as a map(...).filter(...) with only one traversal of the underlying collection (see the sketch below). But a friend pointed out that foldRight() may force sequential processing, while map() is friendlier to being processed by multiple processors in parallel. Maybe this is why mapReduce() is so popular? More generally, I'm still sometimes surprised when I chain several of these methods together and get back a List(List()), or pass a List(List()) and get back just a List(). For instance, when would I use: collection.map(a => a.map(b => ...)) vs. collection.map(a => ...).map(b => ...)? The for/yield construct does nothing to help this confusion. Am I asking about the difference between a "fold" and "unfold" operation? Am I trying to jam too many questions into one? I think there may be an underlying concept that, if I understood it, might answer all these questions, or at least tie the answers together.
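
    The map/filter vs. foldRight equivalence in question, as a small Scala sketch:

        val xs = List(1, 2, 3, 4, 5)

        // Two traversals: one for filter, one for map.
        val twoPass = xs.filter(_ % 2 == 0).map(_ * 10)

        // One traversal: foldRight builds the same list in a single pass.
        val onePass = xs.foldRight(List.empty[Int]) { (x, acc) =>
          if (x % 2 == 0) x * 10 :: acc else acc
        }

        assert(twoPass == onePass) // both are List(20, 40)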

    Read the article
