Search Results

Search found 341 results on 14 pages for 'debate'.

Page 12/14 | < Previous Page | 8 9 10 11 12 13 14  | Next Page >

  • Advice Please: SQL Server Identity vs Unique Identifier keys when using Entity Framework

    - by c.batt
    I'm in the process of designing a fairly complex system. One of our primary concerns is supporting SQL Server peer-to-peer replication. The idea is to support several geographically separated nodes. A secondary concern has been using a modern ORM in the middle tier. Our first choice has always been Entity Framework, mainly because the developers like to work with it. (They love the LINQ support.) So here's the problem: with peer-to-peer replication in mind, I settled on using uniqueidentifier with a default value of newsequentialid() for the primary key of every table. This seemed to provide a good balance between avoiding key collisions and reducing index fragmentation. However, it turns out that the current version of Entity Framework has a very strange limitation: if an entity's key column is a uniqueidentifier (GUID), it cannot be configured to use the default value (newsequentialid()) provided by the database. The application layer must generate the GUID and populate the key value. So here's the debate:
    1. Abandon Entity Framework and use another ORM: use NHibernate and give up LINQ support, or use linq2sql and give up future support (not to mention get bound to SQL Server on the DB side).
    2. Abandon GUIDs and go with another PK strategy.
    3. Devise a method to generate sequential GUIDs (COMBs?) at the application layer.
    I'm leaning towards option 1 with linq2sql (my developers really like linq2[stuff]) and option 3. That's mainly because I'm somewhat ignorant of alternate key strategies that support the replication scheme we're aiming for while also keeping things sane from a developer's perspective. Any insight or opinion would be greatly appreciated.
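
    The COMB idea in option 3 is just a GUID whose leading bytes come from a timestamp, so values generated independently on several nodes still sort (and index) roughly in creation order. As a rough illustration of the concept only -- sketched in Java rather than .NET, with an invented class name and bit layout, and without setting RFC 4122 version bits -- it might look like this:

        import java.security.SecureRandom;
        import java.util.UUID;

        // Hypothetical COMB-style generator: the top 48 bits carry a millisecond
        // timestamp so successive values share a non-decreasing prefix, while the
        // remaining bits stay random to keep cross-node collisions unlikely.
        public final class CombGuidSketch {
            private static final SecureRandom RANDOM = new SecureRandom();

            public static UUID newComb() {
                long millis = System.currentTimeMillis() & 0xFFFFFFFFFFFFL; // low 48 bits of the clock
                long msb = (millis << 16) | (RANDOM.nextInt() & 0xFFFFL);   // time-ordered half
                long lsb = RANDOM.nextLong();                               // fully random half
                return new UUID(msb, lsb);
            }

            public static void main(String[] args) {
                for (int i = 0; i < 3; i++) {
                    System.out.println(newComb()); // consecutive calls share a non-decreasing time prefix
                }
            }
        }

    The same layout can be produced in the application layer of any stack; the point is only that the key stays globally unique across nodes while remaining friendly to a clustered index.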

    Read the article

  • Appropriate Footwear for An Interview

    - by EoRaptor013
    There's a raging debate going on at my house about appropriate footwear for an IT interview. I have an interview on Thursday for a SQL/C# developer position with the fraud department at a large accounting firm. I was planning on wearing what I have pretty much always worn for an interview: a nice suit, white shirt, subdued tie, and a pair of dress cowboy boots. My spouse and daughter both know that my dress code for nearly every professional job I've ever gotten has been pretty much what I just described -- boots included. Now, however, I've been out of work for an unfortunately long time (my last contract ended 03/09, pretty much coinciding with the bottom falling out of the economy). My wife insists that style standards are fundamentally different on the left side of the Mississippi vs. the right side of the river. My view is that I've always worn "cowboy" boots, since I was old enough to fit into a real pair. I moved East, as an adult, over 30 years ago, and in all that time my dress patterns have never changed. Now I both really want, and really need, this job. But is that sufficient reason to change a habit 40 years in the making? I would really appreciate the thoughts y'all (a little West-of-the-Mississippi colloquialism, there) might have on this matter. Thanks. P.S. If this sort of question is inappropriate for this forum, I apologize.

    Read the article

  • I can learn either C or Java, which one should I choose first? Should I take them concurrently?

    - by GR1000
    I realize this is a subject of hot debate, but I'm interested in opinions that relate to my specific situation. I want to learn the basics and fundamentals of programming, so I'm already taking a college course in general programming concepts. It isn't covering a specific language, but it's giving me a solid foundation that I can build upon when I move on to a class that teaches a specific language. My two options for a specific language are Java and C because those are the two languages taught at the college I want to take classes from. What I want to do is learn a complex language so that I can apply that knowledge to languages that I use, or will eventually use, in my current job building web pages: XHTML, CSS, JavaScript, PHP, XML, ActionScript. I'm not necessarily interested in becoming a Java developer or a C developer in the immediate future, but I do have aspirations of developing web applications and iPod/iPhone applications. So, basically, I'm looking for answers to these questions and the reasoning behind them:
    - Do I take the introductory course in Java first, and then take the intro course in C, or do I take C first and then take Java?
    - Is there any reason not to take them concurrently?
    - Should I skip C altogether, since Java covers everything I need to know?
    EDIT: Thanks everyone for your thoughtful and insightful responses.

    Read the article

  • Rails Browser Detection Methods

    - by alvincrespo
    Hey everyone, I was wondering what methods are standard within the industry for doing browser detection in Rails. Is there a gem, library or sample code somewhere that can help determine the browser and apply a class or id to the body element of the (X)HTML? Thanks, I'm just wondering what everyone uses and whether there is an accepted method of doing this. I know that we can get the user agent and parse that string, but I'm not sure if that is an acceptable way to do browser detection. Also, I'm not trying to debate feature detection here; I've read multiple answers for that on StackOverflow. All I'm asking for is what you guys have done. [UPDATE] So thanks to faunzy on GitHub, I've sort of come to understand a bit about checking the user agent in Rails, but I'm still not sure if this is the best way to go about it in Rails 3. But here is what I've gotten so far:

        def users_browser
          user_agent = request.env['HTTP_USER_AGENT'].downcase
          @users_browser ||= begin
            if user_agent.index('msie') && !user_agent.index('opera') && !user_agent.index('webtv')
              'ie' + user_agent[user_agent.index('msie') + 5].chr
            elsif user_agent.index('gecko/')
              'gecko'
            elsif user_agent.index('opera')
              'opera'
            elsif user_agent.index('konqueror')
              'konqueror'
            elsif user_agent.index('ipod')
              'ipod'
            elsif user_agent.index('ipad')
              'ipad'
            elsif user_agent.index('iphone')
              'iphone'
            elsif user_agent.index('chrome/')
              'chrome'
            elsif user_agent.index('applewebkit/')
              'safari'
            elsif user_agent.index('googlebot/')
              'googlebot'
            elsif user_agent.index('msnbot')
              'msnbot'
            elsif user_agent.index('yahoo! slurp')
              'yahoobot'
            # Everything thinks it's mozilla, so this goes last
            elsif user_agent.index('mozilla/')
              'gecko'
            else
              'unknown'
            end
          end
          return @users_browser
        end

    Read the article

  • Where to give feedback on a new Feature in SO?

    - by justjoe
    After using SO for some time, I realize I have some problems. Zen masters fight each other on one of my questions; every side has its own arguments and everybody seems right. Frankly, this confuses me: how can I choose somebody's answer when I personally don't know the right answer? So I would like to propose an 'I chose this because' feature for the user who asked, so that he or she can at least explain why they chose that particular answer. Maybe it's silly, but as somebody who gets help from other people's answers, I would like to know that everybody gets what they deserve. Usually in this kind of situation, I just upvote every good answer and accept the one I think is right. Second feature: every hot question gets plenty of answers and comments, and if the people answering start to debate among themselves, it becomes a little hectic. Right now a page can only sort answers by oldest, newest or votes. So, is it possible to add a new sort based on a timeline that interleaves comments and answers? I believe it would be easier to read. Should this feature be visible only to the person who created the question, or also to the public?

    Read the article

  • Element Content Versus Attribute for Simple XML Value

    - by MB
    I know the elements versus attributes debate has come up many times here and elsewhere (e.g. here, here, here, here, and here), but I haven't seen much discussion of elements versus attributes for simple property values. So which of the following approaches do you think is better for storing a simple value? A: Value in Element Content:

        <TotalCount>553</TotalCount>
        <CelsiusTemperature>23.5</CelsiusTemperature>
        <SingleDayPeriod>2010-05-29</SingleDayPeriod>
        <ZipCodeLocation>12203</ZipCodeLocation>

    or B: Value in Attribute:

        <TotalCount value="553"/>
        <CelsiusTemperature value="23.5"/>
        <SingleDayPeriod day="2010-05-29"/>
        <ZipCodeLocation code="12203"/>

    I suspect that putting the value in the element content (A) might look a little more familiar to most folks (though I'm not sure about that). Putting the value in an attribute (B) might use fewer characters, but that depends on the length of the element and attribute names. Putting the value in an attribute (B) might be more extensible, because you could potentially include all sorts of extra information as nested elements; whereas, by putting the value inside the element content (A), you're restricting extensibility to adding more attributes. But then extensibility often isn't a concern for really simple properties - sometimes you know that you'll never need to add additional data. The bottom line might be that it simply doesn't matter, but it would still be great to hear some thoughts and see some votes for the two options.
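
    Either shape is equally easy to consume with a standard parser, which supports the "it may simply not matter" conclusion. A small sketch using Java's built-in DOM API (the class and variable names, and the throws-everything error handling, are just for brevity here):

        import java.io.StringReader;
        import javax.xml.parsers.DocumentBuilder;
        import javax.xml.parsers.DocumentBuilderFactory;
        import org.w3c.dom.Document;
        import org.xml.sax.InputSource;

        public class ElementVsAttribute {
            public static void main(String[] args) throws Exception {
                DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();

                // A: the value lives in the element content
                Document a = builder.parse(new InputSource(new StringReader("<TotalCount>553</TotalCount>")));
                int countA = Integer.parseInt(a.getDocumentElement().getTextContent().trim());

                // B: the value lives in an attribute
                Document b = builder.parse(new InputSource(new StringReader("<TotalCount value=\"553\"/>")));
                int countB = Integer.parseInt(b.getDocumentElement().getAttribute("value"));

                System.out.println(countA + " " + countB); // prints: 553 553
            }
        }

    XPath and schema validation handle both forms as well, so readability and extensibility really are the deciding factors.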

    Read the article

  • Breaking dependencies when you can't make changes to other files?

    - by codemuncher
    I'm doing some stealth agile development on a project. The lead programmer sees unit testing, refactoring, etc. as a waste of resources, and there is no way to convince him otherwise. His philosophy is "if it ain't broke, don't fix it", and I understand his point of view. He's been working on the project for over a decade and knows the code inside and out. I'm not looking to debate development practices. I'm new to the project and I've been tasked with adding a new feature. I've worked on legacy projects before and used agile development practices with good results, but those teams were more receptive to the idea and weren't afraid of making changes to code. I've been told I can use whatever development methodology I want, but I have to limit my changes to only those necessary to add the feature. I'm using TDD for the new classes I'm writing, but I keep running into roadblocks caused by the liberal use of global variables and the high coupling in the classes I need to interact with. Normally I'd start extracting interfaces for these classes and make their dependence on the global variables explicit by injecting them as constructor arguments or public properties. I could argue that the changes are necessary, but considering the lead never had to make them, I doubt he would see it my way. What techniques can I use to break these dependencies without ruffling the lead developer's feathers? I've made some headway using:
    - Extract Interface (for the new classes I'm creating)
    - Extend and override the wayward classes with test stubs (luckily most methods are public virtual) -- sketched below
    But these two can only get me so far.
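
    For anyone unfamiliar with the second technique, "extend and override" just means deriving a testing subclass that replaces the members touching globals or external resources, leaving the logic under test untouched. A minimal sketch of the pattern, written in Java here (where instance methods are overridable by default, unlike the C# "public virtual" case in the question) and with invented class and member names:

        // Production class that is hard to test because it reaches into shared state.
        class InvoicePoster {
            protected int currentTaxRate() {
                return GlobalConfig.TAX_RATE;       // the hidden dependency on a global
            }

            int totalWithTax(int subtotal) {
                return subtotal + subtotal * currentTaxRate() / 100;
            }
        }

        // Stand-in for the project's real global state.
        class GlobalConfig {
            static int TAX_RATE = 20;
        }

        // Testing subclass overrides only the wayward member; production code is untouched.
        class TestableInvoicePoster extends InvoicePoster {
            @Override
            protected int currentTaxRate() {
                return 10;                          // deterministic value for the test
            }
        }

        public class SubclassAndOverrideSketch {
            public static void main(String[] args) {
                InvoicePoster poster = new TestableInvoicePoster();
                System.out.println(poster.totalWithTax(100)); // 110, independent of GlobalConfig
            }
        }

    The seam here is the overridable method; Extract Interface gives the same isolation but requires touching more of the existing code, which is exactly what the lead wants to avoid.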

    Read the article

  • How to inject php code from database into php script ?

    - by luxquarta
    I want to store PHP code inside my database and then use it in my script.

        class A {
            public function getName() {
                return "lux";
            }
        }

        // instantiates a new A
        $a = new A();

    Inside my database there is data like "hello {$a->getName()}, how are you ?". In my PHP code I load the data into a variable $string:

        $string = load_data_from_db();
        echo $string; // echoes hello {$a->getName()}, how are you ?

    So now $string contains "hello {$a->getName()}, how are you ?", with {$a->getName()} still uninterpreted. Question: I can't find how to write the rest of the code so that {$a->getName()} gets interpreted and the result becomes "hello lux, how are you ?". Can someone help?

        $new_string = ??????;
        echo $new_string; // echoes hello lux, how are you ?

    Is there a solution with eval()? (Please, no debate about evil eval ;)) Or any other solution?

    Read the article

  • Should try...catch go inside or outside a loop?

    - by mmyers
    I have a loop that looks something like this:

        for (int i = 0; i < max; i++) {
            String myString = ...;
            float myNum = Float.parseFloat(myString);
            myFloats[i] = myNum;
        }

    This is the main content of a method whose sole purpose is to return the array of floats. I want this method to return null if there is an error, so I put the loop inside a try...catch block, like this:

        try {
            for (int i = 0; i < max; i++) {
                String myString = ...;
                float myNum = Float.parseFloat(myString);
                myFloats[i] = myNum;
            }
        } catch (NumberFormatException ex) {
            return null;
        }

    But then I also thought of putting the try...catch block inside the loop, like this (note that myNum has to be declared before the try so it is still in scope afterwards):

        for (int i = 0; i < max; i++) {
            String myString = ...;
            float myNum;
            try {
                myNum = Float.parseFloat(myString);
            } catch (NumberFormatException ex) {
                return null;
            }
            myFloats[i] = myNum;
        }

    So my question is: is there any reason, performance or otherwise, to prefer one over the other? EDIT: The consensus seems to be that it is cleaner to put the loop inside the try/catch, possibly inside its own method. However, there is still debate on which is faster. Can someone test this and come back with a unified answer? (EDIT: did it myself, but voted up Jeffrey and Ray's answers)
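
    For anyone who wants to actually measure the two variants, as the edit asks, here is a rough harness to start from. It is a naive System.nanoTime sketch rather than a proper JMH benchmark, all inputs are valid so the exception path is never taken, and the class and method names are invented; treat any numbers it prints with suspicion.

        import java.util.Random;

        public class TryCatchLoopTiming {
            static final int MAX = 1_000_000;
            static final String[] inputs = new String[MAX];

            public static void main(String[] args) {
                Random rnd = new Random(42);
                for (int i = 0; i < MAX; i++) {
                    inputs[i] = Float.toString(rnd.nextFloat()); // all parseable, so no exception is thrown
                }
                for (int round = 0; round < 5; round++) {        // repeat so the JIT gets a chance to warm up
                    time("try outside loop", TryCatchLoopTiming::tryOutside);
                    time("try inside loop ", TryCatchLoopTiming::tryInside);
                }
            }

            static void time(String label, Runnable body) {
                long start = System.nanoTime();
                body.run();
                System.out.printf("%s: %.1f ms%n", label, (System.nanoTime() - start) / 1e6);
            }

            static void tryOutside() {
                float[] out = new float[MAX];
                try {
                    for (int i = 0; i < MAX; i++) {
                        out[i] = Float.parseFloat(inputs[i]);
                    }
                } catch (NumberFormatException ex) {
                    // the real method would return null here
                }
            }

            static void tryInside() {
                float[] out = new float[MAX];
                for (int i = 0; i < MAX; i++) {
                    try {
                        out[i] = Float.parseFloat(inputs[i]);
                    } catch (NumberFormatException ex) {
                        // the real method would return null here
                    }
                }
            }
        }

    On a modern JVM, a try block costs essentially nothing when no exception is thrown, so any difference tends to be noise -- which matches the thread's consensus of choosing whichever version reads more clearly.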

    Read the article

  • Do vs. Run vs. Execute vs. Perform verbs

    - by coffeeaddict
    Before anyone goes nuts and red-flags this post as "subjective" (which drives me absolutely nuts, because everyone has their own intent when posting something others feel is subjective -- subjective is subjective to each person, how about that!), let me tell you a couple of things so that this post does not get flagged by flag-happy moderators:
    1) There are community guidelines on specific keywords recommended by certain organizations or people (e.g. Microsoft, Lance Hunt, etc.).
    2) I want to know what others are using the most, and why they feel this verb reads better than others.
    3) Books even talk about this verb issue (Uncle Bob, etc.), so it's not subjective.
    Now to my actual question:
    a) What list of verbs are you using for method names? What's your personal or team standard?
    b) I debate whether to use Do vs. Run vs. Execute vs. Perform, and am wondering if any of these are no longer recommended, or if some are ones people just don't really use and I should scratch them. Basically, any one of those verbs means the same thing: invoke some process (a method call). This is outside of CRUD. For example, ExecutePayPalWorkflow(); could also be any one of these names instead: DoPayPalWorkflow(); RunPayPalWorkflow(); PerformPayPalWorkflow(); or does it not really matter, because any of those verbs is understandable and the "what" that shows your intent comes from the words that follow it, "PayPalWorkflow"? This discussion can go for any language. I just put the two main tags C# and Java here, which is good enough for me to get some solid answers or experiences.

    Read the article

  • What should a hobbyist do to develop good programming skills after basics?

    - by thyrgle
    So I'll say right here that I'm no professional coder. I'm a hobbyist. And pretty much like other people, I feel like I'm doing it wrong. Like the asker of the question "A feeling that I'm not a good programmer", I have begun to feel like that. Now, I know they say you shouldn't worry and that you're good even if you continuously doubt yourself. But they are talking to him, and I'm not like him (in the sense that I'm more of a newbie): I've been coding as a hobbyist for 3 years (3 hobbyist years, mind you!), unlike the 10-11 years he states. Also, the only thing I've probably read in depth is Teach Yourself C++ in 21 Days. And before I continue, just so you're not confused about the various questions I've posted on (mostly) iPhone and OpenGL: I have poked and prodded at those two things for a few months each and finally sort of got the hang of both of them. But from what I've noticed, I suck at writing good code. For me it's not even a debate about whether I'm doing it wrong or not: I can tell (from the various spaghetti code I create and other discrepancies I, and others, can see and have noted in my code). What is a good way to get rid of these awful habits of mine and do it in a more correct way, or, if there is no "correct" way, then the "typical" way?

    Read the article

  • Performing measures within the execution of a c++ code every t milliseconds

    - by user506901
    Given a while loop and the function ordering as follows:

        int k = 0;
        int total = 100;
        while (k < total) {
            doSomething();
            if (/* approx. t milliseconds elapsed */) {
                measure();
            }
            ++k;
        }

    I want to perform 'measure' every t milliseconds. However, since 'doSomething' can finish close to the t-th millisecond after the last execution, it is acceptable to perform the measure after approximately t milliseconds have elapsed since the last measure. My question is: how could this be achieved? One solution would be to set a timer to zero and check it after every 'doSomething'; when it is within the acceptable range, I perform the measure and reset the timer. However, I'm not sure which C++ function I should use for such a task. As far as I can see, there are several candidate functions, but the debate on which one is the most appropriate is outside of my understanding. Note that some of the functions actually take into account the time taken by other processes, but I want my timer to only measure the time spent executing my C++ code (I hope that is clear). Another thing is the resolution of the measurements, as pointed out below. Suppose the medium option of those suggested.

    Read the article

  • Is there a quality, file-size, or other benefit to JPEG sizes being multiples of 8px or 16px?

    - by davebug
    The JPEG compression encoding process splits a given image into blocks of 8x8 pixels, working with these blocks in subsequent lossy and lossless compression steps. [source] It is also mentioned that if the image dimensions are a multiple of 1 MCU block (defined as a Minimum Coded Unit, 'usually 16 pixels in both directions'), lossless alterations to a JPEG can be performed. [source] I am working with product images and would like to know both whether, and how much, benefit can be derived from using multiples of 16 in my final image size (say, using an image sized 480px by 360px) vs. a non-multiple of 16 (such as 484x362). In this example I am not interested in further alterations, editing, or recompression of the final image. To try to get closer to a specific answer, where I know the reality must be largely generalities: given a 480x360 image that is 64k and saved at maximum quality in Photoshop [example]:
    - Can I expect any quality loss from an image that is 484x362?
    - What amount of file size addition can I expect (for this example, the additional space would be white pixels)?
    - Are there any other disadvantages to growing larger than the 8px grid?
    I know it's arbitrary to use that specific example, but it would still be helpful (for me and potentially any others pondering an image size) to understand what level of compromise I'd be dealing with in breaking the 8px grid. The key issue here is a debate I've had about whether 8-pixel-divisible images are higher quality than images that are not divisible by 8 pixels.
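
    For a sense of scale, the encoder pads each dimension up to the next block or MCU boundary internally, so the "extra" area is small either way. A back-of-the-envelope sketch (plain arithmetic, written in Java; it says nothing about quality or final file size, only about how much padding the block grid implies):

        public class McuPadding {
            // Round a dimension up to the next multiple of the block/MCU size.
            static int padTo(int size, int block) {
                return ((size + block - 1) / block) * block;
            }

            public static void main(String[] args) {
                int[][] candidates = { { 480, 360 }, { 484, 362 } };
                for (int[] wh : candidates) {
                    int w = padTo(wh[0], 16);
                    int h = padTo(wh[1], 16);
                    System.out.printf("%dx%d -> padded to %dx%d (%d extra columns, %d extra rows)%n",
                            wh[0], wh[1], w, h, w - wh[0], h - wh[1]);
                }
            }
        }

    One thing this arithmetic highlights: 480x360 is aligned to the 8px block grid but not to a 16px MCU in the vertical direction, which is worth double-checking if the goal is lossless cropping or rotation.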

    Read the article

  • What's your take on this Javascript thingy?

    - by Nischal
    We've been having a discussion at our workplace on this, with some for and some against the behavior, and wanted to hear views from you guys:

        <html>
        <body>
        <div>
        Test!
        <script>
        document.body.removeChild(document.getElementsByTagName('div')[0]);
        </script>
        </div>
        </body>
        </html>

    Should the above script work and do what it's supposed to do? First, let's see what's happening here: I have a JavaScript snippet inside the <div> element. This script deletes the child node of body which happens to hold the div inside which the script itself exists. Now, the above script works fine in Firefox, Opera and IE8, but IE6 and IE7 give an alert saying they cannot open the page. Let's not debate how IE should have handled this (they've accepted it as a bug and fixed it in IE8). The point here is: since the 'SCRIPT' tag itself is a part of the DOM, should it be allowed to do something like this? Should it even exist after such an operation?

    Read the article

  • Sequence Point and Evaluation Order( Preincrement)

    - by Josh
    There was a debate today among some of my colleagues and I wanted to clarify it. It is about evaluation order and sequence points in an expression. It is clearly stated in the standard that C/C++ does not have left-to-right evaluation within an expression, unlike languages like Java, which guarantees a sequential left-to-right order. So, in the expression below, the evaluation of the leftmost operand (B) in the binary operation is sequenced before the evaluation of the rightmost operand (C):

        A = B B_OP C

    The following expression, according to cppreference under the subsection Sequenced-before rules (Undefined Behaviour) and Bjarne's TCPPL 3rd ed., is UB:

        x = x++ + 1;

    It can be interpreted however the compiler likes, BUT the expression below is said to be clearly well-defined behaviour in C++11:

        x = ++x + 1;

    So, if the above expression is well defined, what is the "fate" of this?

        array[x] = ++x;

    It seems the evaluation of post-increment and post-decrement is not defined, but pre-increment and pre-decrement are defined. NOTE: This is not used in real-life code. Clang 3.4 and GCC 4.8 clearly warn about the sequence point issue for both the pre- and post-increment forms.

    Read the article

  • Identity alternative for SQL Azure Federation : are Azure Queues or Service Bus Queues a good choice?

    - by JYL
    Like many developers, I'm looking for a way to integrate my existing app with SQL Azure Federations, and replacing the identity columns (the primary keys of my tables) is a big problem. For many reasons, I do NOT want to use GUIDs for my primary keys (please don't open the GUID-or-not debate, it's not my question: I just don't want a GUID, period). So I need to build a key provider to replace the "identity" feature of a standard SQL database. I'm using Entity Framework, so I can easily find one place to set the Id value just before the insert (by overriding the SaveChanges method of my ObjectContext class). I just need to find a "not too complicated" implementation for getting the current Id, which is "farm-ready". I've read the SO post "ID Generation for Sharded Database (Azure Federated Database)" and "Synchronizing Multiple Nodes in Windows Azure" from MSDN Magazine, but those solutions sound a bit complicated to me. I'm thinking about creating (automatically) one Azure queue for each SQL table, each pre-loaded with a list of consecutive integers. When I want an Id value, I just have to get a message from the queue (which becomes invisible and is deleted on the way), which gives me the next available Id. About the choice between "Windows Azure Queues" and "Windows Azure Service Bus Queues", I prefer "Windows Azure Queues", due to the "high" latency of Service Bus Queues. I don't think the lack of an ordering guarantee with Azure Queues is a problem. What do you think about this idea of using Azure Queues to provide Id values? Do you see any argument to give up on the idea? Do you have a better idea, or even a good practice, for providing integer ids in SQL Azure Federation databases? Thanks.
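
    Whatever the transport (a pre-loaded queue, a central counter table, or something else), the underlying pattern is usually the same: each node reserves a block of consecutive ids and hands them out locally, going back to the shared source only when the block runs out. A storage-agnostic sketch of that idea in Java -- the AtomicLong here is only a stand-in for the shared queue or table, and all names are invented:

        import java.util.concurrent.atomic.AtomicLong;

        // Conceptual block allocator: reserve a range of ids up front, hand them out
        // locally, and return to the shared source only when the range is exhausted.
        public class BlockIdAllocator {
            private static final AtomicLong SHARED_SOURCE = new AtomicLong(1); // stand-in for the shared store

            private final int blockSize;
            private long next;   // next id to hand out
            private long limit;  // first id past the current block

            public BlockIdAllocator(int blockSize) {
                this.blockSize = blockSize;
            }

            public synchronized long nextId() {
                if (next >= limit) {                                  // block exhausted: reserve a new one
                    next = SHARED_SOURCE.getAndAdd(blockSize);
                    limit = next + blockSize;
                }
                return next++;
            }

            public static void main(String[] args) {
                BlockIdAllocator node = new BlockIdAllocator(100);
                for (int i = 0; i < 3; i++) {
                    System.out.println(node.nextId());                // 1, 2, 3 ... without touching the store again
                }
            }
        }

    The block size trades id gaps (blocks lost when a node restarts) against how often nodes must hit the shared store -- essentially the same trade-off as deciding how many integers to pre-load into each queue.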

    Read the article

  • Code casing question for private class fields

    - by user200295
    Take the following example:

        public class Class1 {
            public string Prop1 {
                get { return m_Prop1; }
                set { m_Prop1 = value; }
            }

            // this is the standard private property variable name
            private string m_Prop1;

            // how do we cap this variable name? While the compiler can figure out same casing,
            // it makes it hard to read
            private Class2 Class2;

            // we camel case the parameter
            public Class1(Class2 class2) {
                this.Class2 = class2;
            }
        }

    Here are my stock rules:
    - The class name is capitalized (Class1).
    - The public properties are capitalized (Prop1).
    - The private field tied to a public property has m_ to indicate this. My coworker prefers _. There is some debate over whether m_ or _ should be used at all, as it is like Hungarian notation.
    - Private class fields are capitalized.
    The part I am trying to figure out is what to do when the class name of a private field matches the private field name. For example, private Class2 Class2; is confusing. If the private field's type is not named the same, for example private string Name;, there isn't much issue. Or am I thinking about the issue wrong? Should my classes and private fields be named in such a way that they don't collide?

    Read the article

  • Should a "script" tag be allowed to remove itself?

    - by Nischal
    We've been having a discussion at our workplace on this, with some for and some against the behavior, and wanted to hear views from you guys:

        <html>
        <body>
        <div>
        Test!
        <script>
        document.body.removeChild(document.getElementsByTagName('div')[0]);
        </script>
        </div>
        </body>
        </html>

    Should the above script work and do what it's supposed to do? First, let's see what's happening here: I have a JavaScript snippet inside the <div> element. This script deletes the child node of body which happens to hold the div inside which the script itself exists. Now, the above script works fine in Firefox, Opera and IE8, but IE6 and IE7 give an alert saying they cannot open the page. Let's not debate how IE should have handled this (they've accepted it as a bug and fixed it in IE8). The point here is: since the 'SCRIPT' tag itself is a part of the DOM, should it be allowed to do something like this? Should it even exist after such an operation? Edit: Firefox, Opera, IE9 etc. do not remove the 'script' tag if I run the above code. But document.getElementsByTagName('script').length returns 0! To understand what I mean, add alert(document.getElementsByTagName('script').length); before and after document.body.removeChild(document.getElementsByTagName('div')[0]); in the code above.

    Read the article

  • Is dual-booting an OS more or less secure than running a virtual machine?

    - by Mark
    I run two operating systems on two separate disk partitions on the same physical machine (a modern MacBook Pro). In order to isolate them from each other, I've taken the following steps: Configured /etc/fstab with ro,noauto (read-only, no auto-mount) Fully encrypted each partition with a separate encryption key (committed to memory) Let's assume that a virus infects my first partition unbeknownst to me. I log out of the first partition (which encrypts the volume), and then turn off the machine to clear the RAM. I then un-encrypt and boot into the second partition. Can I be reasonably confident that the virus has not / cannot infect both partitions, or am I playing with fire here? I realize that MBPs don't ship with a TPM, so a boot-loader infection going unnoticed is still a theoretical possibility. However, this risk seems about equal to the risk of the VMWare/VirtualBox Hypervisor being exploited when running a guest OS, especially since the MBP line uses UEFI instead of BIOS. This leads to my question: is the dual-partitioning approach outlined above more or less secure than using a Virtual Machine for isolation of services? Would that change if my computer had a TPM installed? Background: Note that I am of course taking all the usual additional precautions, such as checking for OS software updates daily, not logging in as an Admin user unless absolutely necessary, running real-time antivirus programs on both partitions, running a host-based firewall, monitoring outgoing network connections, etc. My question is really a public check to see if I'm overlooking anything here and try to figure out if my dual-boot scheme actually is more secure than the Virtual Machine route. Most importantly, I'm just looking to learn more about security issues. EDIT #1: As pointed out in the comments, the scenario is a bit on the paranoid side for my particular use-case. But think about people who may be in corporate or government settings and are considering using a Virtual Machine to run services or applications that are considered "high risk". Are they better off using a VM or a dual-boot scenario as I outlined? An answer that effectively weighs any pros/cons to that trade-off is what I'm really looking for in an answer to this post. EDIT #2: This question was partially fueled by debate about whether a Virtual Machine actually protects a host OS at all. Personally, I think it does, but consider this quote from Theo de Raadt on the OpenBSD mailing list: x86 virtualization is about basically placing another nearly full kernel, full of new bugs, on top of a nasty x86 architecture which barely has correct page protection. Then running your operating system on the other side of this brand new pile of shit. You are absolutely deluded, if not stupid, if you think that a worldwide collection of software engineers who can't write operating systems or applications without security holes, can then turn around and suddenly write virtualization layers without security holes. -http://kerneltrap.org/OpenBSD/Virtualization_Security By quoting Theo's argument, I'm not endorsing it. I'm simply pointing out that there are multiple perspectives here, so I'm trying to find out more about the issue.

    Read the article

  • Does a router have a receiving range?

    - by Aadit M Shah
    So my dad bought a TP-Link router (Model No. TL-WA7510N) which apparently has a transmitting range of 1km, and he believes that it also has a receiving range of 1km. So he's arguing with me that the router (which is a transceiver) can communicate with any device within a range of 1km, whether or not that device has a transmitting range of 1km. To put it graphically:

        +----+                    1km                     +----+
        |    |------------------------------------------->|    |
        | TR |                                            | TR |
        |    |                                       <----|    |
        +----+                                       100m +----+

    So here's the problem: the two devices are 1km apart. The first device has a transmitting range of 1km. The second device only has a transmitting range of 100m. According to my dad the two devices can talk to each other. He says that the first device has a transmitting and a receiving range of 1km, which means that it can both send data to devices 1km away and receive data from devices 1km away. To me this makes no sense: if the second device can only send data to devices 100m away, then how can the first device catch the transmission? He further argues that for bidirectional communication both the sender and the receiver should have overlapping areas of transmission: according to him, if two devices have an overlapping area of transmission then they can communicate. Here neither device has enough transmission power to reach the other. However, they have enough receiving power to capture the transmission. Obviously this makes absolutely no sense to me. How can a device sense a transmission which hasn't even reached it yet, then go out, capture it and bring it back? To me a transceiver only has transmission power; it has zero "receiving power". Hence, for two devices to be able to communicate bidirectionally, the diagram should look different: from my point of view, both devices should have a transmission range far enough to reach the other for bidirectional communication to be possible. But no matter how much I try to explain this to my dad, he adamantly disagrees. So, to put an end to this debate once and for all, who is correct? Is there even such a thing as a receiving range? Can a device fetch a transmission that would otherwise never reach it? I would like a canonical answer on this.

    Read the article

  • How to enjoy DVD on Apple iPad

    - by user44251
    I believe many people spent a sleepless night yesterday waiting for the new Apple Tablet to arrive; just a few days ago, or perhaps longer, I noticed fierce debate about it: its name, size, capacity, processor, main features, price, etc. Now they can take a long breath, with the new Apple Tablet, named iPad, officially released on 28 January 2010 (Beijing time). But I know a new battle is just beginning. iPad sounds somewhat like iPod, and it really does share some similarities in terms of design: smart, light and portable. It has a 9.7-inch, LED-backlit IPS display with a remarkably precise Multi-Touch screen. And yet, at just 1.5 lbs and 0.5 inches thin, it's easy to carry and use everywhere. It can greatly facilitate your experience with the web, email, photos and videos. Right now, it can run almost 140,000 of the apps on the Apple store. It can even run the apps you have downloaded for your iPhone or iPod touch. But so far, I haven't seen any possibility that it can work with DVDs; probably there is no built-in DVD-ROM or DVD player which can play a DVD directly. As Apple states, the video formats the iPad supports are MPEG-4 (MP4, M4V), H.264, MOV etc., and the audio formats accepted are AAC, Protected AAC, MP3, AIFF and WAV etc.; those are formats that are commonly used with the iMac. This could really be a hard nut to crack if you want to watch your favourite DVD on this magic Apple iPad. But don't worry, there is still a way out; you just need a few steps for ripping and importing DVD movies to the Apple iPad with a simple application, a DVD to iPad converter. What's in DVD to iPad Converter for Mac: DVD to iPad Converter for Mac is a powerful and professional application designed for the newly released Apple iPad which can rip and convert your DVD contents to Apple iPad compatible MPEG-4 (MP4, M4V), H.264, MOV etc.; other popular file formats like AVI, WMV, MPG, MKV, VOB, 3GP, FLV etc. can also be converted, so that you can put them on portable devices like the iPod, iPhone, iRiver, BlackBerry etc. Besides, it can also extract audio from DVD videos and save it as MP3, AIFF, AAC, WAV etc. The Mac DVD to iPad converter has also been enhanced to run on both PowerPC and Intel (Snow Leopard included). It offers versatile editing features which allow you to make your own DVD videos. For example, you can cut your DVD to whatever length you like with Trim, crop off unwanted parts from DVD clips with Crop, and add special effects like Gray, Emboss and Old Film to make your videos more artistic. Besides, its built-in merging feature and batch mode allow you to join several DVD clips into a single one and do batch conversion. And more features can be expected if you can afford a few minutes to try it.

    Read the article

  • Commercial Drupal Modules & Themes

    - by Ravish
    A discussion at the Drupal.org forums prompted me to give my input on the commercial ecosystem around open source content management systems. WordPress and Joomla have been growing rapidly over the past few years, but the growth rate of Drupal seems to be almost flat. Despite being the most powerful CMS around, Drupal is still not being adopted by the masses. Many people will argue that Drupal is not targeted at the masses but at developers. I agree: Drupal is more of a development platform than a consumer CMS. Drupal is ‘many things to many people’, and I can build almost any type of website with it. Drupal is being used for building blogs, corporate websites, intranet portals, social networking sites and even a project management system. Looking at the wide array of Drupal implementations, it deserves to be the most widely adopted CMS. I believe there are a few challenges that the Drupal community needs to overcome. To understand these challenges, I surveyed some webmasters who use Joomla or WordPress but not Drupal. I asked them why they don’t want to use Drupal; the following are the responses I got from them:
    - Drupal is too complicated, takes time to learn.
    - Drupal is great, but its admin panel is overwhelming.
    - I couldn’t find any nice themes for Drupal.
    - There is no WYSIWYG editor in Drupal.
    - Most Drupal modules do not work out of the box. There aren’t enough modules like Ubercart which provide out-of-the-box functionality.
    - I tried modules like CCK, Views and Panels. After wasting several hours struggling with them, I decided to give up on Drupal.
    - I don’t use Drupal because of the Pushbutton and Garland themes. I had a hard time trying to customize Garland and it messed up the whole layout.
    - There are no premium modules and themes for Drupal. Joomla has tons of awesome themes and modules.
    - I don’t want a million hacks like CCK, Views, Tokens, Pathauto, ImageCache and CTools just to run a simple website.
    Most of the complaints from users are related to the learning and development curve involved with Drupal, and the lack of an ecosystem. While most of the problems will be gone in Drupal 7, the ecosystem is something that needs to be built by the Drupal community. Drupal distributions are a great step forward. There are a few awesome Drupal distributions available, like Open Publish, Open Atrium and Drupal Commons. I predict there will be a wave of powerful Drupal distributions after the Drupal 7 release. Many of them will be user-friendly and commercially supported. Following is my post at the Drupal.org forums: Quote from: http://drupal.org/node/863776#comment-3313836 Brian Gardner (StudioPress) and Woo Themes launched premium WordPress themes in 2007; the developer community did not accept it at first. Moreover, they were not even GPL licensed. There was an outcry in the WordPress community against them. Following that, most premium theme providers switched to GPL licensing. Despite the controversies, users voted for premium themes and plugins by buying them. Inspired by their success, hundreds of other developers started to sell premium themes and plugins. It is now the accepted, and in fact most popular, business model in the WordPress community. Matt Mullenweg once told me they would not support premium themes; if he did, developers would no longer give out free GPL themes & plugins. He pointed me towards Joomla, where there were hardly any nice free themes & modules available.
    Now, two years forward, premium products are not just accepted but embraced by the WordPress community – http://wordpress.org/extend/themes/commercial/ The quality and number of themes & modules have increased, even the free ones. This has also helped to boost the adoption and ecosystem of WordPress. Today, the state of Drupal is like WordPress was in 2007. There are hardly any out-of-the-box solutions available for Drupal; Ubercart, Open Publish and Open Atrium are the only ones I can think of. Many of the popular Drupal modules are patches and hole-fillers. Thankfully, these hole-filler modules are going to be in Drupal 7 core. Drupal 7 and distributions will spawn a new array of solutions built upon Drupal. Soon, we will have more Ubercarts and Open Atriums. If commercial solutions can help fuel this ecosystem and growth, the Drupal community will accept them eventually. This debate will not stop your customers from buying your product. If your product is awesome, they will vote for you by buying it.

    Read the article

  • Performance considerations for common SQL queries

    - by Jim Giercyk
    Originally posted on: http://geekswithblogs.net/NibblesAndBits/archive/2013/10/16/performance-considerations-for-common-sql-queries.aspxSQL offers many different methods to produce the same results.  There is a never-ending debate between SQL developers as to the “best way” or the “most efficient way” to render a result set.  Sometimes these disputes even come to blows….well, I am a lover, not a fighter, so I decided to collect some data that will prove which way is the best and most efficient.  For the queries below, I downloaded the test database from SQLSkills:  http://www.sqlskills.com/sql-server-resources/sql-server-demos/.  There isn’t a lot of data, but enough to prove my point: dbo.member has 10,000 records, and dbo.payment has 15,554.  Our result set contains 6,706 records. The following queries produce an identical result set; the result set contains aggregate payment information for each member who has made more than 1 payment from the dbo.payment table and the first and last name of the member from the dbo.member table.   /*************/ /* Sub Query  */ /*************/ SELECT  a.[Member Number] ,         m.lastname ,         m.firstname ,         a.[Number Of Payments] ,         a.[Average Payment] ,         a.[Total Paid] FROM    ( SELECT    member_no 'Member Number' ,                     AVG(payment_amt) 'Average Payment' ,                     SUM(payment_amt) 'Total Paid' ,                     COUNT(Payment_No) 'Number Of Payments'           FROM      dbo.payment           GROUP BY  member_no           HAVING    COUNT(Payment_No) > 1         ) a         JOIN dbo.member m ON a.[Member Number] = m.member_no         /***************/ /* Cross Apply  */ /***************/ SELECT  ca.[Member Number] ,         m.lastname ,         m.firstname ,         ca.[Number Of Payments] ,         ca.[Average Payment] ,         ca.[Total Paid] FROM    dbo.member m         CROSS APPLY ( SELECT    member_no 'Member Number' ,                                 AVG(payment_amt) 'Average Payment' ,                                 SUM(payment_amt) 'Total Paid' ,                                 COUNT(Payment_No) 'Number Of Payments'                       FROM      dbo.payment                       WHERE     member_no = m.member_no                       GROUP BY  member_no                       HAVING    COUNT(Payment_No) > 1                     ) ca /********/                    /* CTEs  */ /********/ ; WITH    Payments           AS ( SELECT   member_no 'Member Number' ,                         AVG(payment_amt) 'Average Payment' ,                         SUM(payment_amt) 'Total Paid' ,                         COUNT(Payment_No) 'Number Of Payments'                FROM     dbo.payment                GROUP BY member_no                HAVING   COUNT(Payment_No) > 1              ),         MemberInfo           AS ( SELECT   p.[Member Number] ,                         m.lastname ,                         m.firstname ,                         p.[Number Of Payments] ,                         p.[Average Payment] ,                         p.[Total Paid]                FROM     dbo.member m                         JOIN Payments p ON m.member_no = p.[Member Number]              )     SELECT  *     FROM    MemberInfo /************************/ /* SELECT with Grouping   */ /************************/ SELECT  p.member_no 'Member Number' ,         m.lastname ,         m.firstname ,         COUNT(Payment_No) 'Number Of Payments' ,         AVG(payment_amt) 'Average Payment' ,         SUM(payment_amt) 'Total Paid' 
FROM    dbo.payment p         JOIN dbo.member m ON m.member_no = p.member_no GROUP BY p.member_no ,         m.lastname ,         m.firstname HAVING  COUNT(Payment_No) > 1   We can see what is going on in SQL’s brain by looking at the execution plan.  The Execution Plan will demonstrate which steps and in what order SQL executes those steps, and what percentage of batch time each query takes.  SO….if I execute all 4 of these queries in a single batch, I will get an idea of the relative time SQL takes to execute them, and how it renders the Execution Plan.  We can settle this once and for all.  Here is what SQL did with these queries:   Not only did the queries take the same amount of time to execute, SQL generated the same Execution Plan for each of them.  Everybody is right…..I guess we can all finally go to lunch together!  But wait a second, I may not be a fighter, but I AM an instigator.     Let’s see how a table variable stacks up.  Here is the code I executed: /********************/ /*  Table Variable  */ /********************/ DECLARE @AggregateTable TABLE     (       member_no INT ,       AveragePayment MONEY ,       TotalPaid MONEY ,       NumberOfPayments MONEY     ) INSERT  @AggregateTable         SELECT  member_no 'Member Number' ,                 AVG(payment_amt) 'Average Payment' ,                 SUM(payment_amt) 'Total Paid' ,                 COUNT(Payment_No) 'Number Of Payments'         FROM    dbo.payment         GROUP BY member_no         HAVING  COUNT(Payment_No) > 1   SELECT  at.member_no 'Member Number' ,         m.lastname ,         m.firstname ,         at.NumberOfPayments 'Number Of Payments' ,         at.AveragePayment 'Average Payment' ,         at.TotalPaid 'Total Paid' FROM    @AggregateTable at         JOIN dbo.member m ON m.member_no = at.member_no In the interest of keeping things in groupings of 4, I removed the last query from the previous batch and added the table variable query.  Here’s what I got:     Since we first insert into the table variable, then we read from it, the Execution Plan renders 2 steps.  BUT, the combination of the 2 steps is only 22% of the batch.  It is actually faster than the other methods even though it is treated as 2 separate queries in the Execution Plan.  The argument I often hear against Table Variables is that SQL only estimates 1 row for the table size in the Execution Plan.  While this is true, the estimate does not come in to play until you read from the table variable.  In this case, the table variable had 6,706 rows, but it still outperformed the other queries.  People argue that table variables should only be used for hash or lookup tables.  The fact is, you have control of what you put IN to the variable, so as long as you keep it within reason, these results suggest that a table variable is a viable alternative to sub-queries. If anyone does volume testing on this theory, I would be interested in the results.  My suspicion is that there is a breaking point where efficiency goes down the tubes immediately, and it would be interesting to see where the threshold is. Coding SQL is a matter of style.  If you’ve been around since they introduced DB2, you were probably taught a little differently than a recent computer science graduate.  If you have a company standard, I strongly recommend you follow it.    If you do not have a standard, generally speaking, there is no right or wrong answer when talking about the efficiency of these types of queries, and certainly no hard-and-fast rule.  
Volume and infrastructure will dictate a lot when it comes to performance, so your results may vary in your environment.  Download the database and try it!

    Read the article

  • Spotlight on Claims: Serving Customers Under Extreme Conditions

    - by [email protected]
    Oracle Insurance's director of marketing for EMEA, John Sinclair, recently attended the CII Spotlight on Claims event in London. Bad weather and its implications for the insurance industry have become very topical as the frequency and diversity of natural disasters - including rains, wind and snow - have surged across Europe this winter. On England's wettest day on record, the county of Cumbria was flooded with 12 inches of rain within 24 hours. Freezing temperatures wreaked havoc on European travel, causing high-speed TGV trains to break down and stranding hundreds of passengers under the English Channel in a tunnel all night long without heat or electricity. A storm named Xynthia thrashed France and surrounding countries with hurricane force, flooding ports and killing 51 people. After the Spring Equinox, insurers may have thought the worst had passed. Then along came Eyjafjallajökull, spewing out vast quantities of volcanic ash in what is turning out to be one of the most costly natural disasters in history. Such extreme events challenge insurance companies' ability to service their customers just when customers need their help most. When you add economic downturn and competitive pressures to the mix, insurers are further stretched and required to continually learn and innovate to meet high customer expectations with reduced budgets. These and other issues were hot topics of discussion at the recent "Spotlight on Claims" seminar in London, focused on how weather is affecting claims and the insurance industry. The event was organized by the CII (Chartered Insurance Institute), a group with 90,000 members. CII has been at the forefront in setting professional standards for the insurance industry for over a century. Insurers came to the conference to hear how they could better serve their customers under extreme weather conditions, learn from the experience of their peers, and hear about technological breakthroughs in climate modeling, geographic intelligence and IT. Customer case studies at the conference highlighted the importance of effective and constant communication in handling the overflow of catastrophe-related claims. First and foremost is the need to rapidly establish initial communication with claimants to build their confidence in a positive outcome. Ongoing communication then needs to be continued throughout the claims cycle to manage expectations and maintain ownership of the process from start to finish. Strong internal communication to support frontline staff was also deemed critical to successful crisis management, as was communication with the broader insurance ecosystem to tap into extended resources and business intelligence. Advances in technology - such as web-based systems to access policies and enter first notice of loss in the field - as well as customer-focused self-service portals and multichannel alerts, are instrumental in improving customer satisfaction and helping insurers to deal with the claims surge, which often can reach four or more times normal workloads. Dynamic models of the global climate system can now be used to better understand weather-related risks, and as these models mature it is hoped that they will soon become more accurate in predicting the timing of catastrophic events. Geographic intelligence is also being used within a claims environment to better assess loss reserves and detect fraud.
Despite these advances in dealing with catastrophes and predicting their occurrence, there will never be a substitute for qualified front line staff to deal with customers. In light of pressures to streamline efficiency, there was debate as to whether outsourcing was the solution, or whether it was better to build on the people you have. In the final analysis, nearly everybody agreed that in the future insurance companies would have to work better and smarter to keep on top. An appeal was also made for greater collaboration amongst industry participants in dealing with the extreme conditions and systematic stress brought on by natural disasters. It was pointed out that the public oftentimes judged the industry as a whole rather than the individual carriers when it comes to freakish events, and that all would benefit at such times from the pooling of limited resources and professional skills rather than competing in silos for competitive advantage - especially the end customer. One case study that stood out was on how The Motorists Insurance Group was able to power through one of the most devastating catastrophes in recent years - Hurricane Ike. The keys to Motorists' success were superior people, processes and technology. They did a lot of upfront planning and invested in their people, creating a healthy team environment that delivered "max service" even when they were experiencing the same level of devastation as the rest of the population. Processes were rapidly adapted to meet the challenge of the catastrophe and continually adapted to Ike's specific conditions as they evolved. Technology was fundamental to the execution of their strategy, enabling them anywhere access, on the fly reassigning of resources and rapid training to augment the work force. You can learn more about the Motorists experience by watching this video. John Sinclair is marketing director for Oracle Insurance in EMEA. He has more than 20 years of experience in insurance and financial services.

    Read the article

  • Is Linear Tape File System (LTFS) Best For Transportable Storage?

    - by rickramsey
    Those of us in tape storage engineering take a lot of pride in what we do, but understand that tape is the right answer to a storage problem only some of the time. And, unfortunately for a storage medium with such a long history, it has built up a few preconceived notions that are no longer valid. When I hear customers debate whether to implement tape vs. disk, one of the common strikes against tape is its perceived lack of usability. If you could go back a few generations of corporate acquisitions, you would discover that StorageTek engineers recognized this problem and started developing a solution where a tape drive could look just like a memory stick to a user. The goal was to not have to care about where files were on the cartridge, but to simply see the list of files that were on the tape, and click on them to open them up. Eventually, our friends in tape over at IBM built upon our work at StorageTek and Sun Microsystems and released the Linear Tape File System (LTFS) feature for the current LTO5 generation of tape drives as an open specification. LTFS is really a wonderful feature and we’re proud to have taken part in its beginnings and, as you’ll soon read, its future. Today we offer LTFS-Open Edition, which is free for you to use in your Oracle Enterprise Linux 5.5 environment - not only on your LTO5 drives, but also on your Oracle StorageTek T10000C drives. You can download it free from Oracle and try it out. LTFS does exactly what its forefathers imagined. Now you can see immediately which files are on a cartridge. LTFS does this by splitting a cartridge into two partitions. The first holds all of the necessary metadata to create a directory structure for you to easily view the contents of the cartridge. The second partition holds all of the files themselves. When tape media is loaded onto a drive, a complete file system image is presented to the user. Adding files to a cartridge can be as simple as a drag-and-drop, just as you do today on your laptop when transferring files from your hard drive to a thumb drive, or with standard POSIX file operations. You may be thinking all of this sounds nice, but asking, “when will I actually use it?” As I mentioned at the beginning, tape is not the right solution all of the time. However, if you ever need to physically move data between locations, tape storage with LTFS should be your most cost-effective and reliable answer. I will give you a few examples of use cases where LTFS can be utilized. Media and Entertainment (M&E), Oil and Gas (O&G), and other industries have a strong need for their storage to be transportable. For example, an O&G company hunting for new oil deposits in remote locations takes very large underground seismic images which need to be shipped back to a central data center. M&E operations conduct similar activities when shooting video for productions. M&E companies also often transfer files to third parties for editing and other activities. These companies have three highly flawed options for transporting data: electronic transfer, disk storage transport, or tape storage transport. The first option, electronic transfer, is impractical because of the expense of the bandwidth required to transfer multi-terabyte files reliably and efficiently. If there’s one place that has bandwidth, it’s your local post office, so many companies revert to physically shipping storage media. Typically, M&E companies rely on transporting disk storage between sites even though it, too, is expensive.
Tape storage should be the preferred format because as IDC points out, “Tape is more suitable for physical transportation of large amounts of data as it is less vulnerable to mechanical damage during transportation compared with disk" (See note 1, below). However, tape storage has not been used in the past because of the restrictions created by proprietary formats. A tape may only be readable if both the sender and receiver have the same proprietary application used to write the file. In addition, the workflows may be slowed by the need to read the entire tape cartridge during recall. LTFS solves both of these problems, clearing the way for tape to become the standard platform for transferring large files. LTFS is open and, as long as you’ve downloaded the free reader from our website or that of anyone in the LTO consortium, you can read the data. So if a movie studio ships a scene to a third-party partner to add, for example, sound effects or a music score, it doesn’t have to care what technology the third party has. If it’s written back to an LTFS-formatted tape cartridge, it can be read. Some tape vendors like to claim LTFS is a “standard,” but beauty is in the eye of the beholder. It’s a specification at this point, not a standard. That said, we’re already seeing application vendors create functionality to write in an LTFS format based on the specification. And it’s my belief that both customers and the tape storage industry will see the most benefit if we all follow the same path. As such, we have volunteered to lead the way in making LTFS a standard first with the Storage Network Industry Association (SNIA), and eventually through to standards bodies such as the American National Standards Institute (ANSI). Expect to hear good news soon about our efforts. So, if storage transportability is one of your requirements, I recommend giving LTFS a look. It makes tape much more user-friendly and it’s free, which allows tape to maintain all of its cost advantages over disk! Note 1 - IDC Report. April, 2011. “IDC’s Archival Storage Solutions Taxonomy, 2011” - Brian Zents

    Read the article
