Search Results

Search found 61615 results on 2465 pages for 'execution time'.

Page 307/2465

  • How to Make Money Online Part 4 - Keyword Research is Critical

    Last time we looked at choosing the best products on ClickBank to promote; now we will look at keyword research and how critical it is to making money online. You may have heard the term keyword research if you have been looking around for a while, but if you are looking to make or earn money online for the first time, I will give a brief description of what keyword research is and why it is so important.

    Read the article

  • ray collision with rectangle and floating point accuracy

    - by phq
    I'm trying to solve a problem with a ray bouncing off a box. Actually it is a sphere, but for simplicity the box dimensions are expanded by the sphere radius when doing the collision test, reducing the sphere to a single ray. The test is done by projecting the ray onto all faces of the box and picking the closest one. However, because I'm using floating-point variables, I fear that a point projected onto the surface might be interpreted as being below it in the next iteration; once I later allow the sphere to move, that scenario becomes more likely. Also, the bounce coefficient might be as low as zero, making the sphere continue along the surface. So my naive solution is to project not only forwards but also backwards to catch those cases. That is where I ran into the problem shown in the figure: in the first iteration the first black arrow is calculated and we end up at a point on the surface of the box. In the second iteration the "back projection" hits the other surface, making the second black arrow bounce off the wrong surface. If there are several boxes close to each other this has further consequences, making the sphere fall through them all.

    So my main question is how to handle possible floating-point inaccuracy when placing the sphere on the box surface so that it does not fall through. In writing this question I got the idea of a threshold: only accept back projections up to a certain amount, much smaller than the box but larger than the possible accuracy limitation. This would only cause a "false" back projection when the sphere hits the box on an edge, which would appear natural.

    To clarify my original approach: the arrows shown in the image are not only the path the sphere travels but also represent a single time step in the simulation. In reality the time step is much smaller, about 0.05 of the box size. The path traveled is projected onto the possible sides to avoid traveling past a thinner object at higher speeds. In normal situations floating-point accuracy is not an issue, but there are two situations where I am concerned: when the new position at the end of the time step is located very close to the surface (very unlikely, though), and when using a bounce factor of 0, where it happens every time the sphere hits a box. Adding to the loss of accuracy, and motivating my concern, the sphere and box are in different coordinate systems, so the sphere location is transformed for every test. This last point is why I'm not willing to trust to luck that one floating-point value lying on top of the box will always be interpreted the same way. I did not know Voronoi regions by name, but looking at them I'm not sure how they would be used in the projection scenario I'm using here.
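
    To make the threshold idea concrete, here is a minimal C++ sketch (the names, the fixed EPSILON value, and the settle step are my own illustrative assumptions, not the original code):

    #include <iostream>

    // Illustrative sketch of the thresholded back-projection idea.
    // EPSILON should sit well above float noise but well below the box size.
    const float EPSILON = 1e-4f;

    // Accept a projected hit at signed ray parameter t: genuine forward hits,
    // plus "backward" hits so small they can only be floating-point error.
    bool acceptHit(float t) { return t >= -EPSILON; }

    // After resolving a collision, nudge the contact point off the face along
    // its outward normal, so the next iteration cannot read it as "below" the surface.
    void settleOnFace(float point[3], const float normal[3]) {
        for (int i = 0; i < 3; ++i)
            point[i] += normal[i] * EPSILON;
    }

    int main() {
        // Forward hit and noise-sized back hit pass; a genuine back hit is rejected.
        std::cout << acceptHit(0.5f) << acceptHit(-1e-5f) << acceptHit(-0.2f) << "\n"; // 110
        return 0;
    }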

    Read the article

  • Showplan Operator of the Week - Merge Interval

    When Fabiano agreed to undertake the epic task of describing each showplan operator, none of us quite predicted the interesting ways that the series helps to understand how the query optimizer works. With the Merge Interval, Fabiano comes up with some insights about the way that the query optimizer handles overlapping ranges efficiently.

    Read the article

  • Top 10 Browser Productivity Tips

    - by Renso
    Originally posted on: http://geekswithblogs.net/renso/archive/2013/10/14/top-10-browser-productivity-tips.aspx

    You don't have to be a geek to be a productive browser user. The tips below cover the actions users take most often when navigating a web site, where long-standing keyboard or mouse habits can be replaced by keyboard shortcuts. Since your hands are already on the keyboard, it is almost always faster to use a keyboard shortcut for something you usually did with the mouse. For example, right-clicking on something to copy it and then doing the same to paste it is very time consuming; keyboard shortcuts have been created to simplify the task. All it takes are a few memory brain cells to remember them. Here are the tips, in no particular order:

    Tip 1: Hold down the spacebar on your keyboard to page down through a web page, rather than using your mouse, which is a really slow way of doing it. If you want to page one screen at a time, hit the spacebar once, and again to page again. If you want to jump all the way to the end of the web page, simply hit Ctrl+End (hold down the Ctrl key and hit the End key on your keyboard). To get back to the top of the page, hit Ctrl+Home.

    Tip 2: Where are my downloads? Some folks run downloads again and again because they do not know where the last one went and they do not see the popup, the note in the browser's footer, etc. Simply hit Ctrl+J. Works in most browsers.

    Tip 3: Selecting a US state from a drop-down box. Don't use the mouse; it takes just way too long to scroll. When you tab to the drop-down box or click on it with your mouse, simply hit the first character of the state and it will be selected. For Texas, for example, hit the letter "T" twice on your keyboard to get to it. The same concept can be applied to any drop-down box that is alphabetically or numerically sorted.

    Tip 4: Fixing spelling errors. All modern-day browsers support this now. You see the red wavy line underscoring a word: yes, it is a spelling error. How do you fix it? Don't overtype it or try to fix it manually; first right-click on it and a list of suggestions comes up. If the right suggestion does not show up, like my name "Renso", and you know how to spell the word, as in this example, look further down the list of options (the little popup window that appears when you right-click) and you should see an option to "Add to Dictionary". Be warned: when you add it, it is only added to the dictionary of the browser you're using. If you use Google Chrome, Firefox, and IE, each one will have its own list.

    Tip 5: So you have trouble seeing the text on the screen. Or you are looking at a photo, for example on Facebook, and you want to zoom in to read better or see a bit more detail. Hit Ctrl and + (hold down the Ctrl key and hit the plus key – actually it's the equals key, but it is easier to remember it as "plus for bigger"). Hit Ctrl and the minus key to zoom out. Now you can't remember what the original size was, since you were so excited you hit it 20 times, or was that 21... Simply hit Ctrl+0 (that is a zero) and it will reset to the default.

    Tip 6: So you closed a couple of tabs in your browser. Suddenly you remember something you wanted to double-check on one of the tabs; you cannot remember the URL and the tab is gone forever, or is it? Simply hit Ctrl+Shift+T and it will bring back your tabs one by one each time you hit T. This has also been a great way for me to quickly close some tabs because I don't want my boss to see I'm shopping, and then hit Ctrl+Shift+T to quickly get them back and complete my check-out and purchase. Or, for parents: you walk into your daughter's room and you see she quickly clicks and closes a window/tab in her browser. Not to worry, my little darling, daddy will Ctrl+Shift+T and see what boys on Facebook you were talking to...

    Tip 7: The web browser is frozen on your PC/laptop/whatever; in this example it may be your Internet Explorer browser. I don't mention Firefox or Chrome here because it probably never happens in their world. You cannot close it; it won't respond to anything you have done so far, except for the next step you are about to take, which is to throw your two-day-old coffee on your keyboard. This happens especially on sites that want to force you to complete a purchase order. Hit Ctrl+Alt+Del on your keyboard on any version of Windows and select Task Manager. On the first tab, the Processes tab, look for the item in question; in this example you should see Internet Explorer. Right-click it and select "End Task". It will force the process out of memory and terminate it. You can of course do this with any program running under your account.

    Tip 8: This is a personal favorite of mine: selecting words in a paragraph without using the mouse. You don't want to select one character at a time, like when you use Shift+arrow, as it can be very slow if you want to select a lot of text; you want to select whole words. Simply use Ctrl+Shift+arrow (right or left, depending on which direction you want to go).

    Tip 9: I was a bit reluctant to add this one, but being in the professional services industry I still come across many-a-folk that simply can't copy-and-paste them-all text or images that reside on them screens, y'all. Ctrl+C to copy and Ctrl+V to paste. Works a lot faster than using the mouse. You may be asking: "Well, why in the devil did they not use Ctrl+P for paste?"... because that is for printing. This is of course not limited to the browser world; it applies to almost any piece of software running on a PC or Mac. Go try it on an image in your browser: right-click it and select copy, open a Word document, and hit Ctrl+V to paste the image in there. Please consider copyright laws.

    Tip 10: Getting rid of annoying ads. This only works for the page as currently loaded, meaning when you come back to the same page later you will have to do it again, and you will need to learn a tool to do it. WELL WORTH IT. For example, I use Grooveshark to listen to music but I don't like the ads they show. Install a tool like Firebug for Firefox, or hit Ctrl+Shift+I in Chrome, to bring up the developer toolbar; it shows at the bottom of the page. With Firefox, once you have installed Firebug as an add-on, a yellow bug should appear on the top right-hand side of your browser; click on it to display the developer toolbar. You will need to learn how to use it, but once you know how to select an item/section on the page (usually just right-click the ad you don't want to see and select "Inspect Element"; the developer toolbar will appear if it's not already there), simply hit Delete and it will remove the ad from the screen. If you don't know HTML you may need to play with it a bit, but once you understand how it works, it can open up a whole new world for you in terms of how web pages actually work.

    If you can think of any others that have saved you a ton of time, please let me know so I can add them to a top 99 list.

    Read the article

  • Thank you Geeks With Blogs for letting me join your community!

    - by GreeNTUG
    First, a link to the blog I can no longer edit, because Office Live blew away my digital identity and I can no longer log into it (the source of a loooong future blog post about protecting your digital identity, sometime when I have more time and after it has played out to the end): http://greentug.spaces.live.com/

    The following are the communities I participate in:

    Green & Sustainability. I run a virtual user group on Green and Sustainability as it relates to developers and software architects. It was located at greentug.groups.live.com, and we will need to find a new digital location for it, because I am locked out of that site as well.

    BizSpark Tampa Bay. I run a BizSpark group for Microsoft technologists (meetup.com, search for BizSpark Tampa Bay) and speak at Code Camps about "No Better Time to Start Your Own Tech Business". The meetup group facilitates a balanced presentation that is respectful to anyone wanting to start their own business, whether part-time or full-time, whether micro (just you), sustainable (grow to 2-25-ish, self-funded), high growth (get venture capital or other funding, grow it, sell it within 5 years, do it again), or hybrid (the new model going forward). It is an "action" group, with assignments and homework if you want to get the most out of it. At the end of a year you will either have your business on the path to where you want it to be, or you will know the steps you need to take to get it there.

    Women in Technology. I have been participating in the Women in Technology community since 2008. My main interests in this area are mentoring women in the workplace to have them believe they can become geeks and double their income, and mentoring them with respect to starting and running their own business.

    Access 2010/SharePoint 2010. This is a game-changer with respect to the Access community (the app both devs and IT Pros love to hate, the other a-word that's not a fruit). I conducted Lunch n Learns and Brunch n Learns around this topic before the Office 2010/SharePoint 2010 launch, and spoke on the topic at SharePoint Saturday Tampa in Nov 2009.

    Interested in learning more about:
    - Using Silverlight HD Streaming out in the non-technical world (horses and equestrian sport).
    - Migrating to Access Web Services and VB .NET from VBA (see the Access 2010/SharePoint 2010 interest above).
    - Windows Phone 7! Exciting opportunities both for Green and Sustainability and for my "day job" of Environmental, Health & Safety (EHS).

    My day job is Environmental, Health & Safety (EHS) consulting and software solutions. Where that interfaces with the developer world is with respect to opportunities around:
    - Green and Sustainability,
    - the SmartGrid and Juval Lowy's EnergyNet, both of which will require a lot of technology and software to make them work,
    - the new Microsoft Partner competency for "Digital Home", and
    - the Y2K kind of deadline around how managing chemicals in ERP systems is changing because of Global Harmonization, which hits the EU with a hard deadline on 11/30/10 (yes, this year), and hits the USA about 15 months later.

    Hope you enjoy my contributions to the digital geek community, and feel free to email me: [email protected] (the email left over after my digital identity was blown away) or [email protected] (this one could go away at some future point).

    Best, Kathy Malone

    Read the article

  • Laptop Battery problem

    - by sudhin
    My Compaq Presario CQ42 laptop battery is not working well with Ubuntu 12.04, though there was no problem under Ubuntu 10.04. Sometimes it shows a percentage, sometimes the remaining time, but the charge drops suddenly if I close the lid. The battery's charge reduces as follows: a 100% charged battery shows 1hr:30min remaining after two minutes; after 2 more minutes the time remaining is 1hr:24min; and after 10 minutes, if I just close the lid and open it again, it shows 64% remaining. How can I solve this problem?

    Read the article

  • Question about SDLC. How to answer this?

    - by pirzada
    I have seen this asked many times in job interviews but I am still not sure how to answer it. I have been a web developer for quite some time, but I still have problems explaining OOP and the SDLC (the system development life cycle), even though I use both all the time during development. How should I prepare for these two topics from an interview point of view? Is there a simple answer to either of them? Thanks
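
    For the OOP half, it can help to have a tiny example ready that names the pillars as you walk through it. A minimal, purely illustrative C++ sketch (the Shape/Circle names are my own, chosen only to make the pillars concrete):

    #include <iostream>
    #include <memory>

    // Abstraction: callers see only "a shape has an area".
    class Shape {
    public:
        virtual ~Shape() = default;
        virtual double area() const = 0;
    };

    // Inheritance: a Circle is-a Shape.
    // Encapsulation: the radius is private state behind a small interface.
    class Circle : public Shape {
        double radius_;
    public:
        explicit Circle(double r) : radius_(r) {}
        double area() const override { return 3.14159265 * radius_ * radius_; }
    };

    int main() {
        // Polymorphism: the call dispatches to Circle::area at run time.
        std::unique_ptr<Shape> s = std::make_unique<Circle>(2.0);
        std::cout << s->area() << "\n";  // prints ~12.566
        return 0;
    }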

    Read the article

  • PASS Summit 2011 &ndash; Part II

    - by Tara Kizer
    I arrived in Seattle last Monday afternoon to attend PASS Summit 2011.  I had really wanted to attend Gail Shaw’s (blog|twitter) and Grant Fritchey’s (blog|twitter) pre-conference seminar “All About Execution Plans” on Monday, but that would have meant flying out on Sunday which I couldn’t do.  On Tuesday, I attended Allan Hirt’s (blog|twitter) pre-conference seminar entitled “A Deep Dive into AlwaysOn: Failover Clustering and Availability Groups”.  Allan is a great speaker, and his seminar was packed with demos and information about AlwaysOn in SQL Server 2012.  Unfortunately, I have lost my notes from this seminar and the presentation materials are only available on the pre-con DVD.  Hmpf!

    On Wednesday, I attended Gail Shaw’s “Bad Plan! Sit!”, Andrew Kelly’s (blog|twitter) “SQL 2008 Query Statistics”, Dan Jones’ (blog|twitter) “Improving your PowerShell Productivity”, and Brent Ozar’s (blog|twitter) “BLITZ! The SQL – More One Hour SQL Server Takeovers”.  In Gail’s session, she went over how to fix bad plans and bad query patterns.  Update your stale statistics!

    How to fix bad plans:
    - Use local variables – the optimizer can’t sniff them, so it’ll optimize for the “average” value
    - Use RECOMPILE (at the query or stored procedure level) – CPU hit
    - OPTIMIZE FOR hint – the most common value you’ll pass

    How to fix bad query patterns:
    - Don’t use them – ha!
    - Catch-all queries: use dynamic SQL, or OPTION (RECOMPILE)
    - Multiple execution paths: split into multiple stored procedures, or OPTION (RECOMPILE)
    - Modifying parameter values: use local variables, split into outer and inner procedures, or OPTION (RECOMPILE)

    She also went into “last resort” and “very last resort” options, but those are risky unless you know what you are doing.  For the average Joe, she wouldn’t recommend these.  Examples are query hints and plan guides.

    While I enjoyed Andrew’s session, I didn’t take any notes as it was familiar material.  Andrew is a great speaker though, and I’d highly recommend attending his sessions in the future.

    Next up was Dan’s PowerShell session.  I need to look into profiles, manifests, function modules, and function import scripts more as I just didn’t quite grasp these concepts.  I am attending a PowerShell training class at the end of November, so maybe that’ll help clear it up.  I really enjoyed the Excel integration demo.  It was very cool watching PowerShell build the spreadsheet in real time.  I must look into this more!  On a side note, I am jealous of Dan’s hair.  Fabulous hair!

    Brent’s session showed us how to quickly gather information about a server that you will be taking over database administration duties for.  He wrote a script to do a fast health check and then later wrapped it into a stored procedure, sp_Blitz.  I can’t wait to use this at my work even on systems where I’ve been the primary DBA for years; maybe there’s something I’ve overlooked.  We are using EPM to help standardize our environment and uncover problems, but sp_Blitz will definitely still help us out.  He even provides a cloud-based update feature, sp_BlitzUpdate, so you don’t have to constantly update sp_Blitz when he makes a change.  I think I’ll utilize his update code for some other challenges that we face at my work.

    Read the article

  • Algorithm for Shortest Job First with Preemption

    - by Shray
    I want to implement a shortest job first routine using C# or C++. The priority of jobs is based on their processing time. Jobs are processed using a binary (min) heap. There are three types of jobs: Type 1 jobs come in every 4-6 seconds, with processing times between 4-6; Type 2 jobs come in every 8-12 seconds, with processing times between 8-12; Type 3 jobs come in every 24-26 seconds, with processing times between 14-16. So far, I have written the binary heap functionality, but I'm kinda confused on how to start the job spawning and also the processor. Here is a cleaned-up version of the code (heap bugs fixed: add() now compares against the new element's parent, remove() shrinks the heap and sifts down toward the smaller child, and the four near-identical spawn functions are folded into one whose ranges follow the ones stated above):

    #include <iostream>
    #include <cstdlib>
    #include <ctime>
    using namespace std;

    int timecounting = 20;

    struct process {
        int atime;  // arrival time
        int ptime;  // processing time (the priority key)
        int type;
    };

    class pque {
    private:
        int count;
    public:
        process pheap[100];
        process type1[100];
        process type2[100];
        process type3[100];
        process type4[100];  // the extra stream from the original code

        pque() : count(0) {}

        bool empty() const { return count == 0; }

        void swap(int a, int b) {
            process tmp = pheap[a];
            pheap[a] = pheap[b];
            pheap[b] = tmp;
        }

        // Sift the new element up: compare it with its parent (current/2).
        void add(process c) {
            pheap[++count] = c;
            int current = count;
            while (current > 1 && pheap[current / 2].ptime > pheap[current].ptime) {
                swap(current / 2, current);
                current /= 2;
            }
        }

        // Pop the minimum: move the last element to the root, shrink the heap,
        // then sift down, always swapping with the smaller child.
        process remove() {
            process min = pheap[1];
            pheap[1] = pheap[count--];
            int n = 1;
            while (2 * n <= count) {
                int child = 2 * n;
                if (child + 1 <= count && pheap[child + 1].ptime < pheap[child].ptime)
                    child++;  // pick the smaller of the two children
                if (pheap[n].ptime <= pheap[child].ptime)
                    break;
                swap(n, child);
                n = child;
            }
            return min;
        }

        // One spawner replaces spawn1..spawn4: arrivals are spaced
        // arrivalLo..arrivalHi seconds apart, processing times are
        // ptimeLo..ptimeHi. (spawn4 from the original code is
        // spawn(type4, 30, 35, 8, 12, 4).)
        void spawn(process* arr, int arrivalLo, int arrivalHi,
                   int ptimeLo, int ptimeHi, int type) {
            int last = 0;
            for (int i = 1; i <= timecounting; i++) {
                process p;
                p.atime = last + arrivalLo + rand() % (arrivalHi - arrivalLo + 1);
                p.ptime = ptimeLo + rand() % (ptimeHi - ptimeLo + 1);
                p.type = type;
                last = p.atime;
                arr[i] = p;
            }
        }
    };
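
    For the processor, one simple approach is a time-stepped loop: each second, admit every job whose arrival time has come, then work on the job at the top of the heap. A minimal sketch under stated assumptions (it uses the pque class above, runs one job at a time, and is non-preemptive; for the preemptive shortest-remaining-time-first variant the title implies, you would re-add the running job with its remaining ptime whenever a shorter job arrives):

    int main() {
        srand(time(NULL));  // seed once, not inside every spawn call
        pque q;
        q.spawn(q.type1, 4, 6, 4, 6, 1);
        q.spawn(q.type2, 8, 12, 8, 12, 2);
        q.spawn(q.type3, 24, 26, 14, 16, 3);

        int n1 = 1, n2 = 1, n3 = 1;
        int remaining = 0;  // seconds left on the job currently being processed
        for (int t = 0; t < 1000; t++) {
            // Admit all jobs arriving at time t.
            while (n1 <= timecounting && q.type1[n1].atime == t) q.add(q.type1[n1++]);
            while (n2 <= timecounting && q.type2[n2].atime == t) q.add(q.type2[n2++]);
            while (n3 <= timecounting && q.type3[n3].atime == t) q.add(q.type3[n3++]);

            if (remaining > 0) {
                remaining--;  // keep working on the current job
            } else if (!q.empty()) {
                process next = q.remove();  // shortest processing time first
                cout << "t=" << t << ": start type " << next.type
                     << ", ptime " << next.ptime << "\n";
                remaining = next.ptime - 1;
            }
        }
        return 0;
    }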

    Read the article

  • How did you get good practices for your OOP designs?

    - by Darf Zon
    I've realized I have difficulty creating OOP designs. I spend a lot of time deciding whether a property is correctly assigned to class X. For example, here is a post of mine from a few days ago: http://codereview.stackexchange.com/questions/8041/how-to-improve-my-factory-design I'm not convinced by my code, so I want to improve my designs and take less time creating them. How did you learn to create good designs? Are there some books you can recommend?

    Read the article

  • Install build-essentials in ubuntu 12.04

    - by Mukul Shukla
    After I install a fresh copy of Ubuntu and need to install build-essential, I have to type sudo apt-get update and sudo apt-get upgrade before installing it. These two commands take a LOOTTT of time and install many things. Is there a way to install build-essential without running these two commands, or a way to keep these two commands from installing all the updates, so that they take less time?

    Read the article

  • TDD vs. Productivity

    - by Nairou
    In my current project (a game, in C++), I decided that I would use Test Driven Development 100% during development. In terms of code quality, this has been great. My code has never been so well designed or so bug-free. I don't cringe when viewing code I wrote a year ago at the start of the project, and I have gained a much better sense for how to structure things, not only to be more easily testable, but to be simpler to implement and use.

    However... it has been a year since I started the project. Granted, I can only work on it in my spare time, but TDD is still slowing me down considerably compared to what I'm used to. I read that the slower development speed gets better over time, and I definitely do think up tests a lot more easily than I used to, but I've been at it for a year now and I'm still working at a snail's pace. Each time I think about the next step that needs work, I have to stop and think about how I would write a test for it, to allow me to write the actual code. I'll sometimes get stuck for hours, knowing exactly what code I want to write, but not knowing how to break it down finely enough to fully cover it with tests. Other times, I'll quickly think up a dozen tests, and spend an hour writing tests to cover a tiny piece of real code that would have otherwise taken a few minutes to write. Or, after finishing the 50th test to cover a particular entity in the game and all aspects of its creation and usage, I look at my to-do list and see the next entity to be coded, and cringe in horror at the thought of writing another 50 similar tests to get it implemented.

    It's gotten to the point that, looking over the progress of the last year, I'm considering abandoning TDD for the sake of "getting the damn project finished". However, giving up the code quality that came with it is not something I'm looking forward to. I'm afraid that if I stop writing tests, then I'll slip out of the habit of making the code so modular and testable.

    Am I perhaps doing something wrong to still be so slow at this? Are there alternatives that speed up productivity without completely losing the benefits? TAD? Less test coverage? How do other people survive TDD without killing all productivity and motivation?

    Read the article

  • How In-Memory Database Objects Affect Database Design: The Conceptual Model

    - by drsql
    After a rather long break in the action to get through some heavy tech editing work (paid work before blogging, I always say!) it is time to start working on this presentation about In-Memory Databases. I have been trying to decide on the scope of the demo code in the back of my head, and I have added more and taken away bits and pieces over time trying to find the balance of "enough" complexity to show data integrity issues and joins, but not so much that we get lost in the process of trying to...(read more)

    Read the article

  • Disk IO causing high load on Xen/CentOS guest

    - by Peter Lindqvist
    I'm having serious issues with a Xen-based server; this is on the guest partition, a paravirtualized CentOS 5.5. The following numbers are taken from top while copying a large file over the network. If I copy the file a second time, the speed decreases in relation to the load average, so the second time it's half the speed of the first. It needs some time to cool off after this; the load average slowly decreases until the system is once again usable. ls / takes about 30 seconds.

    top - 13:26:44 up 13 days, 21:44, 2 users, load average: 7.03, 5.08, 3.15
    Tasks: 134 total, 2 running, 132 sleeping, 0 stopped, 0 zombie
    Cpu(s): 0.0%us, 0.1%sy, 0.0%ni, 25.3%id, 74.5%wa, 0.0%hi, 0.0%si, 0.1%st
    Mem: 1048752k total, 1041460k used, 7292k free, 3116k buffers
    Swap: 2129912k total, 40k used, 2129872k free, 904740k cached

      PID USER  PR NI VIRT RES SHR S %CPU %MEM   TIME+ COMMAND
     1506 root  10 -5    0   0   0 S  0.3  0.0 0:03.94 cifsd
        1 root  15  0 2172 644 556 S  0.0  0.1 0:00.08 init

    Meanwhile the host is at ~0.5 load average, steady over time, with ~50% wait. The server hardware is dual Xeon, 3 GB RAM, 170 GB SCSI-320 10k rpm, and shouldn't have any problems copying files over the network.

    disk = [ "tap:aio:/vm/dev01.img,xvda,w" ]

    I also get these in the log:

    INFO: task syslogd:1350 blocked for more than 120 seconds.
    "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    syslogd D 00062E4F 2208 1350 1 1353 1312 (NOTLB)
    c0ef0ed0 00000286 6e71a411 00062e4f c0ef0f18 00000009 c0f20000 6e738bfd
    00062e4f 0001e7ec c0f2010c c181a724 c1abd200 00000000 ffffffff c0ef0ecc
    c041a180 00000000 c0ef0ed8 c03d6a50 00000000 00000000 c03d6a00 00000000
    Call Trace:
    [<c041a180>] __wake_up+0x2a/0x3d
    [<ee06a1ea>] log_wait_commit+0x80/0xc7 [jbd]
    [<c043128b>] autoremove_wake_function+0x0/0x2d
    [<ee065661>] journal_stop+0x195/0x1ba [jbd]
    [<c0490a32>] __writeback_single_inode+0x1a3/0x2af
    [<c04568ea>] do_writepages+0x2b/0x32
    [<c045239b>] __filemap_fdatawrite_range+0x66/0x72
    [<c04910ce>] sync_inode+0x19/0x24
    [<ee09b007>] ext3_sync_file+0xaf/0xc4 [ext3]
    [<c047426f>] do_fsync+0x41/0x83
    [<c04742ce>] __do_fsync+0x1d/0x2b
    [<c0405413>] syscall_call+0x7/0xb
    =======================

    I have tried disabling irqbalanced as suggested here, but it does not seem to make any difference.

    Read the article

  • Spam and You

    Currently, we all send and receive electronic mail just as much and as effortlessly as we use the telephone. In reality, email is from time to time even more favored than using the telephone, even a ... [Author: Whinston Sparks - Computers and Internet - April 12, 2010]

    Read the article

  • December 2012 Cumulative Updates are available for SQL Server 2008 R2

    - by AaronBertrand
    Microsoft released new cumulative updates for SQL Server.

    SQL Server 2008 R2 Service Pack 1 Cumulative Update #10
    - KB Article: KB #2783135
    - 16 fixes are listed at the time of publication
    - Build number is 10.50.2868
    - Relevant for @@VERSION 10.50.2500 through 10.50.2867

    SQL Server 2008 R2 Service Pack 2 Cumulative Update #4
    - KB Article: KB #2777358
    - 34 fixes are listed at the time of publication
    - Build number is 10.50.4270
    - Relevant for @@VERSION 10.50.4000 through 10.50.4269

    My usual disclaimer: these updates...(read more)

    Read the article

  • ArchBeat Link-o-Rama for 11/17/2011

    - by Bob Rhubart
    Building an Infrastructure Cloud with Oracle VM for x86 + Enterprise Manager 12c | Richard Rotter
    Richard Rotter demonstrates "how easy it could be to build a cloud infrastructure with Oracle's solution for cloud computing."

    Article: Social + Lean = Agile | Dave Duggal
    In today's increasingly dynamic business environment, organizations must continuously adapt to survive. Change management has become a major bottleneck. Organizations need a practical mechanism for managing controlled variance and change in-flight to break the logjam. This paper provides a foundation for applying lean and agile principles to achieve Enterprise Agility through social collaboration.

    Stress Testing Java EE 6 Applications - Free Article in Free Java Magazine | Adam Bien
    "It is strange," says Adam Bien, "everyone is obsessed about green bars and code coverage, but testing of multi-threaded behavior is widely ignored - until the applications run into massive problems."

    Using Access Manager to Secure Applications Deployed on WebLogic | Rene van Wijk
    Another great how-to post from Oracle ACE Rene van Wijk, this time involving JBoss RichFaces, Facelets, Oracle Coherence, and Oracle WebLogic Server.

    DOAG 2011 vs. Devoxx - Value and Attraction | Markus Eisele
    Oracle ACE Director Markus Eisele compares and contrasts these popular conferences with the aim of helping others decide which to attend.

    SOA All the Time; Architects in AZ; Clearing Info Integration Hurdles
    This week on the Architect Home Page on OTN.

    Webcast: Oracle Business Intelligence Mobile
    Event date: Wednesday, December 7, 2011. Time: 10 a.m. PT / 1 p.m. ET. Featuring Manan Goel (Director BI Product Marketing, Oracle) and Shailesh Shedge (Director BI and Analytics Practice, Ascentt).

    Webcast: Maximum Availability on Private Clouds
    A discussion of Oracle's Maximum Availability Architecture, Oracle Database 11g, Oracle Exadata Database Machine, and Oracle Database Appliance, featuring Margaret Hamburger (Director, Product Marketing, Oracle) and Joe Meeks (Director, Product Management, Oracle). November 30, 2011 at 10:00 a.m. PT / 1:00 p.m. ET.

    Oracle Technology Network Architect Day - Phoenix, AZ
    Wednesday, December 14, 2011, 8:30 a.m. - 5:00 p.m. The Ritz-Carlton, Phoenix, 2401 East Camelback Road, Phoenix, AZ 85016. Registration is free, but seating is limited.

    Read the article

  • Why can I view my site over a 3G connection but not through my wifi?

    - by Jonathan
    So, I am sitting in my office with four computers on the same network and internet connection. Two of the computers can visit this particular website. Two of the computers get the message "Google Chrome could not find". I have tried FF and IE as well, with the same problem. I can view the site 90% of the time on the two working computers, although the site seems slow, and sometimes I also get the same errors as the other two computers. I have flushed the DNS, reset the router, and tested the site on other people's computers with success. Is this likely to be a site issue, an ISP issue, or a hosting issue? Any advice is greatly appreciated.

    Here is the ping from a working machine:

    C:\Users\Jon>ping www.balihaicruises.com
    Pinging www.balihaicruises.com [208.113.173.102] with 32 bytes of data:
    Reply from 208.113.173.102: bytes=32 time=331ms TTL=47
    Reply from 208.113.173.102: bytes=32 time=327ms TTL=47
    Reply from 208.113.173.102: bytes=32 time=326ms TTL=47
    Reply from 208.113.173.102: bytes=32 time=329ms TTL=47

    Ping statistics for 208.113.173.102:
        Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
    Approximate round trip times in milli-seconds:
        Minimum = 326ms, Maximum = 331ms, Average = 328ms

    Traceroute:

    Tracing route to www.balihaicruises.com [208.113.173.102] over a maximum of 30 hops:
     1    1 ms   17 ms    3 ms  192.168.1.1
     2   42 ms   37 ms   36 ms  180.254.224.1
     3   39 ms   47 ms   40 ms  180.252.1.69
     4   36 ms  616 ms   57 ms  61.94.115.221
     5   84 ms   76 ms   80 ms  180.240.191.98
     6   73 ms   80 ms   72 ms  180.240.191.97
     7  157 ms  143 ms  116 ms  180.240.190.82
     8  115 ms  113 ms  120 ms  ae1-123.hkg11.ip4.tinet.net [183.182.80.93]
     9  331 ms  332 ms  335 ms  xe-3-2-1.was14.ip4.tinet.net [89.149.184.30]
    10  327 ms  330 ms  331 ms  internap-gw.ip4.tinet.net [77.67.69.254]
    11  437 ms  415 ms  350 ms  border10.pc2-bbnet2.wdc002.pnap.net [216.52.127.73]
    12  322 ms  823 ms  398 ms  dreamhost-2.border10.wdc002.pnap.net [216.52.125.74]
    13  328 ms  336 ms  326 ms  ip-208-113-156-4.dreamhost.com [208.113.156.4]
    14  326 ms  328 ms  336 ms  ip-208-113-156-14.dreamhost.com [208.113.156.14]
    15  327 ms  331 ms  333 ms  apache2-udder.crisp.dreamhost.com [208.113.173.102]

    And then for the machine that doesn't work:

    C:\Users\Microsoft>ping www.balihaicruises.com
    Ping request could not find host www.balihaicruises.com. Please check the name and try again.

    C:\Users\Microsoft>tracert www.balihaicruises.com
    Unable to resolve target system name www.balihaicruises.com.

    Read the article

  • ATI (fglrx) Dual monitor / laptop hot-plugging

    - by Brendan Piater
    I feel like I've gone back 5 years on my desktop today. I'll try not to dump too much frustration here...

    I've been running 12.04 since alpha with the ATI open source drivers and the GNOME 3 desktop, and I've been generally very happy with them, with only small issues along the way. Of course the open driver does not support 3D acceleration 100%, so games like my newly purchased Amnesia from the Humble Bundle would not play. OK, no worries, the ATI driver is in the repos, so let me have a go, I thought. With all this testing that's been done with multi-monitor support, what could go wrong...?

    How I use my computer: it's a laptop with an HD 3670 card in it. I spend about 50% of the time working directly on the laptop (at home) and about 50% of the time working with an additional display connected (at work), in a multi-desktop environment.

    What's happening now:
    - I installed the drivers and things seemed to be working, save for some other small (non-critical) bugs.
    - This morning I take my machine to work, plug the additional monitor into it, and nothing happens... OK, fine.
    - I open "Displays" and try to configure the dual display; it won't work.
    - I open the ATI config "thing" (because it is a thing, a crap thing) and set up the monitors there.
    - "Reboot", it says. (Oh ffs, really... OK.)
    - I reboot, log in, and wow, I get a GNOME 2 desktop (presumably GNOME 3 fallback) and no multi-monitor... great. (Screenshot: http://ubuntuone.com/5tFe3QNFsTSIGvUSVLsyL7 )
    - After getting into a situation where I had to Ctrl+Alt+Del to get out of a frozen display, I eventually managed to set up a single-display desktop on the "main" monitor.
    - Time to go home... I unplug the monitor... nothing happens... oh boy, here we go...
    - I try Displays again; nothing, it just hangs the display... great.
    - I crash all the apps and reboot.

    So it's been a trying day. What I really hope is that someone else has figured out how to avoid this PAIN. Please help with a solution that:
    - allows me to run fglrx (so I can run the games I want)
    - allows me to hot-plug a monitor to my laptop and remove it again
    - allows me to change the display setup to include the hot-plugged monitor (preferably automatically, like it did with the open drivers)

    Next best, if that's not possible:
    - switch between laptop-only display and monitor-only display easily (i.e. not having to reboot/logout/suspend etc.)

    Really appreciate the time of anyone who has a solution. Thanks in advance.

    Regards, Brendan

    PS: I guess I should file a bug about this too, so some direction as to the best place to file it would be appreciated.

    Read the article

  • Should generated documentation go in version control history?

    - by dukeofgaming
    I'm against compiled artifacts going into version control, especially when it comes to compiled binaries. However, my principles are now in question after adding Doxygen support to a project. Should the hundreds of files generated by Doxygen go into version control? What is the recommended practice here? I think the ideal would be automating the process on a server that publishes the documentation at the same time; however, there is no such server now, nor will there be for some time.

    Read the article

  • contracting . . .

    - by Keith
    Here's a question for all my fellow contractors. I'm paid quite handsomely for my normal contracted hours (any overtime is billed at the same rate), but do you think it fair to bill for travel time to the other end of the country (a regional office) when this takes place outside of the normal working day (or overlaps into the evening), as well as for the time holed up in a hotel room when you get there, ready for a normal working day the next day, along with the return journey? Petrol is claimed as normal (at a nominal rate) and the hotel is covered by the company I contract for.

    Read the article

  • Theory Of A Weird Thought - Forms Submission

    - by user2738336
    In theory, suppose you opened the same form on two perfectly synced computers, on a website whose form has fields where, say, the username must be unique. Assume both computers have the same information in the form, the submit button is pressed at exactly the same time, and the two computers have the exact same build, internet speed, and response time from the server. Whose information would be submitted to the database, and whose would be denied, given that the username field is unique?
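
    In practice, the database resolves this by serializing the two inserts at the unique index: even "simultaneous" requests are processed in some order, one commits, and the other receives a duplicate-key error. A loose C++ analogy (a mutex-guarded set standing in for the unique index; all names here are illustrative, not a real database API):

    #include <iostream>
    #include <mutex>
    #include <set>
    #include <string>
    #include <thread>

    std::set<std::string> usernames;  // stands in for the unique column
    std::mutex table_lock;            // stands in for the index's serialization

    bool try_register(const std::string& name) {
        std::lock_guard<std::mutex> guard(table_lock);
        return usernames.insert(name).second;  // false if the name already exists
    }

    int main() {
        bool first = false, second = false;
        std::thread a([&] { first = try_register("sam"); });
        std::thread b([&] { second = try_register("sam"); });
        a.join();
        b.join();
        // Exactly one registration succeeds: whichever the scheduler ran first.
        std::cout << first << " " << second << "\n";
        return 0;
    }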

    Read the article

  • Quality of Code in unit tests?

    - by m3th0dman
    Is it worth spending time, when writing unit tests, making sure the code written there is of good quality and very easy to read? When writing these kinds of tests I very often break the Law of Demeter, for faster writing and to avoid using so many variables. Technically, unit tests are not reused directly; they are strictly bound to the code, so I do not see any reason for spending much time on them; they only need to be functional.
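
    For illustration, a minimal C++ sketch of the trade-off, using plain asserts and hypothetical Order/Customer types (none of this is from the original code). The first test reaches through the object graph Demeter-style; the second hides the chain behind a named helper, at the cost of a few extra lines:

    #include <cassert>
    #include <string>

    // Hypothetical domain types, for illustration only.
    struct Address { std::string city; };
    struct Customer { Address address; };
    struct Order { Customer customer; };

    // Law-of-Demeter-breaking style: the test reaches through the object graph.
    void test_city_train_wreck(const Order& o) {
        assert(o.customer.address.city == "Oslo");
    }

    // A named helper hides the chain, so the test reads as a single intention
    // and only the helper breaks when the object graph changes.
    std::string shipping_city(const Order& o) { return o.customer.address.city; }

    void test_city_readable(const Order& o) {
        assert(shipping_city(o) == "Oslo");
    }

    int main() {
        Order o{Customer{Address{"Oslo"}}};
        test_city_train_wreck(o);
        test_city_readable(o);
        return 0;
    }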

    Read the article

  • Real tortoises keep it slow and steady. How about the backups?

    - by Maria Zakourdaev
      … Four tortoises were playing in the backyard when they decided they needed hibiscus flower snacks. They pooled their money and sent the smallest tortoise out to fetch the snacks. Two days passed and there was no sign of the tortoise. "You know, she is taking a lot of time," said one of the tortoises. A little voice from just outside the fence said, "If you are going to talk that way about me, I won't go."

    Is it too much to ask of a quite expensive 3rd-party backup tool that it be way faster than the SQL Server native backup? Or at least that it save a respectable amount of storage by producing really smaller backup files? By saying "really smaller", I mean at least getting a file half the size.

    After Googling the internet in an attempt to understand what other "SQL people" are using for database backups, I see that most people are using one of the three tools that are the main players in the SQL backup area: LiteSpeed by Quest, SQL Backup by Red Gate, and SQL Safe by Idera. The feedback about those tools is truly emotional and happy. However, while reading the forums and blogs I wondered whether many people simply became accustomed to using the above tools back in the SQL 2000 and 2005 days. That is easy to understand: a 300GB database backup, for instance, using a regular SQL 2005 backup statement would have run for about 3 hours and produced a ~150GB file (depending on the content, of course). Then you take a 3rd-party tool which performs the same backup in 30 minutes, resulting in a 30GB file, leaving you speechless; you run to management, persuading them to buy it because it is definitely worth the price. In addition to the increased speed and disk space savings you would also get backup file encryption and virtual restore – features that are still missing from SQL Server. But in case you, as well as me, don't need these additional features and only want a tool that performs a full backup MUCH faster AND produces a far smaller backup file (like the gain you observed back in the SQL 2005 days), you will be quite disappointed. The SQL Server backup compression feature has totally changed the market picture.

    Medium-size database: Take a look at the table below and check out how my SQL Server 2008 R2 compares to the other tools when backing up a 300GB database. It appears that, when talking about backup speed, SQL 2008 R2 compresses and performs the backup in similar overall times as all three other tools; the 3rd-party tools' maximum compression levels take twice as long. The backup file gain is not that impressive, except at the highest compression levels, but the price you pay is very high CPU load and much longer run time. Only SQL Safe by Idera was quite fast at its maximum compression level, but most of the run time it used 95% CPU on the server. Note that I used two types of destination storage, SATA with 11 disks and FC with 53 disks, and, obviously, on the faster storage the backup was ready in half the time. Looking at the above results, should we spend money and bother with another layer of complexity and a software middle-man for medium-sized databases? I'm definitely not going to do so.

    Very large database: As the next phase of this benchmark, I moved to a 6-terabyte database, which was actually my main backup target. Note how using multiple files enables the SQL Server backup operation to use parallel I/O and remarkably increases its speed, especially when the backup device is heavily striped.
    SQL Server supports a maximum of 64 backup devices for a single backup operation, but the most speed is gained when using one file per CPU; in the case above, 8 files for a server with 2 quad-core CPUs. The impact of additional files is minimal. However, SQLsafe doesn't show any speed improvement between 4 files and 8 files. Of course, with such huge databases every half percent of compression translates into noticeable numbers. Saving almost 470GB of space may turn the backup tool into quite a valuable purchase. Still, the backup speed and high CPU usage are variables that should be taken into consideration. As for us, the backup speed is more critical than the storage, and we cannot allow a production server to sustain 95% CPU for such a long time. Bottom line, 3rd-party backup tool developers: we are waiting for a breakthrough release. There are a few unanswered questions, like the restore speed comparison between the different tools and the impact of multiple backup files on the restore operation. Stay tuned for the next benchmarks.

    Benchmark server: SQL Server 2008 R2 SP1, 2 quad-core CPUs. Database location: NetApp FC 15K aggregate, 53 disks.

    Backup statements: No matter how good the UI is, we need to run the backup tasks from inside SQL Server Agent to make sure they are covered by our monitoring systems. I have used extended stored procedures (command-line execution is also an option; I haven't noticed any impact on backup performance). The four statements below are native SQL backup, LiteSpeed, SQL Backup, and SQL Safe, respectively:

    SQL backup:
    backup database <DBNAME> to disk= '\\<networkpath>\par1.bak', disk= '\\<networkpath>\par2.bak', disk= '\\<networkpath>\par3.bak' with format, compression

    LiteSpeed:
    EXECUTE master.dbo.xp_backup_database @database = N'<DBName>', @backupname = N'<DBName> full backup', @desc = N'Test', @compressionlevel = 8, @filename = N'\\<networkpath>\par1.bak', @filename = N'\\<networkpath>\par2.bak', @filename = N'\\<networkpath>\par3.bak', @init = 1

    SQL Backup:
    EXECUTE master.dbo.sqlbackup '-SQL "BACKUP DATABASE <DBNAME> TO DISK= ''\\<networkpath>\par1.sqb'', DISK= ''\\<networkpath>\par2.sqb'', DISK= ''\\<networkpath>\par3.sqb'' WITH DISKRETRYINTERVAL = 30, DISKRETRYCOUNT = 10, COMPRESSION = 4, INIT"'

    SQL safe:
    EXECUTE master.dbo.xp_ss_backup @database = 'UCMSDB', @filename = '\\<networkpath>\par1.bak', @backuptype = 'Full', @compressionlevel = 4, @backupfile = '\\<networkpath>\par2.bak', @backupfile = '\\<networkpath>\par3.bak'

    If you still insist on using 3rd-party tools for the backups in your production environment with the maximum compression level, you will definitely need to consider limiting CPU usage, which will increase the backup operation time even more:
    - Red Gate: use the THREADPRIORITY option (values 0-6)
    - LiteSpeed: use @throttle (a percentage, like 70%)
    - SQL safe: the only thing I found was the @Threads option.

    Yours, Maria

    Read the article
