Search Results


  • Transfer websites and domains to new server

    - by Albert
    We currently have around 40 websites and 80+ domains/sub-domains in a shared 1&1 hosting package, and we just acquired a managed dedicated server with 1&1 as well. Now it's time to start transferring everything over to the new server. Transferring just the websites and databases wouldn't be a problem; it would take time, but it's pretty straightforward. The problem comes when transferring the domains. Let me explain why. Many of the websites we have are accessible via sub-domains of a parent domain. Ideally, we would like to transfer the sites one by one, in order to check for each one that everything works fine on the new server. However, since we also need to transfer each domain so it's managed on the new server, all the websites using that domain have to already be on the new server before we transfer it, which rules out the "one by one" approach. Another issue is the downtime when transferring a domain, from the moment it stops working in the hosting package to the moment it becomes active on the new server. I believe there's nothing we can do here. So my question is whether there's any way we can do the "one by one" transfer of the websites (and their corresponding sub-domains) under the circumstances described above. One idea I had would be:
    1. Let's say we have website A, which is accessible using subdomain.mydomain.com (and there are many other websites accessible via other sub-domains of mydomain.com)
    2. Transfer the files of website A to the new server
    3. Point a test domain on the new server to website A's folder (the new server comes with a "test" domain)
    4. Test whether website A works with that "test" domain
    5. In the old hosting, somehow point the real sub-domain (subdomain.mydomain.com) to the new location of website A, in a way that users always see the same URL as before
    6. Repeat 2-5 for every website belonging to the same domain
    7. Once all of them are working on the new server, do the actual transfer of the domain to the new server, and then re-create all the sub-domains and point them to their corresponding websites
    That way, users wouldn't notice that there's been a change (except for a small downtime when doing the domain transfer). The part I'm not sure about is point 5 above. Is there any way to do that, so that users see the original domain all the time in their browser, even for internal pages? So not only for the home page (sub-domain.mydomain.com), but also, for example, for the contact page (sub-domain.mydomain.com/contact.php). Is there any way to do this? Or are we SOL and going to have to transfer everything at the same time?
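
    A reverse proxy on the old hosting would be one way to get point 5: the old server keeps answering for subdomain.mydomain.com but silently fetches every page, including internal ones, from the new server, so the browser URL never changes. A minimal sketch, assuming the old account allows Apache vhost configuration with mod_proxy enabled, and using newserver-test.example.com as a placeholder for the new server's test domain:

      <VirtualHost *:80>
          ServerName subdomain.mydomain.com
          # Forward every request to the copy on the new server;
          # visitors keep seeing subdomain.mydomain.com for all pages.
          ProxyPass        / http://newserver-test.example.com/
          ProxyPassReverse / http://newserver-test.example.com/
      </VirtualHost>

    Whether a shared 1&1 package exposes enough Apache control for this is the real question; a mod_rewrite rule with the [P] (proxy) flag in .htaccess is the usual fallback when full vhost access isn't available.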


  • When done is not done

    - by Tony Davis
    Most developers and DBAs will know what it’s like to be asked to do "a quick tidy up" on a project that, on closer inspection, turns out to be a barely working prototype: as the cynical programmer says, "when you’re told that a project is 90% done, prepare for the next 90%". It is easy to convince a layperson that an application is complete just by using test data, and sticking to the workflow that the development team has implemented and tested. The application is ‘done’ only in the sense that the anticipated paths through the software features, using known data, are fully supported. Reality often strikes only when testers reveal the application's strange and erratic behavior in response to end-user behavior that strays from the "ideal". The problem is this: how do we measure progress, accurately and objectively? Development methods such as Scrum or Kanban, when implemented rigorously, can mitigate these problems for developers, to some extent. They force a team to progress one small, but complete, feature at a time, to find out how long it really takes for this feature to be "done done"; in other words, done to the point where its performance and scalability are understood, it is tested for all conceivable edge cases and doesn’t break…it is ready for prime time. At that point, the team has a much more realistic idea of how long it will take them to really complete all the remaining features, and so how far away the end is. However, it is when software crosses team boundaries that we feel the limitations of such techniques. No matter how well drilled the development team is, problems will still arise if they don’t deploy frequently to a production environment. If they work feverishly for months on end before finally tossing the finished piece of software over the fence for the DBA to deploy to the "real world", then once again the realization will dawn that "done done" is still out of reach, as the DBA uncovers poorly coded transactions, un-scalable queries, inefficient caching, and so on. By deploying regularly, end users will also have a much earlier opportunity to tell you how far what you implemented strayed from what they wanted. If you have a tale to tell, anonymized of course, of a "quick polish" project that turned out to be anything but, and what the major problems were, please do share it. Cheers, Tony.


  • Problem booting Windows after failed installation of Ubuntu 12.04 alongside Windows 7

    - by Tassos
    I tried to install Ubuntu 12.04 on my laptop so that I can dual-boot with Windows 7. I made some mistakes during this process and I didn't manage to install Ubuntu. But my real problem now is that I'm afraid I also destroyed the installation of Windows 7. Your help would be precious to me. Here are the details of what I did:
    1) I followed these instructions to dual-boot Ubuntu 12.04 and Windows 7: http://www.linuxbsdos.com/2012/05/17/how-to-dual-boot-ubuntu-12-04-and-windows-7/ The only difference from what is described there is that in my case the device names were /dev/mapper/isw_fdjdhbadc_Volume0* instead of /dev/sda*. Note that I had created a bootable USB stick to do this.
    2) The installation proceeded normally, but in the end I got a fatal error because grub-install failed.
    3) Then, after googling this problem, I ran Ubuntu from the USB stick and ran this command (/isw_fdjdhbadc_Volume0p5 was the partition that I had made for /boot):
       sudo grub-install --root-directory=/home/ubuntu/temp /dev/mapper/isw_fdjdhbadc_Volume0p5
       but this command also failed.
    4) Then, I did something stupid (I think): I ran the above command as
       sudo grub-install --root-directory=/home/ubuntu/temp /dev/mapper/isw_fdjdhbadc_Volume0
       namely, I tried to install grub on the device isw_fdjdhbadc_Volume0 instead of the boot partition isw_fdjdhbadc_Volume0p5. This command did not fail and executed OK.
    5) After that, I tried to boot my laptop, but it seemed that I had no operating system. Not even Windows was detected.
    6) I thought that I should uninstall grub from isw_fdjdhbadc_Volume0. So following some online instructions that I found (this was stupid, since the instructions were for a totally different case than mine), I booted Ubuntu from the USB stick again and ran:
       sudo dd if=/dev/zero of=/dev/mapper/isw_fdjdhbadc_Volume0 bs=446 count=1
    After that, I was still unable to boot Windows. I realize that I deleted something that I shouldn't have, but I hope that it is not crucial and I can recover somehow. When I boot Ubuntu from the USB, I can see that the partition with Windows is still there, with all the directories, Windows files, my data, etc. So, my question is: is there a way to undo the mistakes that I described above and recover Windows 7? This is my major question. After solving that, I'd also like to know what I did wrong with the installation of Ubuntu. Thanks in advance for your valuable help!
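
    For what it's worth, that dd command only zeroed the first 446 bytes of the volume, which hold the MBR boot code; the partition table in the rest of the 512-byte sector was left alone, which matches the fact that the Windows partition and its files are still visible. A hedged sketch of the usual recovery routes, assuming a Windows 7 install or repair disc is at hand:

      REM Boot the Windows 7 DVD, choose "Repair your computer",
      REM open a Command Prompt, and rewrite the boot code:
      bootrec /fixmbr
      bootrec /fixboot

    If no Windows disc is available, the Boot-Repair tool run from the Ubuntu live USB is a common alternative (its "Recommended repair" rewrites a working MBR/bootloader):

      sudo add-apt-repository ppa:yannubuntu/boot-repair
      sudo apt-get update && sudo apt-get install -y boot-repair
      boot-repair

    The fake-RAID device (isw_*, an Intel Matrix RAID volume) can complicate both routes, so treat this as a starting point rather than a guaranteed fix.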


  • VLC will sometimes have issues displaying video in fullscreen. What could cause this? How would I troubleshoot the issue?

    - by George Marian
    Recently VLC has been having issues displaying video in fullscreen mode. AFAIK, nothing has changed with the video card drivers and it's certainly the same version of VLC. (/me shakes a fist at the repository maintainers) This has worked without issue in the past. In fact, I've had as many as 6 instances of VLC running, each playing a video. One was always fullscreen on my second monitor, while the others were tiled on my primary monitor. I was able to toggle any of the other 5 into fullscreen mode and the video displayed without issue. Lately, I've been having trouble running 2 instances in fullscreen mode. (Sometimes, even a single instance will not display the video in fullscreen.) VLC will continue to play the video, but in fullscreen mode I see nothing but a black screen. Sometimes, the video will display if I maximize the VLC window. Other times, I have to settle for a smaller sized window. I don't know if this is pertinent, but sometimes changing the min/max state of a Firefox window (Minefield, specifically) seemed to allow the troublesome instance to display the video in fullscreen mode. However, that did not prove to be a consistent workaround. Sometimes, it seemed that closing a Firefox window did the trick, though that isn't consistently successful either. (I futzed with Firefox, because with the crazy number of windows and tabs that I normally have open, it regularly hogs about 1 GB of RAM.) Another bit of funkiness that comes to mind is the fact that my secondary monitor is considered the primary on boot-up. I use xrandr to designate the real 1st monitor as primary after boot-up, as suggested by someone in a question I asked on the Unix & Linux SE site.
    Specs:
    • Ubuntu 10.10 w/ Gnome and Compiz
    • 8GB RAM
    • AMD Phenom II 965 Black Edition
    • Asus M4A79 Deluxe mobo
    • XFX ATI Radeon HD 5750 w/ 1GB RAM
    • VLC is configured to use the hardware overlay for video (as per the default setting)
    Does anyone have an idea what may cause this issue or how I may go about troubleshooting it?
    Update: Right now I have 2 instances of VLC playing, each in fullscreen mode on a separate monitor. This is what I see:
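
    One quick check, given that VLC is set to use the hardware overlay: GPUs typically provide only a single overlay surface, so two fullscreen instances competing for it would explain one of them going black. A hedged way to rule that out from a terminal (option names are from the VLC 1.x help; confirm against vlc --longhelp on your build):

      # Play with the overlay disabled for this instance only; if the
      # black fullscreen goes away, the hardware overlay is the culprit.
      vlc --no-overlay some-video.avi

      # Or force a plain video output module instead of the default:
      vlc --vout x11 some-video.avi

    The same experiment can be made persistent via Tools > Preferences > Video by unchecking the accelerated/overlay output option.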


  • American Modern Insurance Group recognized at 2010 INN VIP Best Practices Awards

    - by [email protected]
    Below: Helen Pitts (right), Oracle Insurance, congratulates Bruce Weisgerber, Munich Re, as he accepts a VIP Best Practices Award on behalf of American Modern Insurance Group.
    Oracle Insurance Senior Product Marketing Manager Helen Pitts is attending the 2010 ACORD LOMA Insurance Forum this week at the Mandalay Bay Resort in Las Vegas, Nevada, and will be providing updates from the show floor.
    This is one of my favorite seasons of the year--insurance trade show season. It is a time to reconnect with peers, visit with partners, make new industry connections, and celebrate our customers' achievements. It's especially meaningful when we can share the experience of having one of our Oracle Insurance customers recognized for being an innovator in its business and in the industry. Congratulations to American Modern Insurance Group, part of the Munich Re Group. American Modern earned an Insurance Networking News (INN) 2010 VIP Best Practice Award yesterday evening during the 2010 ACORD LOMA Insurance Forum. The award recognizes an insurer's best practice for use of a specific technology and the role, if feasible, that ACORD data standards played as a part of their business and technology. American Modern received an Honorable Mention for leveraging the Oracle Documaker enterprise document automation solution to:
    • Improve the quality of communications with customers in high value, high-touch lines of business
    • Convert thousands of page elements or "forms" from their previous system, with near pixel-perfect accuracy
    • Increase efficiency and reusability by storing all document elements (fonts, logos, approved wording, etc.) in one place
    • Issue on-demand documents, such as address changes or policy transactions, to multiple recipients at once
    • Consolidate all customer communications onto a single platform
    • Gain the ability to send documents to multiple recipients at once, further improving efficiency
    • Empower agents to produce documents in real time via the Web, such as quotes, applications and policy documents, improving carrier-agent relationships
    Munich Re's Bruce Weisgerber accepted the award on behalf of American Modern from Lloyd Chumbly, vice president of standards at ACORD. In a press release issued after the ceremony Chumbly noted, "This award embodies a philosophy of efficiency--working smarter with standards, these insurers represent the 'best of the best' as chosen by a body of seasoned insurance industry professionals." We couldn't agree with you more, Lloyd. Congratulations again to American Modern on your continued innovation and success. You're definitely a VIP in our book! To learn more about how American Modern is putting its enterprise document automation strategy into practice, click here to read a case study. Helen Pitts is senior product marketing manager for Oracle Insurance.


  • gcc segmentation fault on Ubuntu 12.04

    - by Yuval F
    I am trying to compile a C program on Ubuntu precise 12.04. Here's the program:

      #include <stdio.h>

      int main(int argc, char** argv) {
          printf("Hello World!");
          return 0;
      }

    My gcc version is 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5). Initially it did not find cc1 so I added a soft link. Now I get this message when I try to compile:

      gcc: internal compiler error: Segmentation fault (program cc1)

    Compiling the same program with g++ works fine. I tried reinstalling build-essential, but to no avail. What am I missing?
    EDIT: I tried reinstalling according to @gertyvdijk's suggestion. As it did not help, here is the output of apt-cache policy gcc-4.6:

      gcc-4.6:
        Installed: 4.6.3-1ubuntu5
        Candidate: 4.6.3-1ubuntu5
        Version table:
       *** 4.6.3-1ubuntu5 0
              500 http://il.archive.ubuntu.com/ubuntu/ precise/main amd64 Packages
              100 /var/lib/dpkg/status

    and the output of ls -l /usr/bin/gcc:

      lrwxrwxrwx 1 root root 7 Mar 13 2012 /usr/bin/gcc -> gcc-4.6

    EDIT #2: here's a verbose compiler output:

      $ gcc -v aaa.c
      Using built-in specs.
      COLLECT_GCC=gcc
      COLLECT_LTO_WRAPPER=/usr/lib/gcc/x86_64-linux-gnu/4.6/lto-wrapper
      Target: x86_64-linux-gnu
      Configured with: ../src/configure -v --with-pkgversion='Ubuntu/Linaro 4.6.3-1ubuntu5' --with-bugurl=file:///usr/share/doc/gcc-4.6/README.Bugs --enable-languages=c,c++,fortran,objc,obj-c++ --prefix=/usr --program-suffix=-4.6 --enable-shared --enable-linker-build-id --with-system-zlib --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --with-gxx-include-dir=/usr/include/c++/4.6 --libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-gnu-unique-object --enable-plugin --enable-objc-gc --disable-werror --with-arch-32=i686 --with-tune=generic --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu
      Thread model: posix
      gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5)
      COLLECT_GCC_OPTIONS='-v' '-mtune=generic' '-march=x86-64'
       /usr/lib/gcc/x86_64-linux-gnu/4.6/cc1 -quiet -v -imultilib . -imultiarch x86_64-linux-gnu aaa.c -quiet -dumpbase aaa.c -mtune=generic -march=x86-64 -auxbase aaa -version -fstack-protector -o /tmp/ccHfcXMs.s
      gcc: internal compiler error: Segmentation fault (program cc1)
      Please submit a full bug report, with preprocessed source if appropriate.
      See <file:///usr/share/doc/gcc-4.6/README.Bugs> for instructions.
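
    Since the missing cc1 was "fixed" by hand with a soft link, the likeliest culprit is that the link points at an incompatible binary; on Ubuntu, cc1 ships in the cpp-4.6 package, not gcc-4.6. A hedged sketch of how to verify and undo that, assuming the manual symlink is indeed the problem:

      # Which cc1 is gcc actually invoking, and which package owns it?
      gcc -print-prog-name=cc1
      dpkg -S "$(gcc -print-prog-name=cc1)"

      # If it is the hand-made link, remove it and restore the real cc1:
      sudo rm /usr/lib/gcc/x86_64-linux-gnu/4.6/cc1
      sudo apt-get install --reinstall cpp-4.6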


  • Antenna Aligner Part 10: Updates and emails…

    - by Chris George
    Since my last post back in July, I’ve not done huge amounts of work on my app, for two reasons. Firstly, no time! Secondly, I wanted to leave it out in the wild for a while and see what happened. Well, what happened? Over 1,300 users, that’s what happened! This uptake is beyond my wildest expectations, and apart from a couple of issues that I’ll mention in a minute, most of the feedback has been very positive indeed! I’ve had several emails giving me feedback and reporting issues, all of which I have made a point of replying to immediately. This act alone has met with favourable replies! One of the main issues was with the iPad. It turns out that my app is only accurate in portrait mode; turning it to landscape offsets the direction by ±90 degrees! Whoops! I think I’ve fixed this by disabling the orientation switching, but I have not yet had an iPad to test this on. I had several emails from iPod Touch users claiming the app did not work for them. Specifically, the compass view did not work. On investigation, it turns out that the iPod Touch does not have the compass hardware required to do this. Unfortunately there is no way to exclude iPod Touches from the list of supported devices, so I’ve just had to make it very clear in the iTunes description that the device is not fully supported. You can still get the list of transmitters, but you then have to use a real compass to get the bearing. But that’s not the end of the world. Several customers have requested the aerial polarisation to be displayed in the app. I was already working on this, and the data was already there; it was just a case of displaying it in the UI. I have a solution now, and this will be in the next release. Of course, with the digital switchover in full swing across the UK, there has been one set of data updates (in 1.0.3), and another is due shortly. This reflects the transmitters as they switch over to digital fully and their power output increases. So all in all I’m very pleased with the feedback I’ve had, and I’m looking to get the next release out there by early December (allowing for the 2-3 week Apple approval lag!)


  • Who Are the BI Users in Your Neighborhood?

    - by Brian Dayton
    Forrester's Boris Evelson recently wrote a blog titled "Who are the BI Personas?" that I enjoyed for a number of reasons. It's a quick read, easy to grasp and (refreshingly) focuses on the users of technology vs. the technology. As Evelson admits, he meant to keep the reference chart at a high level because there are too many different permutations and additional sub-categories to make such a chart useful. For me, I wouldn't head into the technical permutations but more the contextual use of BI and the issues that users experience. My thoughts brought up more questions than answers, such as:
    Context:
    • HOW: With the exception of the "Power User" persona--likely some sort of business or operations analyst?
    • WHEN: Are they using the information to make real-time decisions on the front lines (a customer service manager or shipping/logistics VP) or are they using this information for cumulative analysis and business planning? Or both?
    • WHERE: What areas of the business are more or less likely to rely on BI across an organization? Human Resources, Operations, Facilities, Finance--and why are some more prone to use data-driven analysis than others?
    Issues:
    • DELAYS & DRAG ON IT?: One of the persona characteristics Evelson calls out is a reliance on IT. Every persona except for the "Power User" has a heavy reliance on IT for support. What business issues or delays does that cause for users? What is the drag on IT resources who could potentially be creating instead of reporting?
    • HOW MANY CLICKS: If BI is being used within the context of a transaction (a sales manager looking for upsell opportunities, for example) is that person getting the information within the context of that action or transaction? Or are they minimizing screens, logging into another application or reporting tool, running queries, etc.?
    Who are the BI users in your neighborhood or line of business? Do Evelson's personas resonate--and do the tools that he calls out (he refers to it as "BI Style") resonate with what your personas have or need? Finally, I'm very interested in whether BI use is viewed as a bolt-on...or an integrated part of your daily enterprise processes?


  • Should we exclude code for the code coverage analysis?

    - by romaintaz
    I'm working on several applications, mainly legacy ones. Currently, their code coverage is quite low: generally between 10 and 50%. For several weeks, we have had recurrent discussions with the Bangalore teams (the main part of the development is made offshore in India) regarding the exclusion of packages or classes from Cobertura (our code coverage tool, even if we are currently migrating to JaCoCo). Their point of view is the following: as they will not write any unit tests on some layers of the application (1), these layers should simply be excluded from the code coverage measure. In other words, they want to limit the code coverage measure to the code that is tested or should be tested. Also, when they work on unit tests for a complex class, the benefit - purely in terms of code coverage - goes unnoticed in a large application. Reducing the scope of the code coverage would make this kind of effort more visible. The interest of this approach is that we would have a code coverage measure that indicates the current status of the part of the application we consider testable. However, my point of view is that we are somehow faking the figures. This solution is an easy way to reach a higher level of code coverage without any effort. Another point that bothers me is the following: if we show a coverage increase from one week to another, how can we tell whether this good news is due to the good work of the developers, or simply due to new exclusions? In addition, we will not be able to know exactly what is considered in the code coverage measure. For example, if I have a 10,000-line application with 40% code coverage, I can deduce that 40% of my code base is covered by tests (2). But what happens if we set exclusions? If the code coverage is now 60%, what can I deduce exactly? That 60% of my "important" code base is tested? How can I compare figures between projects that use different exclusion rules? As far as I am concerned, I prefer to keep the "real" code coverage value, even if we can't be cheerful about it. In addition, thanks to Sonar, we can easily navigate our code base and know, for any module / package / class, its own code coverage. But of course, the global code coverage will still be low. What is your opinion on this subject? How do you handle this on your projects? Thanks.
    (1) These layers are generally related to the UI / Java beans, etc.
    (2) I know that's not true. In fact, it only means that 40% of my code base is executed when the tests run, which is not the same thing as being properly tested.
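
    For reference, the exclusions under discussion would look something like this in a cobertura-maven-plugin configuration; a minimal sketch, assuming Maven and using placeholder package names (com/example/ui, com/example/beans):

      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>cobertura-maven-plugin</artifactId>
        <configuration>
          <instrumentation>
            <excludes>
              <!-- Layers the team does not intend to unit test -->
              <exclude>com/example/ui/**/*.class</exclude>
              <exclude>com/example/beans/**/*.class</exclude>
            </excludes>
          </instrumentation>
        </configuration>
      </plugin>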


  • Freshen the RTS genre

    - by William Michael Thorley
    This isn't really a question, but a request for feedback. RPS (Rock, Paper, Scissors) RTS (Real Time Strategy) - the demo version is out.
    The game is simple. It is an RTS. Why has it been made? Many if not most RTSs are about economy and large numbers of unit types. The genre hasn't actually developed its gameplay drastically since the very first RTSs were produced; some lessons have been learned, but the games are really very similar to how they have always been. RPS brings new gameplay to the RTS genre through three means:
    • New combat mechanics: RPS has two unique modes (as well as the old favourite) of resolving weapon fire. These change how combat happens, and make application of the correct units vital to success. From this comes the requirement to run intel on your enemies.
    • Fixed resource economy: Each player has a fixed amount of energy. This means that there is a definite end to the game. You can try to wear your enemy down by attrition and outlast them, or try to outspend your opponent and destroy them. There is a limit to how fast ships can be built, through the generation of construction blocks, but energy is the hard limit on economy.
    • Game modes: Game modes add victory conditions and new game pieces. The game is overseen by a controller which literally runs the game. Games are no longer line-them-up, gun-them-down. This means that new tactics must be played, making skirmish games fresh with novel tactics without adding huge amounts of new game units to learn.
    I've produced RPS from the ground up. I will be running a Kickstarter in the near future, but right now I want feedback and input from the game developing community: regarding the concepts, where RPS is going, the game modes, the combat mechanics, how it plays. RPS will give fresh gameplay to the genre, so it must be right. It works over the internet or a LAN and supports single-player games. Get it. Play it. Tell me about your games. Thank you.
    Demo: https://dl.dropbox.com/u/51850113/RPS%20Playtest.zip
    Tutorials: https://dl.dropbox.com/u/51850113/RPSGamePlay.zip


  • SQL SERVER – Order By Numeric Values Formatted as String

    - by pinaldave
    When I was writing this blog post I had a hard time coming up with a title, so I did my best; read on to see why. I wrote an earlier blog post, SQL SERVER – Find First Non-Numeric Character from String, and one of the questions I received was how that post could be useful in a real-life scenario. This blog post is the answer to that question. Let us first see the problem. We have a table which has a column containing alphanumeric data. The data always begins with an integer and ends with a string. The business need is to order the data based on the first part of the alphanumeric data, which is an integer. The problem is that no matter how we use ORDER BY, the result is not produced as expected. Let us understand this with an example. Prepare the sample data:

      -- How to find first non-numeric character
      USE tempdb
      GO
      CREATE TABLE MyTable (ID INT, Col1 VARCHAR(100))
      GO
      INSERT INTO MyTable (ID, Col1)
      SELECT 1, '1one'
      UNION ALL
      SELECT 2, '11eleven'
      UNION ALL
      SELECT 3, '2two'
      UNION ALL
      SELECT 4, '22twentytwo'
      UNION ALL
      SELECT 5, '111oneeleven'
      GO

      -- Select Data
      SELECT * FROM MyTable
      GO

    The above query will give the following result set. Now let us use ORDER BY Col1 and observe the result along with the original SELECT:

      -- Select Data
      SELECT * FROM MyTable
      GO

      -- Select Data
      SELECT * FROM MyTable
      ORDER BY Col1
      GO

    The result is not as expected; we need the rows ordered by the integer prefix. Here is a good example of how we can use PATINDEX:

      -- Use of PATINDEX
      SELECT ID,
             LEFT(Col1, PATINDEX('%[^0-9]%', Col1) - 1) 'Numeric Character',
             Col1 'Original Character'
      FROM MyTable
      ORDER BY LEFT(Col1, PATINDEX('%[^0-9]%', Col1) - 1)
      GO

    We can use PATINDEX to identify the length of the digit part in the alphanumeric string (remember: our string always has an integer as its first part; it will not work in any other scenario). Now you can use the LEFT function to extract the integer portion from the alphanumeric string and order the data according to it. You can easily clean up the script by dropping the table:

      DROP TABLE MyTable
      GO

    Here is the complete script so you can easily refer to it:

      -- How to find first non-numeric character
      USE tempdb
      GO
      CREATE TABLE MyTable (ID INT, Col1 VARCHAR(100))
      GO
      INSERT INTO MyTable (ID, Col1)
      SELECT 1, '1one'
      UNION ALL
      SELECT 2, '11eleven'
      UNION ALL
      SELECT 3, '2two'
      UNION ALL
      SELECT 4, '22twentytwo'
      UNION ALL
      SELECT 5, '111oneeleven'
      GO
      -- Select Data
      SELECT * FROM MyTable
      GO
      -- Select Data
      SELECT * FROM MyTable
      ORDER BY Col1
      GO
      -- Use of PATINDEX
      SELECT ID,
             LEFT(Col1, PATINDEX('%[^0-9]%', Col1) - 1) 'Numeric Character',
             Col1 'Original Character'
      FROM MyTable
      ORDER BY LEFT(Col1, PATINDEX('%[^0-9]%', Col1) - 1)
      GO
      DROP TABLE MyTable
      GO

    Well, isn't it an interesting solution? Any suggestions for a better solution? Additionally, any suggestions for changing the title of this blog post?
    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL String, SQL Tips and Tricks, T SQL, Technology
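
    One small suggestion on the solution itself, offered as a sketch rather than a correction: LEFT() returns a string, so the ORDER BY above sorts the prefixes lexicographically ('111' sorts before '2'). If true numeric order (1, 2, 11, 22, 111) is the requirement, casting the extracted prefix makes that explicit:

      -- Order by the integer VALUE of the leading digits
      -- (assumes, as the post states, that every row starts with digits
      -- and contains at least one non-digit afterwards)
      SELECT ID,
             CAST(LEFT(Col1, PATINDEX('%[^0-9]%', Col1) - 1) AS INT) 'Numeric Value',
             Col1 'Original Character'
      FROM MyTable
      ORDER BY CAST(LEFT(Col1, PATINDEX('%[^0-9]%', Col1) - 1) AS INT)
      GO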


  • SQLAuthority News – A Quick Note on @Pluralsight Video – Call Me Maybe Developer Way

    - by pinaldave
    I write a lot about how important learning and training are.  Any of my readers will know that I think the key to success is staying current with your education and taking every opportunity to increase your “tool kit” of skills.  I hope that I have not given the impression that it is all in the employees’ hands to make sure they are happy and satisfied at their jobs. I also firmly believe that a good boss makes good employees.  A boss who is good at communicating and leading, who knows how to nip problems in the bud and allocate resources wisely, will have a well-oiled machine.  This means happy employees and a great work environment. It is important to have a healthy work environment because you will not succeed without one.  Successful businesses will always have the type of environment that fosters creativity and has efficient employees.  A healthy environment doesn’t force employees to produce results, but allows them to progress and create the results themselves. The result of a healthy work environment is that employees will enjoy their work and then work harder.  This can bring the company more revenue, and hopefully the employees will see the result of their hard work in bonuses and raises.  However, while money is important, it is certainly secondary – the important part is the dedication of the employees to their work and to their company.  This is the true key to success. Any employee who recognizes this description as their working environment should consider themselves fortunate.  They are allowed to grow and do better, and employees being treated fairly can be a rarity in this world.  One company that I believe adheres to this principle is Pluralsight – as evidenced by this fun video. I have blogged about it earlier. (Check out my cameo at 0:37.) It was great fun to work with the employees at Pluralsight while making this video.  They are a great bunch and clearly have a great work environment – we wouldn’t have had this much fun otherwise!  I have to tell you a little bit about making this video.  My wife shot it with her mobile phone, which was certainly a different but exciting experience!  It was hard to get the look of the video right, since I was trying to portray a body builder – this was a little outside my own personal experience.  I have what I like to call a “healthy” body type, so trying to look extremely fit like some of the other “actors” in this video was a challenge – but I do hope that you all think I succeeded.  All in all, it was great fun to participate in this video and I hope to see my friends at Pluralsight again soon. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology, Video


  • Introduction to Lean Software Development and Kanban Systems

    - by Ben Griswold
    Last year I took myself through a crash course on Lean Software Development and Kanban Systems in preparation for an in-house presentation.  I learned a bunch.  In this series, I’ll be sharing what I learned with you.
    If your career looks anything like mine, you have probably been affiliated with a company or two which pushed requirements gathering and documentation to the nth degree. To add insult to injury, they probably added planning process (documentation, requirements, policies, meetings, committees) to the extent that it possibly retarded any progress. In my opinion, the typical company resembles this quote from Tom DeMarco: "It isn’t enough just to do things right – we also had to say in advance exactly what we intended to do and then do exactly that."
    In the 1980s, Toyota turned the tables and revolutionized the automobile industry with their approach of “Lean Manufacturing.” A massive paradigm shift hit factories throughout the US and Europe. Mass production and scientific management techniques from the early 1900s were questioned as Japanese manufacturing companies demonstrated that ‘Just-in-Time’ was a better paradigm. The widely adopted Japanese manufacturing concepts came to be known as ‘lean production’. Lean thinking capitalizes on the intelligence of frontline workers, believing that they are the ones who should determine and continually improve the way they do their jobs. Lean puts its main focus on people and communication – if the people who produce the software are respected and they communicate efficiently, it is more likely that they will deliver a good product and the final customer will be satisfied. In time, the abstractions behind lean production spread to logistics, and from there to the military, to construction, and to the service industry. As it turns out, principles of lean thinking are universal and have been applied successfully across many disciplines. Lean has been adopted by companies including Dell, FedEx, Lens Crafters, LLBean, SW Airlines, Digital River and eBay.
    Lean thinking got its name from a 1990s best seller called The Machine That Changed the World: The Story of Lean Production. This book chronicles the movement of automobile manufacturing from craft production to mass production to lean production. And the pair who carried lean thinking into software development? Tom and Mary Poppendieck, that is.  Here’s one of their books: Implementing Lean Software Development: From Concept to Cash.
    Our in-house presentations are supposed to run no more than 45 minutes.  I really cranked and got through my 87 slides in just under an hour. Of course, I had to cheat a little – I only covered the 7 principles and a single practice. In the next part of the series, we’ll dive into Principle #1: Eliminate Waste. And I am going to be a little obnoxious about listing my Lean and Kanban references with every series post.  The references are great and they deserve this sort of attention.


  • Odd Profiler Results with EF4

    - by AjarnMark
    I have been doing some testing of using the Microsoft Entity Framework 4 with stored procedures and ran across some really odd results in SQL Server Profiler. The application that is running, which uses Entity Framework 4, is a simple web application written in C#, and the Entity Data Model is actually contained in a referenced class library of its own.  I’ll write more about my experiences with this later.  For now the question is: why does SQL Profiler think that the stored procedure is running in Master, and not in my application database? While analyzing the effects of using custom helper methods on my EDM classes to call the stored procedure, I decided to run Profiler while I stepped through the code so that I had a clear understanding of exactly when and what calls were made to the SQL Server.  I ran Profiler switching back and forth between the TSQL and TSQL_SP templates.  However, to reduce the number of result rows I needed to wade through, I set a filter on DatabaseID to be equal to my application’s database.  Each time I ran this, the only thing that I saw was an Audit:Login to the database, but no procedure or T-SQL statements executed, yet I was definitely getting results back to my web page.  I tried other Profiler templates, still filtering on DatabaseID (tangent: I found, at least back in SQL 2000 Profiler, that filtering on DatabaseID was more reliable than filtering on DatabaseName.  Even though I’m now running SQL 2008, that habit sticks with me).  Still no results other than the Login.  Very weird! Finally, I decided to run Profiler with no filtering and discovered that the lines which represent my stored procedure and its T-SQL commands are all marked with DatabaseID = 1, which is Master.  Why in the world would that be?  My procedure is definitely in the application database, and not in Master, and there is nothing funny about the call to the procedure evident in Profiler (i.e. it is not called as MyAppDB.dbo.MyProcName, but rather just dbo.MyProcName).  There must be something funny with the way the Entity Framework is wrapping this call, and I don’t like it…I don’t like it one bit.  My primary PROD server contains 40+ databases, and when I need to profile something, I expect to be able to filter based on DatabaseID (for the record, I displayed DatabaseName in my results, too, and it also shows Master). I find the same pattern of everything except the Login showing up as being in Master when I run my version that uses standard LINQ to Entities instead of stored procedures, so that suggests it is not my code, but rather something funny with SQL Server 2008 Profiler or the Entity Framework. If you have any ideas about why this might be so, please comment below.
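
    A hedged workaround, until the DatabaseID mystery is solved: tag the application's connections with an explicit Application Name and filter Profiler on the ApplicationName column instead of DatabaseID. A connection-string sketch (the entity and model names here are placeholders, not the names from my actual project):

      <connectionStrings>
        <!-- "Application Name" is passed to SQL Server at login and
             shows up in Profiler's ApplicationName column, whatever
             DatabaseID the trace rows claim. -->
        <add name="MyAppEntities"
             connectionString="metadata=res://*/MyModel.csdl|res://*/MyModel.ssdl|res://*/MyModel.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=.;Initial Catalog=MyAppDB;Integrated Security=True;Application Name=MyWebApp&quot;"
             providerName="System.Data.EntityClient" />
      </connectionStrings>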


  • ASP.Net 4.5 Garbage Collection Improvement

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2013/06/24/asp.net-4.5-garbage-collection-improvement.aspx
    I just read Five Great .NET Framework 4.5 Features on CodeProject by Shivprasad Koirala. Feature 5 in his article mentions the GC background cleanup and has a good explanation of the work the GC has to do for ASP.Net on the server. “Garbage collector is one real heavy task in a .NET application. And it becomes heavier when it is an ASP.NET application. ASP.NET applications run on the server and a lot of clients send requests to the server thus creating loads of objects, making the GC really work hard for cleaning up unwanted objects.” “To overcome the above problem, server GC was introduced. In server GC there is one more thread created which runs in the background. This thread works in the background and keeps cleaning…objects thus minimizing the load on the main GC thread. Due to double GC threads running, the main application threads are less suspended, thus increasing application throughput. To enable server GC, we need to use the gcServer XML tag and enable it to true.”

      <configuration>
        <runtime>
          <gcServer enabled="true"/>
        </runtime>
      </configuration>

    This is not done by default. The MSDN information page says: “There are only two garbage collection options, workstation or server. For single-processor computers, the default workstation garbage collection should be the fastest option. Either workstation or server can be used for two-processor computers. Server garbage collection should be the fastest option for more than two processors. Use the GCSettings.IsServerGC property to determine if server garbage collection is enabled.” “In the .NET Framework 4 and earlier versions, concurrent garbage collection is not available when server garbage collection is enabled. Starting with the .NET Framework 4.5, server garbage collection is concurrent. To use non-concurrent server garbage collection, set the <gcServer> element to true and the <gcConcurrent> element to false.”
    So if you’re using ASP.Net 4.5 and have a multi-core server, you should try turning on server garbage collection and do some profiling to see if it improves the performance of your site.
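
    A quick way to confirm the setting actually took effect at runtime is the GCSettings property the MSDN page mentions. A minimal console sketch:

      using System;
      using System.Runtime;

      class GcModeCheck
      {
          static void Main()
          {
              // True when the runtime picked up <gcServer enabled="true"/>
              // (or the hosting environment defaulted to server GC).
              Console.WriteLine("Server GC enabled: " + GCSettings.IsServerGC);
          }
      }

    In an ASP.Net application you could log the same property from Application_Start to verify what the worker process is really using.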


  • At the Java DEMOgrounds - Oracle’s Java Embedded Suite 7.0

    - by Janice J. Heiss
    The Java Embedded Suite 7.0, a new, packaged offering that facilitates the creation of applications across a wide range of embedded systems including network appliances, healthcare devices, home gateways, and routers, was demonstrated by Oleg Kostukovsky of Oracle’s Java Embedded Global Business Unit. He presented a device-to-cloud application that relied upon a scan station connected to Java Demos throughout JavaOne. This application allows an NFC tag distributed on a handout given to attendees to be scanned to gather various kinds of data. “A raffle allows attendees to check in at six unique demos and qualify for a prize,” explained Kostukovsky. “At the same time, we are collecting data both from NFC tags and sensors. We have a sensor attached to the back of the skin page that collects temperature, humidity, light intensity, and motion data at each pod. So, all of this data is collected using an application running on a small device behind the scan station.” “Analytics are performed on the network using Java Embedded Suite and technology from Oracle partners, SeeControl, Hitachi, and Globalscale,” Kostukovsky said. Next, he showed me a data visualization web site showing sensory, environmental, and scan data that is collected on the device and pushed into the cloud. The Oracle product that enabled all of this, Java Embedded Suite 7.0, was announced in late September. “You can see all kinds of data coming from the stations in real time -- temperature, power consumption, light intensity and humidity,” explained Kostukovsky. “We can identify trends and look at sensory data and see all the trends of all the components. It uses a Java application written by a partner, SeeControl. So we are using a Java app server and web server and a database.”
    The Market for Java Embedded Suite 7.0
    “It's mainly geared to machine-to-machine applications because the overall architecture applies across multiple industries – telematics, transportation, industrial automation, smart metering, etc. This architecture is one in which the network connects to sensory devices and then pre-analyzes the data from these devices, after which it pushes the data to the cloud for processing and visualization. So we are targeting all those industries with those combined solutions. There is a strong interest from telcos, from carriers, who are now moving more and more into the space of providing full services for their M2M applications. They are looking to deploy solutions that will provide a full service to those who are building M-to-M applications.”


  • Oracle Launches New Oracle Database 12c Administrator Certifications

    - by Brandye Barrington
    Today Oracle University announces the release of new Oracle Database 12c Administrator certifications. The new Oracle Database 12c certifications emphasize the foundational and advanced skills needed by Database Administrators and will prepare DBAs to leverage powerful new management and consolidation capabilities, resulting in an even more valuable credential for customers and partners.
    ORACLE CERTIFIED ASSOCIATE (OCA)
    The Oracle Certified Associate (OCA) for Oracle Database 12c objectives measure IT professionals' mastery of day-to-day administration skills and their ability to manage the challenges they're likely to encounter on the job. This credential focuses on SQL skills, operational administration of the Oracle Database including performance and space management, and installing, patching and upgrading the Oracle Database. Earning the OCA credential requires successful completion of two exams: 1Z0-061 - Oracle Database 12c: SQL Fundamentals and 1Z0-062 - Oracle Database 12c: Installation and Administration. The OCA certification track also allows for several alternate exams which can be substituted for 1Z0-061.
    ORACLE CERTIFIED PROFESSIONAL (OCP)
    Building on the competencies in the Oracle Database 12c OCA certification, the Oracle Certified Professional (OCP) for Oracle Database 12c certification includes advanced knowledge and skills required of top-performing database administrators. The OCP credential focuses on developing and implementing backup and recovery strategies, designing consolidation strategies to exploit multitenant container and pluggable databases, and thorough understanding of how CDB/PDBs fit into the DBaaS cloud-computing model. Today, Oracle is releasing 1Z0-060 - Upgrade to Oracle Database 12c, which allows Oracle Certified Professionals with credentials in Oracle 9i, Oracle Database 10g or Oracle Database 11g to upgrade to Oracle Database 12c with a single exam. The upgrade exam focuses on designing consolidation strategies to exploit multitenant container and pluggable databases, implementing Oracle 12c feature-rich ILM support, optimizing SQL execution using dynamic swapping of sub-plans, implementing real-time data redaction within databases, as well as exploiting many additional performance, backup and recovery, security and partitioning enhancements. The exam also includes a thorough review of core DBA skills. Visit the OCP certification track for more details on the new upgrade exam as well as alternate certification paths.
    ORACLE CERTIFIED MASTER (OCM)
    The Oracle Certified Master (OCM) for Oracle Database 12c - a very challenging and elite top-level certification - certifies the most highly skilled and experienced database experts. Further information on the 12c OCM level will be announced as exam development concludes.
    To date, there have been more than 1.6 million Oracle certifications granted worldwide. Explore these certification tracks, exam requirements and objectives, and start toward earning your exciting new Oracle Database 12c certification credentials from Oracle.


  • Event Processed

    - by Antony Reynolds
    Installing Oracle Event Processing 11g
    Earlier this month I was involved in organizing the Monument Family History Day.  It was certainly a complex event, with dozens of presenters, guides and 100s of visitors.  So with that experience of a complex event under my belt I decided to refresh my acquaintance with Oracle Event Processing (CEP). CEP has a developer side based on Eclipse and a runtime environment.
    Developer Install
    The developer install requires several steps (documentation):
    1. Download required software: Eclipse (Linux) – it is recommended to use version 3.6.2 (Helios).
    2. Install Eclipse: unzip the download into the desired directory.
    3. Start Eclipse.
    4. Add the Oracle CEP repository in Eclipse: http://download.oracle.com/technology/software/cep-ide/11/
    5. Install Oracle CEP Tools for Eclipse 3.6. You may need to set the proxy if behind a firewall.
    6. Modify eclipse.ini (if using Windows, edit with WordPad rather than Notepad):
       • Point to a 1.6 JVM by inserting the following lines before -vmargs:
         -vm
         \PATH_TO_1.6_JDK\jre\bin\javaw.exe
       • Increase PermGen memory by inserting the following line at the end of the file:
         -XX:MaxPermSize=256M
    7. Restart Eclipse and verify that everything is installed as expected.
    Server Install
    The server install is very straightforward (documentation).  It is recommended to use the JRockit JDK with CEP, so the steps to set up a working CEP server environment are:
    1. Download required software: JRockit – I used Oracle “JRockit 6 - R28.2.5”, which includes “JRockit Mission Control 4.1” and “JRockit Real Time 4.1” – and the Oracle Event Processor – I used “Complex Event Processing Release 11gR1 (11.1.1.6.0)”.
    2. Install JRockit: run the JRockit installer; the download is an executable binary that just needs to be marked as executable.
    3. Install CEP: unzip the downloaded file, then run the CEP installer; the unzipped file is an executable binary that may need to be marked as executable. Choose a custom install and add the examples if needed. It is not recommended to add the examples to a production environment, but they can be helpful in development.
    Voila, The Deed Is Done
    With CEP installed you are now ready to start a server; if you didn’t install the demos then you will need to create a domain before starting the server. Once the server is up and running (using startwlevs.sh) you can verify that the visualizer is available on http://hostname:port/wlevs; the default port for the demo domain is 9002. With the server running you can test the IDE by creating a new “Oracle CEP Application Project” and creating a new target environment pointing at your CEP installation. Much easier than organizing a Family History Day!
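
    For illustration, a minimal sketch of what eclipse.ini might look like after both edits in step 6 (the JDK path is a placeholder for your own 1.6 install, the -Xms/-Xmx values are the usual Helios defaults, and -vm with its path must sit on separate lines before -vmargs):

      -vm
      C:\Java\jdk1.6.0_45\jre\bin\javaw.exe
      -vmargs
      -Xms40m
      -Xmx384m
      -XX:MaxPermSize=256M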


  • Develop DBA skills with MySQL for Database Administrators course

    - by Antoinette O'Sullivan
    MySQL is the world's number one open source database and the number one database for the Web. Join top companies by developing your MySQL Database Administrator skills. The MySQL for Database Administrators course is for DBAs and other database professionals who want to install the MySQL Server, set up replication and security, perform database backups and performance tuning, and protect MySQL databases. You can take this 5-day course as:
    • Training on Demand: start training within 24 hours of registration. You will follow the lecture material via streaming video and perform hands-on activities at a date and time that suits you.
    • Live-Virtual Event: take this instructor-led course from your own desk. Choose from the 19 events currently on the schedule and find an event that suits you in terms of timezone and date.
    • In-Class Event: travel to an education center. Here is a sample of events on the schedule:
      Location                      Date              Delivery Language
      Mechelen, Belgium             25 February 2013  English
      London, England               26 November 2012  English
      Nice, France                  3 December 2012   French
      Paris, France                 11 February 2013  French
      Budapest, Hungary             26 November 2012  Hungarian
      Belfast, Ireland              24 June 2013      English
      Milan, Italy                  14 January 2013   Italian
      Rome, Italy                   18 February 2013  Italian
      Amsterdam, Netherlands        24 June 2013      Dutch
      Nieuwegein, Netherlands       8 April 2013      Dutch
      Warsaw, Poland                10 December 2012  Polish
      Lisbon, Portugal              21 January 2013   European Portuguese
      Porto, Portugal               21 January 2013   European Portuguese
      Barcelona, Spain              4 February 2013   Spanish
      Madrid, Spain                 21 January 2013   Spanish
      Nairobi, Kenya                26 November 2012  English
      Johannesburg, South Africa    9 December 2013   English
      Tokyo, Japan                  10 December 2012  Japanese
      Singapore                     28 January 2013   English
      Brisbane, Australia           10 December 2012  English
      Edmonton, Canada              7 January 2013    English
      Montreal, Canada              28 January 2013   English
      Ottawa, Canada                28 January 2013   English
      Toronto, Canada               28 January 2013   English
      Vancouver, Canada             7 January 2013    English
      Mexico City, Mexico           10 December 2012  Spanish
      Sao Paulo, Brazil             10 December 2012  Brazilian Portuguese
    For more information on this course or on other courses on the authentic MySQL curriculum, go to http://oracle.com/education/mysql. Note, many organizations deploy both Oracle Database and MySQL side by side to serve different needs, and as a database professional you can find training courses on both topics at Oracle University! Check out the upcoming Oracle Database training courses and MySQL training courses. Even if you're only managing Oracle Databases at this point in time, getting familiar with MySQL will broaden your career path with growing job demand.


  • TechEd 2012: Day 3 – Morning TFS

    - by Tim Murphy
    My morning sessions for day three were dominated by Team Foundation Server.  This has been a hot topic for our clients lately, so it really struck a chord. The speaker for the first session was from Boeing.  It was nice to hear how a company mixes both agile and waterfall project management.   The approaches that he presented were very pragmatic.  For their needs, reporting is the crucial part of their decision to use TFS.  This was interesting since this is probably the last aspect that most shops would think about. The challenge of getting users to adopt TFS was brought up by the audience.  As with the other discussion points he took a very level-headed stance.  The approach he was prescribing was to eat the elephant a bite at a time instead of all at once.  If you try to convert your entire shop at once, the culture shock will most likely kill the effort. Another key point he reminded us of is that you need to make sure that standards and compliance are taken into account when you set up TFS.  If you don’t implement a tool and the processes around it that comply with the standards bodies that govern your business, you are in for a world of hurt. Ultimately the reason they chose TFS was that it was the first tool that incorporated all the ALM features they needed, with reduced licensing cost compared to all the different tools they would otherwise have needed to buy to complete the same tasks.  They got to this point by doing an industry evaluation.  Although TFS came out on top, he said that it still has a big gap in the Java area.  Of course in this market there are vendors helping to close that gap.
    The second session was on how continuous feedback in agile is a new focus in VS2012.  The problems they intended to address included cycle time, average time to repair, and root cause analysis. The speakers fired features at us as if they were firing a machine gun.  I will just say that I am looking forward to digging into the product after seeing this presentation.  Beyond that I will simply list some of the key features that caught my attention:
    • Ability to link documents into tasks as artifacts
    • Web access portal
    • PowerPoint storyboards
    • Exploratory testing
    • Request feedback (allows users to record notes, screen shots and video/audio)
    See you after the second half.
    del.icio.us Tags: TechEd,TechEd 2012,TFS,Team Foundation Server


  • Walking to the North Pole to raise money to protect children from cruelty.

    - by jessica.ebbelaar
    Hi, my name is Luca. I joined Oracle in 2005 and I am currently working as a Dell EMEA Channel Manager for the UK, Ireland and Iberia, responsible for the Oracle-Dell relationship in those three countries. On the 31st of March 2011 I will set out to complete the ultimate challenge: I will walk and ski across the frozen Arctic to the top of the world, the GEOGRAPHIC North Pole, dragging all my supplies over 60 nautical miles of moving sea ice, in temperatures as low as minus 30 degrees Celsius. I will spend 8 to 10 days preparing, working, living and travelling to the North Pole at 90 degrees north. In November I spent a full week training for this trip (watch my video). This gave me the opportunity to meet the rest of the team, test all the gear and carry an 18-inch tyre around the countryside for 8 hours per day. I am honored to embark on this challenging journey in support of the National Society for the Prevention of Cruelty to Children (NSPCC). The NSPCC has helped more than 750,000 young people to speak out for the first time about abuse they had suffered. I am a firm believer that in order to build a stronger, healthier and wiser society we need to support and help future generations from the beginning of their life journey. This is why cruelty to children must stop. FULL STOP.
    Through Virgin Money Giving, you can sponsor me and donations will be quickly processed and passed to the NSPCC. Virgin Money Giving is a non-profit organization and will claim Gift Aid on a charity's behalf where the donor is eligible. If you are a UK tax payer please don't forget to select Gift Aid. Gift Aid is great because it means charities get extra money added to their donations at no extra cost to the donor. For every £1 donated, the charity currently receives £1.28 when you add Gift Aid. Anyone who would like to find out more can visit my Facebook page ‘Luca North Pole charity fundraising trip’. I really appreciate all your support and thank you for supporting the NSPCC.
    Technorati Tags: Channel Manager, challenge, Arctic, North Pole, NSPCC, cruelty to children, Luca North Pole charity fundraising trip.
    If you have any questions related to this article contact [email protected].


  • Mini Book Review of IronRuby Unleashed by Shay Friedman

    - by Eric Nelson
    When I get some time (and hell starts to look a little chilly) I would love to do a more detailed review. But I wanted to get something “out there” as I really like this book and reviews of it seem a little thin on the ground. In brief:
    • Is it a good book? Yes
    • Would I recommend this book to a .NET developer who was new to Ruby? Yes (this is me, by the way)
    • Would I recommend this book to a Ruby developer who was new to .NET? Yes
    • Would I recommend this book to a developer who sometimes does Ruby and sometimes does .NET? Yes
    • Would I recommend this book to a developer new to .NET and new to Ruby? Yes
    The above demonstrates how well balanced this book is (IMHO). What I like about it:
    • It assumes pretty much no knowledge of IronRuby or .NET. All it asks is that you are a developer interested in IronRuby. Yet it manages to cover the topics in a good degree of detail.
    • If you are a Ruby developer you skip Part 2; if you are a .NET developer you skip some of Part 1 and whizz through the short intros to the individual technologies such as WPF.
    • It is definitely not a “let's make the manual look pretty” book – this is original content, thoughtfully written and presented.
    • It is pretty comprehensive – in 500 pages it packs in:
      • Intro to IronRuby
      • Intro to .NET
      • Intro to Ruby
      • Using IronRuby with Windows Forms, ASP.NET, WPF, Silverlight, etc.
      • Getting Rails working with IronRuby
      • Unit testing with IronRuby – which I think is an excellent way for a .NET developer to start using IronRuby
      • Embedding IronRuby in a .NET app – another interesting “first step” for a .NET developer
    What I didn’t like:
    Err… nothing yet. OK, if I am being picky then the start of chapter 2 irked me a little as it went through the history of .NET. “The first version [of the .NET Framework] wasn’t that great”.  Felt pretty good to me compared to Java and C++ development at the time :-)
    Buy on Amazon UK | Buy on Amazon USA
    Related Links:
    • Posts from the author Shay Friedman on IronRuby
    • Guest Post: What's IronRuby, and how do I put it on Rails?
    • Guest Post: Using IronRuby and .NET to produce the ‘Hello World of WPF’
    • Getting PhP and Ruby working on Windows Azure and SQL Azure

    Read the article

  • Executable Resumes

    - by Liam McLennan
    Over the past twelve months I have been thinking a lot about executable specifications. Long considered the holy grail of agile software development, executable specifications mean expressing a program’s functionality in a way that is both readable by the customer and computer-verifiable in an automatic, repeatable way. With the current generation of BDD and ATDD tools, executable specifications seem finally within the reach of a significant percentage of the development community. Lately, and partly as a result of my craftsmanship tour, I have decided that soon I am going to have to get a job (gasp!). As Dave Hoover describes in Apprenticeship Patterns, “you … have mentors and kindred spirits that you meet with periodically, [but] when it comes to developing software, you work alone.” The time may have come where the only way for me to feel satisfied and enriched by my work is to seek out a work environment where I can work with people smarter and more knowledgeable than myself. Having been on both sides of the interview desk many times, I know how difficult and unreliable the process can be. Therefore, I am proposing the idea of executable resumes. As a journeyman programmer looking for a fruitful work environment, I plan to write an application that demonstrates my understanding of the state of the art. Potential employers can download, view and execute my executable resume and judge whether my aesthetic sensibility matches their own. The concept of the executable resume is based upon the following assertion: a line of code answers a thousand interview questions. Asking people about their experiences and skills is not a direct way of assessing their value to your organisation; often it simply assesses their ability to mislead an interviewer. An executable resume demonstrates: the highest quality code that the person is able to produce; that the person is sufficiently motivated to produce something of value in their own time; and that the person loves their craft. The idea of publishing a program to demonstrate a developer’s skills comes from Rob Conery, who suggested that each developer should build their own blog engine since it is the public representation of their level of mastery. Rob said: “Luke had to build his own lightsaber – geeks should have to build their own blogs. And that should be their resume.” In honour of Rob’s inspiration I plan to build a blog engine as my executable resume. While it is true that the world does not need another blog engine, it is as good a project as any, it is a well understood domain, and I have not found an existing blog engine that I like. Executable resumes fit well with the software craftsmanship metaphor. It is not difficult to imagine that under the guild system master craftsmen may have accepted journeymen based on the quality of the work they had produced in the past. We now understand that when it comes to the functionality of an application, code is the final arbiter. Why not apply the same rule to hiring?
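    To make “readable by the customer, verifiable by the computer” concrete, here is a minimal, entirely hypothetical Cucumber specification for such a blog engine (Cucumber being one of the BDD tools alluded to above). The feature text is plain language a customer can read; the step definitions below it make it executable. Post and Blog are invented names, not part of any real project:

```gherkin
Feature: Publishing a post
  Scenario: A draft becomes visible once published
    Given a draft post titled "Executable Resumes"
    When the author publishes it
    Then visitors can read "Executable Resumes" on the blog
```

```ruby
# step_definitions/publishing_steps.rb -- hypothetical step definitions
Given /^a draft post titled "([^"]*)"$/ do |title|
  @post = Post.create(title: title, draft: true)
end

When /^the author publishes it$/ do
  @post.publish!
end

Then /^visitors can read "([^"]*)" on the blog$/ do |title|
  expect(Blog.front_page.titles).to include(title)
end
```

    The same scenario file doubles as documentation and as a regression test – which is exactly the property that makes it a plausible resume.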

    Read the article

  • SQL SERVER – Preserve Leading Zero While Copying to Excel from SSMS

    - by pinaldave
    Earlier I wrote two articles about how to efficiently copy data from SSMS to Excel. Since I wrote those posts there has been plenty of interest in this subject, and there are a few questions I keep getting about it. One of them is how to preserve leading zeros while copying data from SSMS to Excel. Well, it is almost the same as in my earlier post SQL SERVER – Excel Losing Decimal Values When Value Pasted from SSMS ResultSet. The key here is in Excel and not in SQL Server. The step is to change the format of the Excel cells from Number to Text, and that will preserve leading or trailing zeros in Excel. However, I assume this is done for display purposes only, because once you convert a column to Text you may find it difficult to do numeric operations over it, for example aggregation or averages. If you need those, you should either convert the columns back to Numeric in Excel or do the calculations in the database and export the computed values along with the data. That said, I have seen real-world requirements where the user has to have a numeric value with leading zeros for display purposes. Here is my suggestion: instead of manipulating the numeric value in the database and converting it to a character value, the ideal thing is to store it as a numeric value only in the database. Whatever changes you want for display purposes should be handled at display time, using the formatting functions of SQL or the application language. Honestly, the database is the data layer and presentation is the presentation layer – they are two different things and, if possible, they should not be mixed. If for any reason you cannot follow the above advice and you need to append leading zeros in the database itself, here are two of my previous articles that I suggest you refer to. I am open to learning new tricks, as these articles are almost three years old; please share your opinions and suggestions in the comments area. SQL SERVER – Pad Ride Side of Number with 0 – Fixed Width Number Display | SQL SERVER – UDF – Pad Ride Side of Number with 0 – Fixed Width Number Display. Reference: Pinal Dave (http://blog.SQLAuthority.com)
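    If you do need the padding in SQL itself, here is a minimal sketch of the usual technique; the variable name and fixed width of 6 are made up for illustration:

```sql
-- Store the value as a number; pad only for display.
DECLARE @AccountNo INT;
SET @AccountNo = 482;

-- Fixed width of 6: prepend zeros, then keep the rightmost 6 characters.
-- Works on SQL Server 2005 and later.
SELECT RIGHT(REPLICATE('0', 6) + CAST(@AccountNo AS VARCHAR(6)), 6) AS DisplayNo;
-- Result: 000482

-- On SQL Server 2012 and later, FORMAT does the same in one call:
-- SELECT FORMAT(@AccountNo, '000000');
```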

    Read the article

  • View and Flip Between Firefox Tabs in 3D

    - by Asian Angel
    Are you tired of the default tab-switching style in Firefox? Then get ready to enjoy a more visually pleasing 3D experience with the FoxTab extension. Using FoxTab: As soon as you have the extension installed, you will see a new toolbar button available beside the address bar. Before going further you may want to look through the viewing styles available in the lower right corner. Note: You can choose to have the FoxTab button appear in the status bar if preferred, or use the keyboard (i.e. F12) by itself to launch FoxTab. The grid view offers an angled 3D setting, while the page flow view gives a more frontal look. If the default background color is not to your liking, you can easily change to a new color or insert a background image. After choosing a new background color, making a few adjustments in the options, and opening more tabs, things look very nice using the grid viewing style, followed by the carousel viewing style, and finally the wall viewing style. You can also set up a top sites page using your favorite viewing style. To add a page to the top sites group, right-click within the webpage and select Add To Top Sites. Just like that, your new selection is added. Keep in mind that we were not able to move/switch positions in the grid during our tests. Options: The extension has plenty of options and settings to help you customize FoxTab to your liking. Conclusion: FoxTab adds visually pleasing 3D tab switching to Firefox for anyone who loves eye candy and a touch of fun while browsing. Links: Download the FoxTab extension (Mozilla Add-ons) | Visit the FoxTab Homepage

    Read the article

< Previous Page | 494 495 496 497 498 499 500 501 502 503 504 505  | Next Page >