Search Results

Search found 47392 results on 1896 pages for 'full text indexing'.


  • Times they are a changing…

    - by Jonathan Kehayias
    If you follow me on twitter ( @SQLSarg ), you already know that this has been a week of big announcements for me. Wednesday afternoon Paul Randal ( Blog | Twitter ) announced that I joined SQLskills.com as a full-time employee, and Thursday afternoon, Joe Sack ( Blog | Twitter ) announced that I passed the Microsoft Certified Master for SQL Server 2008. As a part of my transition to working for SQLskills.com full time, I will be changing blogs over to the SQLskills.com site. You can read about...(read more)

    Read the article

  • Gparted Partition Mount Points Alternate Between 2 Physical Disk Drives

    - by California Ken
    I'm running Ubuntu Server 14.04 on a system with 2 physical disk drives. I am frequently seeing mount errors on startup. When I check the drive partitions using GPARTED, I see that my two "non-system created" data partitions have the wrong disk assignments (i.e. sda1 vs sdb1) or vice versa. If I hand-edit /etc/fstab to match GPARTED, the system will boot error-free one time. On the second restart I will get the "serious mount problem" error for the 2 data partitions, and when I check GPARTED, the disk assignments have changed again (again, GPARTED and fstab don't match). A listing of my /etc/fstab is:

        # /etc/fstab: static file system information.
        # Use 'blkid' to print the universally unique identifier for a device; this may
        # be used with UUID= as a more robust way to name devices that works even if
        # disks are added and removed. See fstab(5).
        # / was on /dev/sdb2 during installation
        UUID=766a06a4-e5af-484a-adf0-fa1e88da7212 /                    ext4 errors=remount-ro,user_xattr,acl,barrier=1 0 1
        # swap was on /dev/sda6 during installation
        UUID=8c42f835-ead3-43fb-88d8-196f5dfc3aa7 none                 swap sw       0 0
        # swap was on /dev/sdb3 during installation
        UUID=2214deec-ba98-47da-aea7-4e46998f3e57 none                 swap sw       0 0
        /dev/fd0  /media/floppy0        auto rw,user,noauto,exec,utf8  0 0
        /dev/sda1 /media/ken/Linux-Data ext3 defaults                  0 2
        /dev/sda5 /media/ken/Data2      ext4 defaults                  0 2

    The device designations in the last 2 lines are the ones in question. The fstab entries do NOT change between system restarts, but the mount points in the GPARTED display do. Does anyone have a fix for this? Thanks.

    Mr. Young and Mr. Gedak: following is my fstab file and two blkid outputs. The fstab output is correct. The first blkid output was after a reboot and is WRONG! The sda and sdb device partition data is reversed. The 2nd blkid output was after a second reboot (fstab not changed). It shows the sda and sdb partition data CORRECTLY. I didn't see any duplicate UUIDs. Does anyone have any idea why the GPARTED and blkid outputs alternate on consecutive reboots? The alternating partition data is real, since when the partition assignments are reversed, the boot sequence halts with disk mounting errors (I have to press "s" to skip the mounts). Thanks again. Ken

    I copied the contents of a text file showing my fstab and 2 blkid outputs. The text file contents show up in the text entry box but do not appear in the main body of the question. Is there a way I can attach the text file or edit this question so that the text is displayed for question viewers?
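
    Not part of the original thread, but the fstab header above already hints at the usual fix: name the two data partitions by UUID instead of by device node, since sdX names are assigned at boot and can swap between disks. A sketch of the last two lines rewritten that way (the UUIDs are placeholders - substitute the values blkid reports for those partitions):

        # Placeholder UUIDs - take the real ones from 'sudo blkid /dev/sda1 /dev/sda5'
        UUID=<uuid-of-Linux-Data-partition> /media/ken/Linux-Data ext3 defaults 0 2
        UUID=<uuid-of-Data2-partition>      /media/ken/Data2      ext4 defaults 0 2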

    Read the article

  • Creating Wizard in ASP.NET MVC (Part 3 - jQuery)

    - by bipinjoshi
    In Part 1 and Part 2 of this article series you developed a wizard in an ASP.NET MVC application using full page postback and the Ajax helper, respectively. In this final part of the series you will develop a client-side wizard using jQuery. The navigation between the wizard steps (Next, Previous) happens without any postback (neither full nor partial). The only step that causes form submission to the server is clicking the Finish wizard button. http://www.binaryintellect.net/articles/d278e8aa-3f37-40c5-92a2-74e65b1b5653.aspx

    Read the article

  • The SPARC SuperCluster

    - by Karoly Vegh
    Oracle has been providing a lead in the Engineered Systems business for quite a while now, in accordance with the motto "Hardware and Software Engineered to Work Together." Indeed it is hard to find a better definition of these systems. Allow me to summarize the idea. It is:

    - Build a compute platform optimized to run your technologies
    - Develop application-aware, intelligently caching storage components
    - Take an impressively fast network technology interconnecting it with the compute nodes
    - Tune the application to scale with the nodes to yet unseen performance
    - Reduce the amount of data moving via compression
    - Provide this all in a pre-integrated single product with a single-pane management interface

    All these ideas have been around in IT for quite some time now. The real Oracle advantage is adding the last one to put these all together. Oracle has built quite a portfolio of Engineered Systems, to run its technologies - and run those like they never ran before. In this post I'll focus on one of them that serves as a consolidation demigod, a multi-purpose engineered system. As you probably have guessed, I am talking about the SPARC SuperCluster. It has many great features inherited from its predecessors, and it adds several new ones. Allow me to pick out and elaborate on some of the most interesting ones from a technological point of view.

    I. It is the SPARC SuperCluster T4-4. That is, as compute nodes, it includes SPARC T4-4 servers that we have learned to appreciate and respect for their features:

    - The SPARC T4 CPUs: each CPU has 8 cores, each core runs 8 threads. The SPARC T4-4 servers have 4 sockets. That is, a single compute node can simultaneously execute 256 threads in parallel. Now, a full-rack SPARC SuperCluster has 4 of these servers on board. Remember the keyword demigod.
    - While retaining the forerunner SPARC T3's exceptional throughput, the SPARC T4 CPUs raise the bar with single-threaded performance too - a humble 5x improvement over their ancestors.
    - Actually, the SPARC T4 CPU cores run in both single-threaded and multi-threaded mode, and switch between these two on the fly, fulfilling not only single-threaded OR multi-threaded applications' needs, but even mixed requirements (like in database workloads!).
    - Data security, anyone? Every SPARC T4 CPU core has a built-in encryption engine, that is, encryption algorithms cast into silicon.
    - A PCI controller right on the chip for customers who need I/O performance.
    - Built-in, no-cost virtualization: Oracle VM for SPARC (the former LDoms or Logical Domains) is not a server-emulation virtualization technology but rather a server-partitioning one; the hypervisor runs in the server firmware, and all the VMs' HW resources (I/O, CPU, memory) are accessed natively, without performance overhead. This enables customers to run a number of Solaris 10 and Solaris 11 VMs separated, independent of each other, within a physical server.

    II. For database performance, it includes Exadata Storage Cells - one of the main reasons why the Exadata Database Machine performs at diabolic speed. What makes them important? They provide the DB backend storage for your Oracle Databases running on the SPARC SuperCluster; that is what they are built and tuned for: DB performance. These storage cells are SQL-aware.
    That is, if a SPARC T4 database compute node executes a query, it doesn't simply request tons of raw datablocks from the storage, filter the received data, and throw away most of it where the statement doesn't apply; it provides the SQL query to the storage node too. The storage cell software speaks SQL, that is, it is able to pre-filter and thereby transfer only the relevant data. With this, the traffic between database nodes and storage cells is reduced immensely. Less I/O is a good thing - as they say, all the CPUs of the world do one thing just as fast as any other, and that is waiting for I/O.

    - They don't only pre-filter, but also provide data preprocessing features - e.g. if a DB node requests an aggregate of data, they can calculate it and hand over only the results, not the whole set. Again, less data to transfer.
    - They support the magical HCC (Hybrid Columnar Compression). That is, data can be stored in a precompressed form on the storage. Less data to transfer.
    - Of course one can't simply rely on disks for performance; there is Flash Storage included for caching.

    III. The low-latency, high-speed backbone network: InfiniBand, which interconnects all the members with:

    - Real high speed: 40 Gbit/s. Full duplex, of course. Oh, and a really low latency.
    - RDMA: Remote Direct Memory Access. This technology allows the DB nodes to do exactly that - remotely, directly placing SQL commands into the memory of the storage cells, dodging all the network-stack bottlenecks, avoiding overhead, placing requests directly into the process queue.
    - You can also run IP over InfiniBand if you please - that's the way the compute nodes can communicate with each other.

    IV. It includes a general-purpose storage too: the ZFSSA, which is a unified storage providing both NAS and SAN access, with the following features:

    - NFS over RDMA over InfiniBand. Nothing is faster network-filesystem-wise.
    - All the ZFS features onboard: hybrid storage pools, compression, deduplication, snapshots, replication, NFS and CIFS shares.
    - Storage heads in an HA-cluster configuration, providing availability of the data.
    - DTrace Live Analytics in a web-based administration UI.
    - Being a general-purpose application data storage for your non-database applications running on the SPARC SuperCluster, over whichever protocol they prefer, easily replicating, snapshotting, and cloning data for them.

    There's a lot of great technology included in Oracle's SPARC SuperCluster; we have walked through its interior. As for external scalability: you can start with a half- or full-rack SPARC SuperCluster and scale out to several racks - that is, not stacking separate full-rack SPARC SuperClusters, but always extending one large instance to the size of several full racks. Yes, over the InfiniBand network. Add racks as you grow.

    What technologies shall run on it? SPARC SuperCluster is a general-purpose scale-out consolidation/cloud environment. You can run Oracle Databases with RAC scaling, or Oracle WebLogic (and enjoy the SPARC T4's advantages for running Java). Remember, Oracle technologies have been integrated with the Oracle Engineered Systems - this is the Oracle-on-Oracle advantage. But you can run other software environments, such as SAP, if you please too. Run any application that runs on Oracle Solaris 10 or Solaris 11. Separate them in Virtual Machines, or even Oracle Solaris Zones, and monitor and manage those from a central UI.
    Here are the key takeaways once again. The SPARC SuperCluster:

    - Is a pre-integrated Engineered System
    - Contains SPARC T4-4 servers with built-in virtualization, cryptography, and dynamic threading
    - Contains the Exadata storage cells that intelligently offload the burden of the DB nodes
    - Contains a highly available ZFS Storage Appliance, which provides SAN/NAS storage in a unified way
    - Combines all these elements over a high-speed, low-latency backbone network implemented with InfiniBand
    - Can grow from a single half-rack to several full racks in size
    - Supports the consolidation of hundreds of applications

    To summarize: all these technologies are great by themselves, but the real value is, as in every other Oracle Engineered System, integration. All these technologies are tuned to perform together. Together they are way more than the sum of their parts - and a careful and actually very time-consuming integration process is necessary to orchestrate all of this for performance. The SPARC SuperCluster's goal is to enable infrastructure operations and offer a pre-integrated solution that can be architected and delivered in hours instead of months of evaluations and tests. The tedious and, most importantly, time- and resource-consuming part of the work - testing and evaluating - has been done. Now go, provide services.

    -- charlie

    Read the article

  • How to determine if a package is a meta-package from the command line?

    - by cirosantilli
    How can I determine if a package is a meta-package from the command line, possibly via apt-get, aptitude or apt-cache? I have tried: apt-cache show texlive-full and apt-cache showpkg texlive-full, but the only way I can tell this package is meta is by reading the "en-description" field. Is there a more automatic way of doing this, that will give me a yes/no response, or at least a field such as the "en-description" dedicated to this?
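
    No dedicated yes/no flag exists, but one scriptable heuristic is to check the package's section, since many (though not all) meta-packages are filed under "metapackages". A sketch using the python-apt bindings (assumes the python-apt package is installed; treat a "no" answer as inconclusive):

        #!/usr/bin/env python
        # Heuristic meta-package check via python-apt (assumes python-apt installed).
        # Many -- though not all -- meta-packages sit in the "metapackages" section,
        # so a "no" here is inconclusive.
        import sys
        import apt

        cache = apt.Cache()
        pkg = cache[sys.argv[1]]                 # e.g. "texlive-full"; KeyError if unknown
        section = pkg.candidate.section          # e.g. "metapackages" or "universe/tex"
        if section.split("/")[-1] == "metapackages":
            print("yes")
        else:
            print("no (section: %s)" % section)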

    Read the article

  • Modern/Metro Internet Explorer: What were they thinking???

    - by Rick Strahl
    As I installed Windows 8.1 last week I decided that I really should take a closer look at Internet Explorer in the Modern/Metro environment again. Right away I ran into two issues that are real head scratchers to me.

    Modern Split Windows don't resize Viewport but Zoom Out

    This one falls in the "WTF, really?" department: it looks like Modern Internet Explorer doesn't resize the browser window as every other browser (including IE 11 on the desktop) does, but rather tries to adjust the zoom to the width of the browser. This means that if you use the Modern IE browser and you split the display between IE and another application, IE will be zoomed out, with text becoming much, much smaller, rather than resizing the browser viewport and adjusting the pixel width as you would when a browser window is typically resized.

    Here's what I'm talking about in a couple of pictures. First here's the full screen Internet Explorer version (this shot is resized down since it's full screen at 1080p, click to see the full image). This brings up the first issue, which is: on the desktop, who wants to browse a site full screen? Most sites aren't fully optimized for a 1080p widescreen experience, and frankly most content that wide just looks weird. Even in typical 10" resolutions of 1280 width it's weird to look at things this way. At least this issue can be worked around with @media queries and either constraining the view, or adding additional content to make use of the extra space. Still, running a desktop browser full screen is not optimal on a desktop machine - ever. Regardless, this view, while oversized, is what I expect: everything is rendered in the right ratios, with font-size and the responsive design styling properly respected.

    But now look what happens when you split the desktop windows and show half desktop and half Modern IE (this screen shot is not resized but cropped - this is actual size content as you can see in the cropped Twitter window on the right half of the screen). What's happening here is that IE is zooming out of the content to make it fit into the smaller width, shrinking the content rather than resizing the viewport's pixel width. In effect it looks like the pixel width stays at 1080px and the viewport expands out height-wise in response, resulting in some crazy long portrait view.

    There goes responsive design - out the window, literally. If you've built your site using @media queries and fixed viewport sizes, Internet Explorer completely screws you in this split view. On my 1080p monitor, the site shown at a little under half width becomes completely unreadable as the fonts are too small and break up. As you go into split view and resize the window handle, the content of the browser gets smaller and smaller (and effectively longer and longer on the bottom), throwing off any responsive layout to the point of unreadability even on a big display, let alone a small tablet screen.

    What could POSSIBLY be the benefit of this screwed up behavior? I checked around a bit, trying different pages in this shrunk down view. Other than the Microsoft home page, every page I went to was nearly unreadable at a quarter width. The only page I found that worked 'normally' was the Microsoft home page, which undoubtedly is optimized just for Internet Explorer specifically.

    Bottom Address Bar opaquely overlays Content

    Another problematic feature for me is the browser address bar on the bottom. Modern IE shows the status bar opaquely on the bottom, overlaying the content area of the Web page - until you click on the page. Until you do though, the address bar overlays the bottom content solidly. And not just a little bit, but by a good sizable chunk. In the application from the screen shot above I have an application toolbar on the bottom, and the IE address bar completely hides that bottom toolbar when the page is first loaded, until the user clicks into the content, at which point the address bar shrinks down to a fat border-style bar with a … on it. Toolbars on the bottom are pretty common these days, especially for mobile optimized applications, so I'd say this is a common use case. But even if you don't have toolbars on the bottom, maybe there's other fixed content on the bottom of the page that is vital to display. While other browsers often also show address bars and then later hide them, those browsers tend to resize the viewport when the address bar status changes, so the content can respond to the size change. Not so with Modern IE. The address bar overlays content and stays visible until content is clicked. No resize notification or viewport height change is sent to the browser.

    So basically Internet Explorer is telling me: "Our toolbar is more important than your content!" - AND gives me no chance to react to that behavior. The result on this page/application is that the user sees no actionable operations until he or she clicks into the content area, which is terrible from a UI perspective as the user has no idea what options are available on initial load. It's doubly confounding in that IE is running in full screen mode and has the entire height of the screen at its disposal - there's plenty of real estate available to not require this sort of hiding of content in the first place. Heck, even Windows Phone with its more constrained size doesn't hide content - in fact the address bar on Windows Phone 8 is always visible.

    What were they thinking?

    Every time I use anything in the Modern Metro interface in Windows 8/8.1 I get angry. I can pretty much ignore Metro/Modern for my everyday usage, but Internet Explorer in the modern shell I unfortunately have to live with, because there will be users using it to access my sites. I think it's inexcusable of Microsoft to build such a crappy shell around the browser that impacts the actual usability of Web content. In both of the cases above I can only scratch my head at what could possibly have motivated anybody designing the UI for the browser to make these screwed up choices, which manipulate the content in a totally unmaintainable way.

    © Rick Strahl, West Wind Technologies, 2005-2013
    Posted in Windows, HTML5

    Read the article

  • How to get the revision history of a branch with bzrlib

    - by David Planella
    I'm trying to get a list of committers to a bzr branch. I know I can get it through the command line with something along these lines: bzr log -n0 | grep committer | sed -e 's/^[[:space:]]*committer: //' | uniq However, I'd like to get that list programmatically with bzrlib. After having looked at the bzrlib documentation, I can't manage to find out how I would even get the full list of revisions from my branch. Any hints on how to get the full history of revisions from a branch with bzrlib, or ultimately, the list of committers?
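
    For reference, a minimal sketch against the bzr 2.x-era bzrlib API (bzrlib is Python 2 only; note that revision_history() walks just the mainline, so the merged revisions that bzr log -n0 also shows would need the revision graph instead):

        # Sketch: list the unique committers of a branch with bzrlib (bzr 2.x API).
        from bzrlib.branch import Branch

        branch = Branch.open('/path/to/branch')   # local path or URL
        branch.lock_read()
        try:
            committers = set()
            for rev_id in branch.revision_history():   # mainline revision ids only
                committers.add(branch.repository.get_revision(rev_id).committer)
        finally:
            branch.unlock()

        for committer in sorted(committers):
            print(committer)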

    Read the article

  • Proactive Database Index Creation

    Indexes help your application find your data quickly and provide users with a well-performing application, while minimizing server resources. This article discusses indexing guidelines related to join tables and covering indexes.

    Read the article

  • 24HOP & SQLRally News

    - by NeilHambly
    24 Hours of PASS: the Spring 2012 SQLPASS 24 Hours of PASS event is a WHOLE DAY {yes, 24 hours' worth} of SQL sessions exploding right onto computer screens near you. When: 21st March 2012 - 1 session every hour, on the hour, for a full 24 hours. The full agenda contains all the exciting details for each of the sessions & the speakers delivering them. And just in case there are ones you can't make it to on the day, you can watch them at a later time. But you'll be attending mine LIVE of course...(read more)

    Read the article

  • Using DB_PARAMS to Tune the EP_LOAD_SALES Performance

    - by user702295
    The DB_PARAMS table can be used to tune the EP_LOAD_SALES performance. The AWR report supplied shows 16 CPUs, so I imagine that you can run with 8 or more parallel threads. This can be done by setting the following DB_PARAMS parameters. Note that most of the parameter changes are just changing a 2 or 4 into an 8:

        DBHintEp_Load_SalesUseParallel = TRUE
        DBHintEp_Load_SalesUseParallelDML = TRUE
        DBHintEp_Load_SalesInsertErr = + parallel(@T_SRC_SALES@ 8) full(@T_SRC_SALES@)
        DBHintEp_Load_SalesInsertLd  = + parallel(@T_SRC_SALES@ 8)
        DBHintEp_Load_SalesMergeSALES_DATA = + parallel(@T_SRC_SALES_LD@ 8) full(@T_SRC_SALES_LD@)
        DBHintMdp_AddUpdateIs_Fictive0SD = + parallel(s 8 )
        DBHintMdp_AddUpdateIs_Fictive2SD = + parallel(s 8 )

    Read the article

  • Unindex google code svn repository content from google index

    - by matcheek
    I developed a small web site and saved the code to a Google Code repository. Everything had been running smoothly for a while until results from the Google Code svn repository started showing up before the results from the actual website. Is there any way I could stop Google from indexing the Google Code repository content, or at least make its rank lower than the web site's? I am not talking about sophisticated SEO techniques, but rather some simple settings, if there are any.

    Read the article

  • How can I tell if I am out of inotify watches?

    - by Jorge Castro
    I use an application that consumes inotify watches. I've already set fs.inotify.max_user_watches=32768 in /etc/sysctl.conf, but last night the application stopped indexing unless I ran it manually, which leads me to suspect I am out of watches. Since I don't know what the trade-off is when I increase this number (does it consume more RAM?), I don't know if I should just increase it. So I'd like to know if there's a way I can tell whether it's using all these watches, and what the trade-offs might be for increasing the limit.
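
    One way to answer the "am I out of watches?" question on a modern kernel (roughly 3.8+, where /proc/<pid>/fdinfo lists one "inotify wd:" line per watch) is to total the watches across all processes and compare against the limit. A sketch, best run as root; as for the trade-off, each watch is commonly cited as costing on the order of 1 KB of kernel memory, so raising the limit mostly costs RAM:

        #!/usr/bin/env python
        # Sketch: count inotify watches in use, per process (run as root for a full view).
        # Relies on /proc/<pid>/fdinfo exposing one "inotify wd:" line per watch,
        # which needs a reasonably recent kernel (~3.8+).
        import glob
        import os

        total = 0
        for fd_path in glob.glob('/proc/[0-9]*/fd/*'):
            try:
                if os.readlink(fd_path) != 'anon_inode:inotify':
                    continue
                pid, fd = fd_path.split('/')[2], fd_path.split('/')[4]
                with open('/proc/%s/fdinfo/%s' % (pid, fd)) as f:
                    watches = sum(1 for line in f if line.startswith('inotify wd:'))
                total += watches
                print('pid %s: %d watches' % (pid, watches))
            except (OSError, IOError):
                continue  # process exited or permission denied
        print('total: %d (compare against fs.inotify.max_user_watches)' % total)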

    Read the article

  • Is there any good hosting for asp.net and MySQL

    - by HAJJAJ
    Hi everyone. I have an account with one of the hosting companies, I did my project in ASP.NET, and I used MySQL for the database. The hosting company is not giving me the full privileges to create a new user or to create a new stored procedure!!! This is what they said to me:

    "Due to the shared nature of our environment we had to make some modifications to your procedure (namely the definer). We also had to review your procedure to determine if it would be compatible with our environment. While your procedures will work (via phpMyAdmin or some other interface), it is unlikely they will be accessible via the Connector/.NET (ADO.NET) that your application is likely using. This is due to a security restriction with how that connector works in shared environments. http://dev.mysql.com/doc/refman/5.0/en/connector-net-programming-stored.html 'Note: When you call a stored procedure, the command object makes an additional SELECT call to determine the parameters of the stored procedure. You must ensure that the user calling the procedure has the SELECT privilege on the mysql.proc table to enable them to verify the parameters. Failure to do this will result in an error when calling the procedure.' Unfortunately, giving read privileges on the mysql.proc table will give you access to the data of our other customers and that is not an acceptable risk. If your application can only work using stored procedures, then MSSQL will probably be the better option for your site. I apologize for the inconvenience and the wait to have this ticket completed."

    So is there any good hosting that anybody has already used to publish an ASP.NET and MySQL project? This is one of my stored procedures, and I think it's simple and will not harm any other users!!:

        -- --------------------------------------------------------------------------------
        -- Routine DDL
        -- Note: comments before and after the routine body will not be stored by the server
        -- --------------------------------------------------------------------------------
        DELIMITER $$
        CREATE DEFINER=`root`@`localhost` PROCEDURE `SpcategoriesRead`(
            IN PaRactioncode VARCHAR(5),
            IN PaRCatID BIGINT,
            IN PaRSearchText TEXT
        )
        BEGIN
            -- CREATE A TEMPORARY TABLE TO SAVE DATA FROM THE ACTIONCODE SELECTS --
            DROP TEMPORARY TABLE IF EXISTS tmp;
            CREATE TEMPORARY TABLE tmp (
                CatID BIGINT PRIMARY KEY NOT NULL,
                CatTitle TEXT,
                CatDescription TEXT,
                CatTitleAr TEXT,
                CatDescriptionAr TEXT,
                PictureID BIGINT,
                Published BOOLEAN,
                DisplayOrder BIGINT,
                CreatedOn DATE
            );
            IF PaRactioncode = 1 THEN
                -- Retrieve all data from the database --
                INSERT INTO tmp
                SELECT CatID, CatTitle, CatDescription, CatTitleAr, CatDescriptionAr, PictureID, Published, DisplayOrder, CreatedOn
                FROM tbcategories;
            ELSEIF PaRactioncode = 2 THEN
                -- Retrieve from the database by ID --
                INSERT INTO tmp
                SELECT CatID, CatTitle, CatDescription, CatTitleAr, CatDescriptionAr, PictureID, Published, DisplayOrder, CreatedOn
                FROM tbcategories WHERE CatID = PaRCatID;
            ELSEIF PaRactioncode = 3 THEN
                -- NOT SET YET --
                INSERT INTO tmp
                SELECT CatID, CatTitle, CatDescription, CatTitleAr, CatDescriptionAr, PictureID, Published, DisplayOrder, CreatedOn
                FROM tbcategories WHERE Published = 1 ORDER BY DisplayOrder;
            END IF;
            IF PaRSearchText IS NOT NULL THEN
                SET PaRSearchText = CONCAT('%', PaRSearchText, '%');
                SELECT CatID, CatTitle, CatDescription, CatTitleAr, CatDescriptionAr, PictureID, Published, DisplayOrder, CreatedOn
                FROM tmp
                WHERE CONCAT(CatTitle, CatDescription, CatTitleAr, CatDescriptionAr) LIKE PaRSearchText;
            ELSE
                SELECT CatID, CatTitle, CatDescription, CatTitleAr, CatDescriptionAr, PictureID, Published, DisplayOrder, CreatedOn
                FROM tmp;
            END IF;
            DROP TEMPORARY TABLE IF EXISTS tmp;
        END$$
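
    As an aside, a hedged sketch of how application code would invoke such a procedure using mysql-connector-python - the Python analogue of the Connector/.NET call path the host is describing. The connection parameters are placeholders:

        # Sketch: calling the SpcategoriesRead procedure with mysql-connector-python
        # (pip install mysql-connector-python). Host/user/password/database are
        # placeholders, not real credentials.
        import mysql.connector

        conn = mysql.connector.connect(host='localhost', user='appuser',
                                       password='secret', database='mydb')
        cur = conn.cursor()
        # actioncode='1' -> all rows; CatID unused; no search text
        cur.callproc('SpcategoriesRead', ('1', 0, None))
        for result in cur.stored_results():   # the procedure's SELECT output
            for row in result.fetchall():
                print(row)
        cur.close()
        conn.close()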

    Read the article

  • String contains trailing zeroes when converted from decimal [migrated]

    - by Locke
    I've run into an unusual quirk in a program I'm writing, and I was trying to figure out if anyone knows the cause. Note that fixing the issue is easy enough. I just can't figure out why it is happening in the first place. I have a WinForms program written in VB.NET that is displaying a subset of data. It contains a few labels that show numeric values (the .Text property of the labels is being assigned directly from the Decimal values). These numbers are being returned by a DLL I wrote in C#. The DLL calls a webservice which initially returns the values in question. It returns one as a string, the other as a decimal (I don't have any control over the webservice, I just consume it). The DLL assigns these to properties on an object (both of which are decimals) then returns that object back to the WinForm program that called the DLL. Obviously, there's a lot of other data being consumed from the webservice, but no other operations are happening which could modify these properties. So, the short version is:

    1. WinForm requests a new Foo from the DLL.
    2. DLL creates object Foo.
    3. DLL calls webservice, which returns SomeOtherFoo.
    4. //Both Foo.Bar1 and Foo.Bar2 are decimals
       Foo.Bar1 = decimal.Parse(SomeOtherFoo.Bar1); //SomeOtherFoo.Bar1 is a string equal to "2.9000"
       Foo.Bar2 = SomeOtherFoo.Bar2; //SomeOtherFoo.Bar2 is a decimal equal to 2.9D
    5. DLL returns Foo to WinForm.
    6. WinForm.lblMockLabelName1.Text = Foo.Bar1 //Inspecting Foo.Bar1 indicates my value is 2.9D
       WinForm.lblMockLabelName2.Text = Foo.Bar2 //Inspecting Foo.Bar2 also indicates I'm 2.9D

    So, what's the quirk? WinForm.lblMockLabelName1.Text displays as "2.9000", whereas WinForm.lblMockLabelName2.Text displays as "2.9". Now, everything I know about C# and VB indicates that the format of the string which was initially parsed into the decimal should have no bearing on the outcome of a later decimal.ToString() operation called on the same decimal. I would expect that decimal.Parse(someDecimalString).ToString() would return the string without any trailing zeroes. Everything I find online seems to corroborate this (there are countless Stack Overflow questions asking exactly the opposite...how to keep the formatting from the initial parsing). At the moment, I've just removed the trailing zeroes from the initial string that gets parsed, which has hidden the quirk. However, I'd love to know why it happens in the first place.
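
    For what it's worth, this is documented behaviour rather than a bug: System.Decimal stores a scale factor alongside the digits, so decimal.Parse("2.9000") and the literal 2.9D are numerically equal but carry different scales, and ToString() reproduces the stored scale. Formatting explicitly (e.g. ToString("0.##")) sidesteps it. Python's decimal module keeps scale the same way, which makes for a quick demonstration of the principle:

        # Python's decimal type keeps a scale, just like .NET's System.Decimal,
        # so the same "quirk" is easy to reproduce:
        from decimal import Decimal

        a = Decimal("2.9000")   # parsed from a string with trailing zeros
        b = Decimal("2.9")
        print(a == b)           # True  - numerically equal
        print(str(a), str(b))   # 2.9000 2.9 - the scale survives str()/ToString()
        print(a.normalize())    # 2.9 - stripping the scale is an explicit step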

    Read the article

  • [New England] SQL Saturday 71 - April 2 - Boston Area

    - by Adam Machanic
    April in the Boston area means many things. The Boston Marathon, the beginning of baseball season, and -- hopefully -- a bit of a respite from the ridiculously cold and snowy winter we've been having. This April will mean one more thing: a full-day, free SQL Server event featuring 30 top-notch sessions. SQL Saturday 71 will be the third full-day event in the area in as many years, and is shaping up to be the best yet. For the past several months I've been working and planning in conjunction with...(read more)

    Read the article

  • NuGet JustMock

    - by mehfuzh
    As most of us already know, JustMock now has a free edition. The free edition is not a stripped-down set of the full edition's features; I would rather say it's a stripping-down of the types you can mock. Technically, the free version runs on a proxy, while the full version runs on proxy + profiler. The full version switches to the profiler when you are mocking final methods, sealed classes, or anything else that cannot be done using inheritance. For example, in the full version you can mock non-public methods; in the free version you can still do it, but the method has to be virtual for protected members, or must be reached through the InternalsVisibleTo attribute for internal virtual methods (if you have access to the source and can apply the attribute). Now, you can get a copy of the free edition from the product page. Install it and off you go. But it is also exposed via NuGet. For those of you not familiar with NuGet (that would be odd), NuGet is the centralized package manager from Microsoft that cuts out the workflow of manually including libraries in your project. I think NuGet will in future limit the scope of ".vsi" packages and installers because of its ease (except in some cases). It's similar to Ruby gems: in Ruby you can install virtually any library with "gem install <target_library>" and you are off to go. It will check the dependencies and install them, or else prompt you with the steps you need to take.

    Now, sticking to the post: to get started you first need to install the NuGet package manager. Once you have completed that step, pressing "Ctrl + W, Ctrl + Z" will bring up a console like the one below. Once you are there, you just have to type "install-package justmock". Next, it should print a confirmation when the installation is complete. Moving to the Visual Studio Solution Explorer, you will now see the reference added. Finally, NuGet is still in its early days, and the steps shown here may not remain the same in coming releases, but feel free to enjoy what is out there right now. Regarding the JustMock free edition, there is a nice post by Phil Japikse at Introducing JustMock Free Edition. I think it's worth checking out if you haven't already.

    Have fun and happy holidays!

    Read the article

  • library for octree or kdtree

    - by Will
    Are there any robust, performant libraries for indexing objects? It would need frustum culling and visiting objects hit by a ray, as well as neighbourhood searches. I can find lots of articles showing the math for the component parts, often as algebra rather than simple C, but nothing that puts it all together (apart from perhaps Ogre, which is rather more involved and isn't so stand-alone). Surely hobby game makers don't all have to make their own octrees? (Python or C/C++ w/bindings preferred)
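
    Since the question says Python is preferred: scipy ships a solid kd-tree that covers the neighbourhood-search part out of the box; frustum culling and ray visits would still need your own plane/ray tests layered over the candidate sets it returns. A sketch:

        # Sketch: neighbourhood searches over 3D points with scipy's cKDTree.
        # scipy handles k-NN and radius queries; frustum culling and ray visits
        # would still need custom tests layered on top of the candidate sets.
        import numpy as np
        from scipy.spatial import cKDTree

        points = np.random.rand(10000, 3)        # stand-in for object centres
        tree = cKDTree(points)

        query = np.array([0.5, 0.5, 0.5])
        dists, idxs = tree.query(query, k=5)             # 5 nearest neighbours
        nearby = tree.query_ball_point(query, r=0.1)     # all within radius 0.1
        print(idxs, len(nearby))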

    Read the article

  • MySQL Connect - Save The Date!

    - by Bertrand Matthelié
    Oracle today announced that it will hold the MySQL Connect Conference on September 29 and 30 in San Francisco! You can read the Press Release here. MySQL Connect will be jam-packed with technical sessions, hands-on labs and Birds of a Feather (BOF) sessions delivered by MySQL community members, users, customers and MySQL engineers from Oracle. The event is a unique opportunity to learn about the latest MySQL features, discuss product roadmaps, and connect directly with the engineers behind the latest MySQL code. The conference will include six tracks: Performance and Scalability, High Availability, Cloud Computing, Architecture and Design, Database Administration, and Application Development. The call for papers will open on April 16, 2012 for approximately three weeks. MySQL users and community members are encouraged to submit session proposals. Start thinking about your proposals! Registration will also open on April 16.

    Read the article

  • Seven SEO Mistakes to Avoid

    SEO seems relatively easy at first, but danger lurks around every corner. One false step and all of your SEO work vanishes into a de-indexing or the dreaded Google sandbox. In this article, we'll explore the seven SEO mistakes to avoid.

    Read the article

  • Quick run through of the WP7 Developer Tools January 2011

    - by mbcrump
    In case you haven't heard, the latest WP7 Developer Tools update was released yesterday and contains a few goodies. First you need to go and grab the bits here. You can install them in any order, but I installed the WindowsPhoneDeveloperResources_en-US_Patch1.msp first, then the VS10-KB2486994-x86.exe. They install silently. In other words, you would need to check Programs and Features and look in Installed Updates to see if they installed successfully, like the screenshot below. Once you get them installed you can try out a few new features, like Copy and Paste. Just fire up your application, put a TextBox on it, and select the text, and you will have the copy option highlighted in red above the text. Once you select it you will have the option to paste it (see red rectangle below). Another feature is the Windows Phone Capability Detection Tool - this tool detects the phone capabilities used by your application. This will prevent you from submitting an app to the marketplace that says it uses x feature but really does not. How do you use it? Well, navigate to either directory: %ProgramFiles%\Microsoft SDKs\Windows Phone\v7.0\Tools\CapDetect or %ProgramFiles (x86)%\Microsoft SDKs\Windows Phone\v7.0\Tools\CapDetect and run the following command: CapabilityDetection.exe Rules.xml YOURWP7XAPFILEOUTPUTDIRECTORY. So, in my example you will see my app only requires ID_CAP_MICROPHONE. Let's see what the WMAppManifest.xml says in our WP7 project: Whoa! That's a lot of extra stuff we don't need. We can safely delete the unused capabilities now. Some of the other fixes are (copied straight from Microsoft):

    - Fixes a text selection bug in pivot and panorama controls. In applications that have pivot or panorama controls that contain text boxes, users can unintentionally change panes when trying to copy text. To prevent this problem, open your application, recompile it, and then resubmit it to the Windows Phone Marketplace.
    - Windows Phone Connect Tool - allows you to connect your phone to a PC when Zune software is not running and debug applications that use media APIs. For more information, see How to: Use the Connect Tool.
    - Updated Bing Maps Silverlight Control - includes improvements to gesture performance when using the Bing Maps Silverlight Control.
    - Windows Phone Developer Tools Fix - allows deployment of XAP files over 64 MB in size to physical phone devices for testing and debugging.

    That's pretty much it. Thanks again for reading my blog!

    Read the article

  • Article Submissions - SEO

    Search engine optimization helps build the ranking of a website when popular search engines are indexing its pages. Search engines index web pages fairly frequently.

    Read the article

  • Hadoop, NOSQL, and the Relational Model

    - by Phil Factor
    (Guest Editorial for the IT Pro/SysAdmin Newsletter) Whereas relational databases fit the world of commerce like a glove, it is useless to pretend that they are a perfect fit for all human endeavours. Although, with SQL Server, we've made great strides with indexing text, in processing spatial data and in processing markup, there is still a problem in dealing efficiently with large volumes of ephemeral semi-structured data. Key-value stores such as Cassandra, Project Voldemort, and Riak are of great value for ephemeral data, and seem of equal value as a data-feed that provides aggregations to an RDBMS. However, the document databases such as MongoDB and CouchDB are ideal for semi-structured data for which no fixed schema exists; analytics and logging are obvious examples. NoSQL products, such as MongoDB, tackle the semi-structured data problem with panache. MongoDB is designed with a simple document-oriented data model that scales horizontally across multiple servers. It doesn't impose a schema, and relies on the application to enforce the data structure. This is another take on the old 'EAV' problem (where you don't know in advance all the attributes of a particular entity). It uses a clever replica set design that allows automatic failover, and uses journaling for data durability. It allows indexing and ad-hoc querying. However, for SQL Server users, the obvious choice for handling semi-structured data is Apache Hadoop. There will soon be an ODBC Driver for Apache Hive and an Add-in for Excel. Additionally, there are now two Hadoop-based connectors for SQL Server: the Apache Hadoop connector for SQL Server 2008 R2, and the SQL Server Parallel Data Warehouse (PDW) connector. We can connect to Hadoop, process the semi-structured data, and then store it in SQL Server. For one steeped in the culture of relational SQL databases, I might be expected to throw up my hands in the air in a gesture of contempt for a technology that was, judging by the overblown journalism on the subject, about to make my own profession as archaic as the saggar maker's bottom knocker (a potter's assistant who helped the saggar maker to make the bottom of the saggar by placing clay in a metal hoop and bashing it). However, on the contrary, I find that I'm delighted with the advances made by the NoSQL databases in the past few years. Having the flow of ideas from the NoSQL providers will knock any trace of complacency out of the providers of relational databases and inspire them into back-fitting some features, such as horizontal scaling with sharding and automatic failover, into SQL-based RDBMSs. It will do the breed a power of good to benefit from all this lateral thinking.

    Read the article

  • Data searches that leverage existing indexes

    Recent installments of our SQL Server 2005 Express Edition series have been discussing its implementation of Full Text Indexing. This article focuses on data searches, which leverage existing indexes, taking into account such features as noise words and thesaurus files.

    Read the article

  • Microsoft BI Conference 2011 in Lisbon

    - by AlbertoFerrari
    Anyone interested in BI from Portugal or Spain should not miss the Microsoft BI Conference 2011 in Lisbon: one full day (March 25, 2011) with three tracks on Business Intelligence: Decision Makers, BI Pros, and Intro to BI. I am going to present two sessions on PowerPivot: one is a nice deep dive into DAX for BI pros, the other is about self-service BI for decision makers. Titles and the complete agenda will be published in the coming days, but I suggest saving the date. The full event is free and it...(read more)

    Read the article

  • Analysing Indexes - count *

    - by GrumpyOldDBA
    In my presentations on indexing I have always said that you should explore the advantages of covering your clustered index with a secondary index. In circumstances where you might want to just return values from the PK (assuming it's your clustered index), a secondary index will be more efficient, especially when the row size is wide. Any operation on a clustered index will always return the entire row, so select ID from dbo.mytable where ID is the clustered PK integer will return not just the...(read more)

    Read the article
