Search Results

Search found 9311 results on 373 pages for 'cache dependency'.

Page 178/373

  • Stir Trek 2: Iron Man Edition

    Next month (7 May 2010) I'll be presenting at the second annual Stir Trek event in Columbus, Ohio. Stir Trek (so named because last year its themes mixed MIX and the opening of the Star Trek movie) is a very cool local event. It's a lot of fun to present at and to attend, because of its unique venue: a movie theater. And what's more, the cost of admission includes a private showing of a new movie (this year: Iron Man 2). The sessions cover a variety of topics (not just Microsoft), similar to CodeMash. The event recently sold out, so I'm not telling you all of this so that you can go and sign up (though I believe you can still get on the waitlist). Rather, this is pretty much just an excuse for me to talk about my session as a way to organize my thoughts. I'm actually speaking on the same topic as I did last year, but the key difference is that last year the subject of my session was nowhere close to being released, and this year it's RTM (as of last week). That's right, the topic is What's New in ASP.NET 4 (how did you guess?).

    What's New in ASP.NET 4
    So, just what *is* new in ASP.NET 4? Hasn't Microsoft been spending all of their time on Silverlight and MVC the last few years? Well, actually, no. There are some pretty cool things that are now available out of the box in ASP.NET 4. There's a nice summary of the new features on MSDN. Here is my super-brief summary:
    - Extensible Output Caching: use providers like a distributed cache or a file system cache
    - Preload Web Applications: IIS 7.5 only; avoid the startup tax for your site by preloading it
    - Permanent (301) Redirects: finally supported by the framework in one line of code, not two
    - Session State Compression: can speed up session access in a web farm environment; test it to see
    - Web Forms features, several of which mirror ASP.NET MVC advantages (viewstate, control IDs):
      - Set meta keywords and description easily
      - Granular and inheritable control over ViewState
      - Support for more recent browsers and devices
      - Routing (introduced in 3.5 SP1): some new features and zero web.config changes required
      - Client ID control makes client manipulation of DOM elements much simpler
      - Row selection in data controls fixed (ID based, not row index based)
      - FormView and ListView enhancements (less markup, more CSS compliant)
      - New QueryExtender control makes filtering data from other data source controls easy
      - More CSS and accessibility support
      - Reduction of tables and more control over output for other template controls
    - Dynamic Data enhancements: more control templates, support for inheritance in the Data Model, new attributes
    - ASP.NET Chart Control (learn more)
    - Lots of IDE enhancements
    - Web Deploy tool

    My session will cover many but not all of these features. There's only an hour (3pm-4pm), and it's right before the prize giveaway and movie showing, so I'll be moving quickly and most likely answering questions off-line via email after the talk. Hope to see you there!

    Did you know that DotNetSlackers also publishes .net articles written by top known .net Authors? We already have over 80 articles in several categories including Silverlight. Take a look: here.
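    A few of the items above (the one-line permanent redirect, page-level meta tags, and static client IDs) map to very small code changes. Here is a minimal ASP.NET 4 Web Forms sketch; the page name ProductCatalog and the SearchBox control are made up for illustration (the control would normally be declared in the .aspx markup).

      using System;
      using System.Web.UI;
      using System.Web.UI.WebControls;

      public partial class ProductCatalog : Page
      {
          // Normally supplied by the designer file for the .aspx page; declared
          // here only so the sketch is self-contained.
          protected TextBox SearchBox;

          protected void Page_Load(object sender, EventArgs e)
          {
              if (Request.QueryString["legacy"] == "1")
              {
                  // One call replaces the old two-step 301 (status code + Location header).
                  Response.RedirectPermanent("~/catalog.aspx");
                  return;
              }

              // Page-level meta tags are now plain properties on System.Web.UI.Page.
              MetaKeywords = "asp.net 4, output caching, web forms";
              MetaDescription = "Summary of new ASP.NET 4 features.";

              // Static client IDs keep the rendered id equal to the server-side ID,
              // which makes DOM manipulation from JavaScript much simpler.
              if (SearchBox != null)
              {
                  SearchBox.ClientIDMode = ClientIDMode.Static;
              }
          }
      }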

    Read the article

  • Junior software developer - How to understand web applications in depth?

    - by nat_gr
    I am currently a junior developer in web applications, specifically in ASP.NET MVC. My problem is that the senior C# developer in the company has no experience with this technology, so I am trying to learn without any guidance. I went through all the tutorials (e.g. the Music Store), CodePlex projects and also read Pro ASP.NET MVC 4. However, most of the examples are about CRUD and e-commerce applications. What I don't understand is how dependency injection fits into web applications (I have realized it is not only used for facilitating unit testing), or when I should use a custom model binder, or how to model the business logic when there is already a database schema in place. I read the forum quite often and it would be very helpful if some experienced developers could give me an insight into how to proceed. Do I need to read some books to understand the overall idea behind web applications? And what kind of application should I start building myself? I don't think it would be useful to create examples similar to the tutorials.
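    To make the dependency injection point concrete, here is a minimal constructor-injection sketch for an ASP.NET MVC controller. The names IOrderRepository, SqlOrderRepository and OrdersController are invented for illustration; in a real project a container (Unity, Ninject, StructureMap, ...) supplies the repository. That seam is what lets you swap implementations, or point at an existing database schema, without touching the controller; the unit-testing benefit is just one consequence.

      using System.Collections.Generic;
      using System.Web.Mvc;

      public interface IOrderRepository
      {
          IEnumerable<string> GetOpenOrderNumbers(int customerId);
      }

      public class SqlOrderRepository : IOrderRepository
      {
          public IEnumerable<string> GetOpenOrderNumbers(int customerId)
          {
              // Query the existing database schema here (EF, ADO.NET, ...).
              yield break;
          }
      }

      public class OrdersController : Controller
      {
          private readonly IOrderRepository _orders;

          // The controller never constructs its own dependency; whoever creates
          // the controller (the container in production, the test in a unit test)
          // decides which implementation it gets.
          public OrdersController(IOrderRepository orders)
          {
              _orders = orders;
          }

          public ActionResult Open(int customerId)
          {
              return View(_orders.GetOpenOrderNumbers(customerId));
          }
      }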

    Read the article

  • ImageMagick Make Fails - PHP extension

    - by Kyle Adams
    So I was doing the following:
    sudo apt-get install php-pear php5-dev
    sudo apt-get install imagemagick libmagickwand-dev
    sudo pecl install imagick
    It all works till I get the error: make: *** [imagick_class.lo] Error 1 ERROR: `make' failed Which, according to blog posts and forums, is because of libmagick9-dev. However, when trying to install this I get: sudo apt-get install libmagick9-dev Reading package lists... Done Building dependency tree Reading state information... Done Package libmagick9-dev is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from another source However the following packages replace it: graphicsmagick-libmagick-dev-compat E: Package 'libmagick9-dev' has no installation candidate Thoughts?

    Read the article

  • Sound unavailable every other session

    - by Oxwivi
    On my desktop running Oneiric, sometimes there is no sound at all, but at other times it works normally. My setup is built from the ground up from minimal Ubuntu, but since sound does work fine at times, I don't think it is a backend dependency issue. When it works, it will play anything from regular audio files and movies to YouTube Flash players. For the record, I installed LXDE with the alsa-base and alsa-utils packages, which are the only audio-related dependencies of lubuntu-desktop. For a while I also used a persistent Oneiric live USB, and do not recall any sound issues. It's one thing to not play sound at all, but playing sound only under some very unclear circumstances is something else. Please help me diagnose it.

    Read the article

  • Can I have a .desktop Launcher for both Python2 and Python3 depending on version installed?

    - by Takkat
    With only a few minor issues, I moved my application from Python 2 to Python 3, making sure it will still run with Python 2.7, and hence it only has python = 2.7 as a dependency. This was done mainly because Python 3 and some dependencies are not installed in a default 12.04 LTS, and I do not want my users to have to install all of Python 3 just to run my script. When I create an appname.desktop launcher I now need to decide whether it starts my application using Python 2 or Python 3, like:
    EXEC=python /path/app.py
    EXEC=python3 /path/app.py
    But what I would like it to do is launch the application with the Python 3 interpreter if Python 3 is installed, and otherwise use Python 2. How can this be done? Do I need to handle it in my package installation script, or can I have a launcher which can handle both (in case people install Python 3 after they had installed my script)?

    Read the article

  • How do I install GMSH?

    - by Steph Bredenhann
    I am trying to install Gmsh on 12.04 x64: xxx@sjb-linux:/320/installslinux/gmsh$ sudo apt-get install gmsh Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: gmsh : Depends: libmed1 (>= 3.0.3) but it is not going to be installed E: Unable to correct problems, you have held broken packages. xxx@sjb-linux:/320/installslinux/gmsh$ I have now tried all the advice I could find (sudo apt-get -f install, sudo apt-get clean) with no success; these commands report no problems at all. I'd appreciate any help.

    Read the article

  • Windows Azure Recipe: Consumer Portal

    - by Clint Edmonson
    Nearly every company on the internet has a web presence. Many are merely using theirs for informational purposes. More sophisticated portals allow customers to register their contact information and provide some level of interaction or customer support. But as our understanding of how consumers use the web increases, the more progressive companies are taking advantage of the social web and rich media delivery to connect at a deeper level with the consumers of their goods and services.

    Drivers
    - Cost reduction
    - Scalability
    - Global distribution
    - Time to market

    Solution
    Here’s a sketch of how a Windows Azure Consumer Portal might be built out:

    Ingredients
    - Web Role – this will host the core of the solution. Each web role is a virtual machine hosting an application written in ASP.NET (or optionally PHP or Node.js). The number of web roles can be scaled up or down as needed to handle peak and non-peak traffic loads.
    - Database – every modern web application needs to store data. SQL Azure databases look and act exactly like their on-premise siblings but are fault tolerant and have data redundancy built in.
    - Access Control (optional) – if identity needs to be tracked within the solution, the access control service combined with the Windows Identity Foundation framework provides out-of-the-box support for several social media platforms including Windows Live ID, Google, Yahoo!, and Facebook. It also has a provider model to allow integration with other platforms as well.
    - Caching (optional) – for sites with high traffic and lots of read-only data and lists, the distributed in-memory caching service can be used to cache and serve up static data at higher scale and speed than direct database requests. It can also be used to manage user session state.
    - Blob Storage (optional) – for sites that serve up unstructured data such as documents, video, audio, device drivers, and more. The data is highly available and stored redundantly across data centers. Each entry in blob storage is provided with its own unique URL for direct access by the browser.
    - Content Delivery Network (CDN) (optional) – for sites that serve users around the globe, the CDN is an extension to blob storage that, when enabled, will automatically cache frequently accessed blobs and static site content at edge data centers around the world. The data can be delivered statically or streamed in the case of rich media content.

    Training Labs
    These links point to online Windows Azure training labs where you can learn more about the individual ingredients described above. (Note: The entire Windows Azure Training Kit can also be downloaded for offline use.)
    - Windows Azure (16 labs) – Windows Azure is an internet-scale cloud computing and services platform hosted in Microsoft data centers, which provides an operating system and a set of developer services which can be used individually or together. It gives developers the choice to build web applications; applications running on connected devices, PCs, or servers; or hybrid solutions offering the best of both worlds. New or enhanced applications can be built using existing skills with the Visual Studio development environment and the .NET Framework. With its standards-based and interoperable approach, the services platform supports multiple internet protocols, including HTTP, REST, SOAP, and plain XML.
    - SQL Azure (7 labs) – Microsoft SQL Azure delivers on the Microsoft Data Platform vision of extending the SQL Server capabilities to the cloud as web-based services, enabling you to store structured, semi-structured, and unstructured data.
    - Windows Azure Services (9 labs) – As applications collaborate across organizational boundaries, ensuring secure transactions across disparate security domains is crucial but difficult to implement. Windows Azure Services provides hosted authentication and access control using powerful, secure, standards-based infrastructure.

    See my Windows Azure Resource Guide for more guidance on how to get started, including links to web portals, training kits, samples, and blogs related to Windows Azure.
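    As a rough illustration of the "Caching (optional)" ingredient, the sketch below shows the cache-aside pattern a portal page might use for read-mostly lookup data. ICache is a deliberately generic, hypothetical interface standing in for whichever distributed cache client the portal uses; the point is the pattern (check the cache, fall back to the database, repopulate), not a specific Azure API.

      using System;
      using System.Collections.Generic;

      // Hypothetical cache abstraction; a real portal would wrap its actual
      // distributed cache client behind something like this.
      public interface ICache
      {
          bool TryGet<T>(string key, out T value);
          void Put<T>(string key, T value, TimeSpan timeToLive);
      }

      public class CategoryLookupService
      {
          private readonly ICache _cache;
          private readonly Func<IList<string>> _loadFromDatabase;

          public CategoryLookupService(ICache cache, Func<IList<string>> loadFromDatabase)
          {
              _cache = cache;
              _loadFromDatabase = loadFromDatabase;
          }

          public IList<string> GetCategories()
          {
              // Serve read-mostly data from memory so web roles do not hit
              // the database on every request.
              IList<string> categories;
              if (_cache.TryGet("categories", out categories))
                  return categories;

              // Cache miss: load once from the database and repopulate.
              categories = _loadFromDatabase();
              _cache.Put("categories", categories, TimeSpan.FromMinutes(10));
              return categories;
          }
      }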

    Read the article

  • Use adapter pattern for coupled classes

    - by kaiseroskilo
    I need (for unit testing purposes) to create adapters for external library classes. ExchangeService and ContactsFolder are Microsoft's implementations in its EWS library. So I created my adapters that implement my interfaces, but it seems that ContactsFolder has a dependency on ExchangeService in its constructor. The problem is that I cannot instantiate ContactsFolderAdapter without somehow accessing the actual ExchangeService instance (I see only ExchangeServiceAdapter in scope). Is there a better pattern for this that retains the adapter classes? Or should I "infect" ExchangeServiceAdapter with some kind of GetActualObject method?
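    One option that keeps the adapters and avoids a public GetActualObject is to give ExchangeServiceAdapter a factory method that performs the coupled construction itself, so only the adapter assembly ever touches the concrete EWS types. The sketch below assumes the EWS Managed API types (ExchangeService, ContactsFolder, WellKnownFolderName); the interfaces and member names are invented for illustration and error handling is omitted.

      using Microsoft.Exchange.WebServices.Data;

      public interface IExchangeService
      {
          IContactsFolder BindContactsFolder();
      }

      public interface IContactsFolder
      {
          string DisplayName { get; }
      }

      public class ExchangeServiceAdapter : IExchangeService
      {
          private readonly ExchangeService _service;

          public ExchangeServiceAdapter(ExchangeService service)
          {
              _service = service;
          }

          // Visible to sibling adapters in this assembly only; callers outside
          // the assembly never see the concrete service.
          internal ExchangeService Inner { get { return _service; } }

          public IContactsFolder BindContactsFolder()
          {
              // The adapter, not the caller, performs the coupled construction,
              // so consumers never need the real ExchangeService instance.
              ContactsFolder folder = ContactsFolder.Bind(_service, WellKnownFolderName.Contacts);
              return new ContactsFolderAdapter(folder);
          }
      }

      public class ContactsFolderAdapter : IContactsFolder
      {
          private readonly ContactsFolder _folder;

          public ContactsFolderAdapter(ContactsFolder folder)
          {
              _folder = folder;
          }

          public string DisplayName { get { return _folder.DisplayName; } }
      }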

    Read the article

  • Error trying to install the Java SDK

    - by Ray
    I need to install the Java 6 SDK, but after running this: sudo apt-get install sun-java6-jdk sun-java6-jre sun-java6-source I end up with this: Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: sun-java6-jdk : Depends: sun-java6-bin (>= 6.26-1lucid1) but it is not going to be installed sun-java6-jre : Depends: sun-java6-bin (>= 6.26-1lucid1) but it is not going to be installed or ia32-sun-java6-bin (>= 6.26-1lucid1) but it is not installable Recommends: gsfonts-x11 but it is not going to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). I'm quite new to Ubuntu and need the packages for my course. I guess they've become corrupted, but how can I fix this?

    Read the article

  • Battery Indicator Missing

    - by Edwin
    Duplicate of: No Battery Status Icon I have just recently upgraded from 11.04 "Natty Narwhal" to 11.10 "Oneiric Ocelot" on my laptop, but do not have any battery indicator (which should be located between the volume and date indicator in Unity). I have already run sudo apt-get install indicator-power and got the following output: Reading package lists... Done Building dependency tree Reading state information... Done indicator-power is already the newest version. indicator-power set to manually installed. The following packages were automatically installed and are no longer required: //list of packages Use 'apt-get autoremove' to remove them. 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. In addition, I have already tried reinstalling, but still don't have a battery indicator. What else can I do?

    Read the article

  • subprocess installed post-installation script returned error exit code 1

    - by Laura quintero
    I had installed Snort on Ubuntu 11.04 and uninstalled it because I had problems; reinstalling it leaves a problem: Reading package lists ... done Building dependency tree Reading state information ... done Calculating upgrade ... ready 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 1 not fully installed or removed. 0 B will be used for additional disk space after this operation. Do you want to continue [S / n]? s Configuring snort (2.8.5.2-9.1) ... * Stopping Network Intrusion Detection System snort * - No running snort instance found * Starting Network Intrusion Detection System snort [fail] invoke-rc.d: initscript snort, action "start" failed. dpkg: error processing snort (--configure): subprocess installed post-installation script returned error exit code 1 Errors were encountered while processing: snort E: Sub-process /usr/bin/dpkg returned an error code (1) Any solution? Commands already used: apt-get clean, apt-get remove snort, sudo apt-get dist-upgrade, dpkg --remove --force-remove-reinstreq snort. And nothing.

    Read the article

  • How to install an older version of Java

    - by Alex Spurling
    I updated my installation of the sun-java6-jdk package today to version 6.24-1build0.10.10.1 after being prompted by the update manager. However, this now causes some compilation failures, so I'd like to revert back to the previous version that I had installed. I've tried using Synaptic but the 'Force Version' menu command is disabled. I've tried the following command to install the previous version: sudo apt-get install sun-java6-jdk=6.22-0ubuntu1~10.10 But I'm not sure that I have the correct version: Reading package lists... Done Building dependency tree Reading state information... Done E: Version ‘6.22-0ubuntu1~10.10’ for ‘sun-java6-jdk’ was not found I've taken this version number from this changelog: https://launchpad.net/ubuntu/+source/sun-java6/+changelog Is this the correct way to install a previous version of a package? Have I got the correct version from the sun-java6 changelog?

    Read the article

  • What is the correct way to install ATI Catalyst Video Drivers in 12.04 LTS?

    - by Stephen Myall
    I am planning on doing a fresh install of 12.04 LTS next weekend (I am currently on the Beta) and want to know: what is the correct way to install the ATI Catalyst video drivers in 12.04 LTS? I have been reviewing all the Q&A on AU, and the reason for asking this specific question is that I may be missing some dependencies from my planned approach. In a previous AU question relating to 11.10 (here), NOT 12.04, the accepted response stated that this was a dependency: sudo apt-get install ia32-libs I was also receiving conflicting advice on other websites, which cast doubt on what the correct approach was.

    Read the article

  • Can't untar Testdisk 6.14 using live CD

    - by Orestes
    I'm using an Ubuntu LiveCD right now and need to recover files from a (Windows 7) partition. I've read about TestDisk and tried downloading it and untarring it, but: testdisk-6.14-WIP.linux26.tar.bz2: Cannot open: No such file or directory tar (child): Error is not recoverable: exiting now tar: Child returned status 2 tar: Error is not recoverable: exiting now I don't know why it doesn't work (noob). I tried using sudo apt-get install testdisk but: Reading package lists... Done Building dependency tree Reading state information... Done E: Unable to locate package testdisk so.... HELP :D

    Read the article

  • How to add support for the JPEG image format

    - by Samir Sabri
    After installing ImageMagick, I tested it with a jpg image, like this: identify 1.jpg But I got this result: identify: no decode delegate for this image format `1.jpg' @ error/constitute.c/ReadImage/550. Then I tried to add support for the JPEG format with: yum install libjpeg libjpeg-devel but I got: Setting up Install Process No package libjpeg available. No package libjpeg-devel available. Nothing to do I thought I needed to use apt-get instead, so I did: apt-get install libjpeg libjpeg-devel but I got: Reading package lists... Done Building dependency tree Reading state information... Done E: Unable to locate package libjpeg E: Unable to locate package libjpeg-devel Is there an easy way to get those libraries installed? I am using Ubuntu 12.04.

    Read the article

  • Cannot install Android 2.3 libs due to missing ia32-libs-multiarch

    - by Enrique
    I need to get my box up to par for Android development, but cannot get ia32-libs to install for the life of me. Can anyone help? The error Android's tool gave me was "Stopping ADB server failed (code -1)", and after a bit of investigation I found that I needed to install the ia32-libs package, which from my understanding is a pain. Ubuntu 12.04 (x64) xxx@xxx:~$ sudo apt-get -f install ia32-libs Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: ia32-libs : Depends: ia32-libs-multiarch E: Unable to correct problems, you have held broken packages.

    Read the article

  • How can I install binutils from source?

    - by sven
    uname -a: Linux ubuntu 3.5.0-23-generic #35-Ubuntu SMP Thu Jan 24 13:05:29 UTC 2013 i686 i686 i686 GNU/Linux root@ubuntu:/home/ubuntu# apt-get source binutils Reading package lists... Done Building dependency tree Reading state information... Done E: Ignore unavailable target release 'stable' of package 'binutils' E: Unable to find a source package for I did apt-get update before typing the command. How can I get binutils? I am using Ubuntu 12.10. I am following the instructions on https://wiki.ubuntu.com/Toolchain/Crosscompilers/ARMEABIToolchain and am stuck at the first line. I did sudo add-apt-repository ppa:germia/archive3 previously; however, I got some errors, so I did sudo add-apt-repository --remove ppa:germia/archive3 to undo the setting. I wonder if my problem is related with this PPA?

    Read the article

  • Attached Property port of my Window Close Behavior

    - by Reed
    Nishant Sivakumar just posted a nice article on The Code Project. It is a port, to WPF attached properties, of the MVVM-friendly Blend Behavior I wrote about in a previous article. While similar to the WindowCloseBehavior code I posted on the Expression Code Gallery, Nishant Sivakumar's version works in WPF without taking a dependency on the Expression Blend SDK. I highly recommend reading this article: Handling a Window's Closed and Closing Events in the View-Model. It is a very nice alternative approach to this common problem in MVVM.
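    For readers unfamiliar with the attached-property approach, here is a rough sketch of the general shape of such a helper (this is not Nishant's code): a boolean attached property, bound to a flag on the view-model, that closes the Window when the flag becomes true. The names WindowCloser and CloseWhenTrue are made up for illustration; the article's version goes further and surfaces the Closed and Closing events to the view-model.

      using System.Windows;

      public static class WindowCloser
      {
          public static readonly DependencyProperty CloseWhenTrueProperty =
              DependencyProperty.RegisterAttached(
                  "CloseWhenTrue",
                  typeof(bool),
                  typeof(WindowCloser),
                  new PropertyMetadata(false, OnCloseWhenTrueChanged));

          public static bool GetCloseWhenTrue(DependencyObject obj)
          {
              return (bool)obj.GetValue(CloseWhenTrueProperty);
          }

          public static void SetCloseWhenTrue(DependencyObject obj, bool value)
          {
              obj.SetValue(CloseWhenTrueProperty, value);
          }

          private static void OnCloseWhenTrueChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
          {
              // Close the Window this property is attached to as soon as the bound
              // view-model flag flips to true; the view-model never needs a
              // reference to the Window itself.
              var window = d as Window;
              if (window != null && (bool)e.NewValue)
              {
                  window.Close();
              }
          }
      }

      // XAML usage (ShouldClose is a bool property on the view-model):
      //   <Window local:WindowCloser.CloseWhenTrue="{Binding ShouldClose}" ...>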

    Read the article

  • Fedora login gone after Ubuntu updates on a dual boot

    - by andrew
    After a software update for Ubuntu, my dual boot with Fedora will not show Fedora in the boot menu. It just boots into Ubuntu, and when I hold Shift and boot it only has Ubuntu in the list. I have tried the post about installing grub-customizer, but when I run the install command I get: Reading package lists... Done Building dependency tree Reading state information... Done E: Unable to locate package grub-customizer I cannot find any other way to fix this problem. I am a complete newbie to Linux.

    Read the article

  • Unlock the Java EE 6 Platform using NetBeans 7.1

    - by arungupta
    The NetBeans IDE provides tools, templates, and code generators that can be used with the specifications that are part of the Java EE 6 Platform. In a recent article, Geertjan builds a simple end-to-end application using the standard Model-View-Controller architecture. It uses Java Persistence API 2, Servlets 3, JavaServer Faces 2, Enterprise JavaBeans 3.1, Contexts and Dependency Injection 1.0, and the Java API for RESTful Web Services 1.1, showing the complete stack. A self-paced, extensive hands-on lab covering this article and much more is also available here. A video (47 minutes) explaining how to build a similar application can be viewed here.

    Read the article

  • SQL Server IO handling mechanism can be severely affected by high CPU usage

    - by sqlworkshops
    Are you using an SSD or a SAN/NAS-based storage solution and sporadically observe SQL Server experiencing high IO wait times, or does your DAS/HDD from time to time become very slow according to SQL Server statistics? Read on… I need your help to up vote my Connect item – https://connect.microsoft.com/SQLServer/feedback/details/744650/sql-server-io-handling-mechanism-can-be-severely-affected-by-high-cpu-usage. Instead of taking a few seconds, queries could take minutes or hours to complete when the CPU is busy.

    In SQL Server, when a query / request needs to read data that is not in the data cache, or when the request has to write to disk, like transaction log records, the request / task will queue up the IO operation and wait for it to complete (task in suspended state; this wait time is the resource wait time). When the IO operation is complete, the task will be queued to run on the CPU. If the CPU is busy executing other tasks, this task will wait (task in runnable state) until other tasks in the queue either complete, get suspended due to waits, or exhaust their quantum of 4ms (this is the signal wait time, which along with the resource wait time will increase the overall wait time). When the CPU becomes free, the task will finally be run on the CPU (task in running state).

    The signal wait time can be up to 4ms per runnable task; this is by design. So if a CPU has 5 runnable tasks in the queue, then this query, after the resource becomes available, might wait up to a maximum of 5 x 4ms = 20ms in the runnable state (normally less, as other tasks might not use their full quantum).

    In case the CPU usage is high, let's say many CPU intensive queries are running on the instance, there is a possibility that the IO operations that are completed at the hardware and operating system level are not yet processed by SQL Server, keeping the task in the resource wait state for longer than necessary. In the case of an SSD, the IO operation might even complete in less than a millisecond, but it might take SQL Server hundreds of milliseconds, for instance, to process the completed IO operation. For example, let's say you have a user inserting 500 rows in individual transactions. When the transaction log is on an SSD or a battery backed up controller that has write cache enabled, all of these inserts will complete in 100 to 200ms. With a CPU intensive parallel query executing across all CPU cores, the same inserts might take minutes to complete. WRITELOG wait time will be very high in this case (both under sys.dm_io_virtual_file_stats and sys.dm_os_wait_stats). In addition you will notice a large number of WRITELOG waits, since log records are written by the LOG WRITER, and hence very high signal_wait_time_ms, leading to more query delays. However, the Performance Monitor counter PhysicalDisk: Avg. Disk sec/Write will report very low latency times.

    Such delayed IO handling also affects read operations, with artificially very high PAGEIOLATCH_SH wait times (while the number of PAGEIOLATCH_SH waits remains the same). This problem will manifest more and more as customers start using SSD based storage for SQL Server, since they drive the CPU usage to the limits with faster IOs. We have a few workarounds for specific scenarios, but we think Microsoft should resolve this issue at the product level.

    We have a Connect item open – https://connect.microsoft.com/SQLServer/feedback/details/744650/sql-server-io-handling-mechanism-can-be-severely-affected-by-high-cpu-usage – (with example scripts) to reproduce this behavior; please up vote the item so the issue will be addressed by the SQL Server product team soon.

    Thanks for your help and best regards,
    Ramesh Meyyappan
    Home: www.sqlworkshops.com
    LinkedIn: http://at.linkedin.com/in/rmeyyappan

    Read the article

  • Cannot install nodejs and npm

    - by user809829
    I'm trying to install nodejs and npm, however, it fails. This is my terminal: sudo apt-get install nodejs npm Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: nodejs : Conflicts: npm E: Unable to correct problems, you have held broken packages. What to do? I'm kinda lost :(

    Read the article

  • Why does my touchpad fail on resume from standby?

    - by pst007x
    On resume the touchpad is disabled and a reboot is needed to re-activate it. MacBook Pro 6.1, Ubuntu 11.10 Mac 64-bit. Suspend - ok. Suspend Resume - ok. However, on resume my touchpad works in the login screen, but after I enter my password and return to the desktop the touchpad fails. A USB mouse still works fine. I have to reboot in order to re-enable the touchpad. This was not an issue when I had Ubuntu 10.10 32-bit installed. The install was a fresh install. The bcm5974 driver will not install; it reports dependency errors. I manually tried to install all dependencies and I get this error: E: hid-dkms: subprocess installed post-installation script returned error exit status 10 E: bcm5974-dkms: dependency problems - leaving unconfigured Thanks

    Read the article

  • How to install packages which apt-get can't find?

    - by newcomer
    Hi, I need these packages to build the Android source, but I am getting this error: $ sudo apt-get install git-core gnupg flex bison gperf build-essential zip curl zlib1g-dev gcc-multilib g++-multilib libc6-dev-i386 lib32ncurses5-dev ia32-libs x11proto-core-dev libx11-dev lib32readline5-dev lib32z-dev [sudo] password for asdf: Reading package lists... Done Building dependency tree Reading state information... Done E: Unable to locate package libc6-dev-i386 E: Unable to locate package lib32ncurses5-dev E: Unable to locate package ia32-libs E: Unable to locate package lib32readline5-dev E: Unable to locate package lib32z-dev I tried to download and install, say, the libc6-dev-i386 Debian package from here. But when I double-click on the .deb file, Ubuntu Software Manager says wrong architecture 'amd64'. (My OS: Ubuntu 10.10 (updated), Processor: AMD Phenom II.)

    Read the article

  • Developing Schema Compare for Oracle (Part 5): Query Snapshots

    - by Simon Cooper
    If you've emailed us about a bug you've encountered with the EAP or beta versions of Schema Compare for Oracle, we probably asked you to send us a query snapshot of your databases. Here, I explain what a query snapshot is, and how it helps us fix your bug.

    Problem 1: Debugging users' bug reports
    When we started the Schema Compare project, we knew we were going to get problems with users' databases - configurations we hadn't considered, features that weren't installed, unicode issues, weird dependencies... With SQL Compare, users are generally happy to send us a database backup that we can restore using a single RESTORE DATABASE command on our test servers and immediately reproduce the problem. Oracle, on the other hand, would be a lot more tricky. As Oracle generally has a 1-to-1 mapping between instances and databases, any databases users sent would have to be restored to their own instance. Furthermore, the number of steps required to get a properly working database, and the size of most Oracle databases, made it infeasible to ask every customer who came across a bug during our beta program to send us their databases. We also knew that there would be lots of issues with data security that would make it hard to get backups. So we needed an easier way to be able to debug customers' issues and sort out what strange schema data Oracle was returning.

    Problem 2: Test execution time
    Another issue we knew we would have to solve was the execution time of the tests we would produce for the Schema Compare engine. Our initial prototype showed that querying the data dictionary for schema information was going to be slow (at least 15 seconds per database), and this is generally proportional to the size of the database. If you're running thousands of tests on the same databases, each one registering separate schemas, not only would the tests take hours and hours to run, but the test servers would be hammered senseless.

    The solution
    To solve these, we needed to be able to populate the schema of a database without actually connecting to it. Well, the IDataReader interface is the primary way we read data from an Oracle server. The data dictionary queries we use return their data in terms of simple strings and numbers, which we then process and reconstruct into an object model, and the results of these queries are identical for identical schemas. So, we can record the raw results of the queries once, and then replay these results to construct the same object model as many times as required without needing to actually connect to the original database. This is what query snapshots do. They are binary files containing the raw unprocessed data we get back from the Oracle server for all the queries we run on the data dictionary to get schema information. The core of the query snapshot generation takes the results of the IDataReader we get from running queries on Oracle, and passes the row data to a BinaryWriter that writes it straight to a file. The query snapshot can then be replayed to create the same object model; when the results of a specific query are needed by the population code, we can simply read the binary data stored in the file on disk and present it through an IDataReader wrapper. This is far faster than querying the server over the network, and allows us to run tests in a reasonable time.

    They also allow us to easily debug a customer's problem; using a simple snapshot generation program, users can generate a query snapshot that can be sent along with a bug report, which we can immediately replay on our machines to debug the issue, rather than having to obtain database backups and restore databases to test systems. There are also far fewer problems with data security; query snapshots only contain schema information, which is generally less sensitive than table data.

    Query snapshot implementation
    However, actually implementing such a feature did have a couple of 'gotchas' to it. My second blog post detailed the development of the dependencies algorithm we use to ensure we get all the dependencies in the database, and that algorithm uses data from both databases to find all the needed objects - what database you're comparing to affects what objects get populated from both databases. We get information on these additional objects using an appropriate WHERE clause on all the population queries. So, in order to accurately replay the results of querying the live database, the query snapshot needs to be a snapshot of a comparison of two databases, not just a population of a single database. Furthermore, although the code population queries (e.g. querying all_tab_cols to get column information) can simply be passed straight from the IDataReader to the BinaryWriter, we need to hook into and run the live dependencies algorithm while we're creating the snapshot to ensure we get the same WHERE clauses, and the same query results, as if we were populating straight from a live system. We also need to store the results of the dependencies queries themselves, as the resulting dependency graph is stored within the OracleDatabase object that is produced, and is later used to help order actions in synchronization scripts. This is significantly helped by the dependencies algorithm being deterministic - given the same input, it will always return the same output. Therefore, when we're replaying a query snapshot and processing dependency information, we simply have to return the results of the queries in the order we got them from the live database, rather than trying to calculate the contents of all_dependencies on the fly. Query snapshots are a significant feature in Schema Compare that really helps us debug problems with the tool, as well as making our testers happier. Although not really user-visible, they are very useful to the development team, helping us fix bugs in the product much faster than we otherwise would be able to.
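    As a simplified illustration of the record step described above (not Red Gate's actual code), the sketch below pulls each row off an IDataReader and writes the raw field values to a BinaryWriter. The real implementation handles many queries per snapshot, richer type information, and the dependencies algorithm; this only shows the core idea for strings and numbers.

      using System;
      using System.Data;
      using System.IO;

      public static class QuerySnapshotWriter
      {
          public static void Record(IDataReader reader, BinaryWriter writer)
          {
              int fieldCount = reader.FieldCount;
              writer.Write(fieldCount);

              while (reader.Read())
              {
                  writer.Write(true); // marker: another row follows
                  for (int i = 0; i < fieldCount; i++)
                  {
                      if (reader.IsDBNull(i))
                      {
                          writer.Write((byte)0); // null column
                      }
                      else if (reader.GetFieldType(i) == typeof(decimal))
                      {
                          writer.Write((byte)1); // numeric column
                          writer.Write(reader.GetDecimal(i));
                      }
                      else
                      {
                          writer.Write((byte)2); // everything else stored as a string
                          writer.Write(Convert.ToString(reader.GetValue(i)));
                      }
                  }
              }
              writer.Write(false); // marker: end of result set
          }
      }

      // Replay is the mirror image: read the same stream back and expose it
      // through an IDataReader wrapper, so the population code cannot tell it
      // is not talking to a live Oracle server.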

    Read the article
