Search Results

Search found 51790 results on 2072 pages for 'long running'.


  • Server 2008 Hard Faults

    - by claw
    Hey all, please bear with me as I haven't looked at a server in a very long time. The problem I am having is with a Windows 2008 Standard FE Service Pack 2 machine: Intel Xeon X3430 @ 2.40 GHz, 4 GB memory, 64-bit. There seem to be no problems other than the physical memory peaking at 91%, always with over 100 hard faults per second. To my understanding, hard faults should be fairly rare on a machine with this much memory. Are there any logs I can show you or investigate myself? The general performance of the machine is OK; I can access SBS 2008 and change settings fairly smoothly without hangs etc. However, we connect to the server and do quite a bit of SQL via an application, and for a query to retrieve, say, 20 rows, it can take 20+ seconds. Thanks in advance, Jamie. EDIT: What the server is used for: IIS, an ASP web service, SQL 2008, and Exchange. (Unable to upload screenshots due to low reputation - why doesn't my SO rep work here? :)


  • Evaluating mean and std as simulations are added

    - by Luca Cerone
    I have simulations that evaluate a certain value X. I run the simulations several times and save the value of X in a vector V. When all the runs have finished, I evaluate the mean and standard deviation of the vector V. This approach works, but it implies saving all the values of X. As my computer is quite old and has limited RAM, I was wondering if there is a way to update the mean value M and the standard deviation S, knowing the value of X at the (n+1)-th run and the values of M and S after n runs. How can I update the mean value and the standard deviation as simulations are added to the set? Please note that this is just a conceptual example; I don't save only one number X but thousands at each simulation, so I really have problems running a big number of runs if I have to keep all the past values in memory.
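
    A standard answer to this kind of question is Welford's online algorithm, which keeps only the count, the running mean, and the running sum of squared deviations, and updates them one sample at a time. A minimal Python sketch (the class and variable names are illustrative, not from the question):

      import math

      class RunningStats:
          """Welford's online algorithm: streaming mean and standard deviation."""

          def __init__(self):
              self.n = 0        # number of samples seen so far
              self.mean = 0.0   # running mean M after n samples
              self.m2 = 0.0     # running sum of squared deviations from the mean

          def add(self, x):
              # M_{n+1} = M_n + (x - M_n) / (n + 1)
              self.n += 1
              delta = x - self.mean
              self.mean += delta / self.n
              self.m2 += delta * (x - self.mean)

          def std(self):
              # sample standard deviation; needs at least two samples
              return math.sqrt(self.m2 / (self.n - 1)) if self.n > 1 else 0.0

    Each run then calls add(X) as soon as X is computed (or uses one RunningStats per tracked quantity when there are thousands), and M and S can be read off at any time without keeping the vector V in memory.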


  • Teaching logical/analytical thinking

    - by Joshua
    I have been trial-running a club for the past year in which I teach programming, and while the members have progressed, what they really lack is the most fundamental concept in programming: analytical thinking. As I now approach the second year of teaching the children (aged 12 - 14), I am realising that before I begin teaching them the syntax and how to actually program an app (or what they would rather have, a game), I need to introduce them to analytical thinking first. I have already found Scratch and similar things such as Light-Bot, and will most certainly be using them to teach the children how to implement their logical thinking, but what I really need are some tips or articles on how to teach analytical thinking itself to children aged 12 - 14. What I'm looking for are some ideas on how to teach the kind of thinking that these kids will need in order to get into programming, whether that be analytical, logical or critical. How and what should I teach them relating to the way their minds need to be wired when programming solutions to problems?


  • Is remmina 1.0 in the standard universe repositories?

    - by jackweirdy
    I just noticed that the copy of remmina I have on my machine (running Ubuntu 12.04) is 9.99.1, which is up to date according to apt. The remmina website says that the most recent version is 1.0, which uses FreeRDP. I'd like to use FreeRDP instead of rdesktop because of the improved MS RemoteApp support. To cut to the chase: is version 1.0 of remmina in the repos, or do I have to install it manually? (I've had a quick browse but haven't found anything.)


  • Best option for PDF viewer embedded in web app

    - by RationalGeek
    I have a web app that needs to be able to display a PDF. It needs to allow the user to page through the PDF, and my application needs to know which page is currently being viewed, because other aspects of the web app will change based on the current page. Ideally it would not depend on the client having Adobe Reader, but I could probably support that dependency. What are my best options for this? My application stack consists of ASP.NET 4 and, optionally, Silverlight 5. I could also use something client-side based on JavaScript / HTML, if such a thing exists. I found ComponentOne's offering for this and it seems like the leading candidate at this point, but I want to know if there are other options I should consider. Edit: per Fosco's comment, converting the PDF to another format (such as HTML) might be an option, as long as I could tie parts of the converted document back to the original PDF page numbers. Another note: this has to run entirely on our servers; it would not be acceptable to use a third-party service to view the PDFs.


  • Unity Bar auto-hide behaviour and application icons placement

    - by Andrei
    The first issue: it seems that sometimes when I hover over the left edge of the screen, the Unity bar will not stay on top of other windows even if I continue to hover the cursor over it, while at other times it will stay on top. Is this normal behaviour, or am I affected by some bug or inconsistency? If it's normal, what's the logic behind it? The second issue: application icons for running applications do not maintain their position in the Unity bar but instead move around according to some weird rules (if any?) that I can't understand. Is this to be expected, or is it a bug? Is there a way to force them to stop moving around? I like to see certain apps in certain positions, and this bothers me.


  • How Facebook's Ad Bid System Works

    - by pnongrata
    When you are creating an ad on Facebook, you are provided with a "suggested bid" range (e.g., $0.90 - $2.15 USD). According to this page: "The suggested bid range is there to help you pick a maximum bid so your ad will be successful. It's based on how many other advertisers are competing to show their ad to the same audience as you are." I'm interested in understanding what's actually going on (technically) under the hood here. Say a user logs into Facebook. On the server side, the HTTP request that the user's browser sent (as part of the login) is handled, and the server needs to figure out which ad to display back to the user. I assume this is where the "bidding" system comes into play? Say that, based on this user's demographics and on the audience targeting that several competing advertisers designed their campaigns with, let's pretend that Facebook sees a pool of 20 different ads it could return. How does this bidding system help Facebook determine which of the 20 ads it returns to the client side? I'm guessing that advertisers who "bid more" get prioritized over those who "bid less". But when does this bidding take place? How often does an advertiser need to re-bid? How long is a bid binding for? Once I understand these usage-related concepts behind ads, it will probably be obvious which of the following "selection strategies" the backend is using: round robin, prioritized round robin, randomized (doubtful), history-based, or MVP-based. Thanks to anyone who can help point me in the right direction and explain how these suggested bid systems work.
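
    Facebook has not published the exact mechanics, but auction-based ad servers of this kind are commonly described as ranking each eligible ad by an effective bid (the advertiser's standing maximum bid weighted by a predicted response rate), with the bid staying in force until the advertiser edits it rather than being re-submitted per request. A purely hypothetical Python sketch of that "prioritized by bid" strategy (the field names are invented for illustration; this is not Facebook's disclosed algorithm):

      def select_ad(candidate_ads):
          """Pick the ad with the highest effective bid.

          candidate_ads: list of dicts with 'max_bid' (the advertiser's
          standing maximum bid) and 'est_response_rate' (the platform's
          predicted chance this user responds). Illustrative only.
          """
          if not candidate_ads:
              return None
          return max(candidate_ads,
                     key=lambda ad: ad["max_bid"] * ad["est_response_rate"])

    Under a model like this, the answer to "when does bidding take place" would be: on every ad request, over whatever standing bids are current at that moment.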


  • What's better than OutputDebugString for Windows debugging?

    - by Peter Turner
    So, before I came to my current place of employment, the Windows OutputDebugString function was completely unheard of: everyone was adding debug messages to string lists and saving them to file, or doing ShowMessage popups (not very useful for debugging drawing issues). Now everybody (all 6 of us) is like "What can I say about this OutputDebugString?" and I'm like, "with much power comes much responsibility." I kind of feel as though I've passed a silent but deadly code smell on to my colleagues. Ideally we wouldn't have bugs to debug, right? Ideally we'd have over 0% code coverage, eh? So as far as petty debugging is concerned (not a complete rewrite of a 3-million-line Delphi behemoth), what's a better way to debug running code than just adding OutputDebugString all over?


  • OBJECT_Name parameters and dbid

    - by steveh99999
    If you've been using SQL Server for a long time, you may have been used to using the OBJECT_NAME system function in the past - especially useful for converting table IDs into table names when querying sysobjects and sysindexes. However, if you're an old-school DBA, did you know that since SQL 2005 Service Pack 2 it accepts a second parameter: database_id? For example, this can be used to summarize some useful information from sys.dm_exec_query_stats. When reviewing SQL Server performance, it can be useful to look at the most heavily used stored procedures rather than at inefficient but less frequently used procedures. Here's a query to summarize performance data on the most heavily used stored procedures across all databases on a server:

      SELECT TOP 20
             DENSE_RANK() OVER (ORDER BY SUM(execution_count) DESC) AS rank,
             OBJECT_NAME(qt.objectid, qt.dbid) AS 'proc name',
             (CASE WHEN qt.dbid = 32767 THEN 'mssqlresource' ELSE DB_NAME(qt.dbid) END) AS 'Database',
             OBJECT_SCHEMA_NAME(qt.objectid, qt.dbid) AS 'schema',
             SUM(execution_count) AS 'TotalExecutions',
             SUM(total_worker_time) AS 'TotalCPUTimeMS',
             SUM(total_elapsed_time) AS 'TotalRunTimeMS',
             SUM(total_logical_reads) AS 'TotalLogicalReads',
             SUM(total_logical_writes) AS 'TotalLogicalWrites',
             MIN(creation_time) AS 'earliestPlan',
             MAX(last_execution_time) AS 'lastExecutionTime'
      FROM sys.dm_exec_query_stats qs
      CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
      WHERE OBJECT_NAME(qt.objectid, qt.dbid) IS NOT NULL
      GROUP BY OBJECT_NAME(qt.objectid, qt.dbid), qt.dbid, OBJECT_SCHEMA_NAME(qt.objectid, qt.dbid)


  • Learning Erlang vs learning node.js

    - by Noli
    I see a lot of crap online about how Erlang kicks node.js' ass in just about every conceivable category. So I'd like to learn Erlang and give it a shot, but here's the problem: I'm finding that I have a much harder time picking up Erlang than I did picking up node.js. With node.js, I could pick a relatively complex project and in a day have something working. With Erlang, I'm running into barriers and not going nearly as quickly. So, for those with more experience: is Erlang complicated to learn, or am I just missing something? Node.js might not be perfect, but I seem to be able to get things done with it.


  • Android game performance regarding timers

    - by iQue
    I'm new to the game-dev world and I have a tendency to over-simplify my code, and sometimes this costs me a lot of memory. I'm using a custom TimerTask that looks like this:

      public class Task extends TimerTask {
          private MainGamePanel panel;

          public Task(MainGamePanel panel) {
              this.panel = panel;
          }

          /**
           * When the timer executes, this code is run.
           */
          public void run() {
              panel.createEnemies();
          }
      }

    This task calls this method from my view:

      public void createEnemies() {
          Bitmap bmp = BitmapFactory.decodeResource(getResources(), R.drawable.female);
          if (enemyCounter < 24) {
              enemies.add(new Enemy(bmp, this));
          }
          enemyCounter++;
      }

    Since I call this in the onCreate method instead of in my view's constructor (because my enemies need to get the width and height of the view), I'm wondering if this will work when I have multiple levels in the game (starting a new intent), and whether this kind of timer really is the best way, performance-wise, to add a delay between the spawning times of my enemies. Adding the code for my timer in case anyone came here because they don't understand timers:

      private Timer timer1 = new Timer();
      private long delay1 = 5 * 1000; // 5 sec delay

      public void surfaceCreated(SurfaceHolder holder) {
          timer1.schedule(new Task(this), 0, delay1); // start my timer with the delay
          thread.setRunning(true);
          thread.start();
      }


  • Configure IPv6 on your Linux system (Ubuntu)

    After the presentation on IPv6 at the first event of the Emtel Knowledge Series and some recent discussions on social media networks with other geeks and Linux-interested IT people here in Mauritius, I thought that I should (finally) give it a try and tweak my local network infrastructure. Honestly, I have been too busy with contractual project work and it never really occurred to me to set up IPv6 in my LAN. Well, the following paragraphs are going to shed some light on those aspects of modern computer and network technology. This is the first article in a series on IPv6 configuration: Configure IPv6 on your Linux system; DHCPv6: Provide IPv6 information in your local network; Enabling DNS for IPv6 infrastructure; Accessing your web server via IPv6. Piece of advice: this is based on my findings on the internet while reading other people's helpful articles and going through a couple of man pages on my local system.

    Let's embrace IPv6. The basic configuration on Linux is actually very simple, as the kernel, operating system, and user-space programs support the protocol natively. If your system is ready to go for IP (aka IPv4), then you are good to go for anything else; at least, I didn't have to install any additional packages on my system(s). We are going to assign a static IPv6 address to the system. Hence, we have to modify the definition of interfaces and check whether we have an inet6 entry specified. Open your favourite text editor and check the following entries (it should look at least similar to this):

      $ sudo nano /etc/network/interfaces

      auto eth0
      # IPv4 configuration
      iface eth0 inet static
        address 192.168.1.2
        network 192.168.1.0
        netmask 255.255.255.0
        broadcast 192.168.1.255
      # IPv6 configuration
      iface eth0 inet6 static
        pre-up modprobe ipv6
        address 2001:db8:bad:a55::2
        netmask 64

    Of course, you might have to adjust your interface device (eth0), or you might want multiple directives for additional devices (eth1, eth2, etc.). The auto instruction takes care that your device is enabled and configured during the boot phase. The use of the pre-up directive depends on your kernel configuration, but in most scenarios it is an optional line; anyway, it doesn't hurt to have it - just to be on the safe side. Next, either restart your network subsystem like so:

      $ sudo service networking restart

    Or you might prefer to do it manually with identical parameters, like so:

      $ sudo ifconfig eth0 inet6 add 2001:db8:bad:a55::2/64

    In case you're logged in to your PC remotely (i.e. via ssh), it is highly advisable to opt for the second choice and add the address manually. You can check your configuration afterwards with one of the following commands (depending on what is installed):

      $ sudo ifconfig eth0
      eth0      Link encap:Ethernet  HWaddr 00:21:5a:50:d7:94
                inet addr:192.168.160.2  Bcast:192.168.160.255  Mask:255.255.255.0
                inet6 addr: fe80::221:5aff:fe50:d794/64 Scope:Link
                inet6 addr: 2001:db8:bad:a55::2/64 Scope:Global
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

      $ sudo ip -6 address show eth0
      3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qlen 1000
          inet6 2001:db8:bad:a55::2/64 scope global
             valid_lft forever preferred_lft forever
          inet6 fe80::221:5aff:fe50:d794/64 scope link
             valid_lft forever preferred_lft forever

    In both cases, the output confirms that our network device has been assigned a valid IPv6 address. That's it in general for your setup on one system. But of course, you might be interested in enabling more services for IPv6, especially if you're already running a couple of them in your IP network. More details are available on the official Ubuntu Wiki. Continue to the next article to configure your network to provide IPv6 address information automatically in your local infrastructure.


  • Why would a server not send a SYN/ACK packet in response to a SYN packet

    - by codemonkey
    Lately, we've become aware of a TCP connection issue that is mostly limited to Mac and Linux users who browse our websites. From the user's perspective, it presents itself as a really long connection time to our websites (11 seconds). We've managed to track down the technical signature of this problem, but can't figure out why it is happening or how to fix it. Basically, what is happening is that the client's machine sends the SYN packet to establish the TCP connection and the web server receives it, but does not respond with the SYN/ACK packet. After the client has sent many SYN packets, the server finally responds with a SYN/ACK packet and everything is fine for the remainder of the connection. And, of course, the kicker to the problem: it is intermittent and does not happen all the time (though it does happen between 10-30% of the time). We are using Fedora 12 Linux as the OS and Nginx as the web server.
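
    As a diagnostic aside, the symptom is easy to quantify from an affected client by timing raw TCP connects in a loop: a dropped SYN/ACK shows up as a multi-second outlier, because the client retransmits the SYN with exponential backoff. A small Python sketch (host and port are placeholders, not from the question):

      import socket
      import time

      def time_connects(host, port, attempts=20):
          """Time repeated TCP handshakes; lost SYN/ACKs appear as outliers."""
          for i in range(attempts):
              s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
              start = time.time()
              try:
                  s.connect((host, port))
                  print("attempt %2d: connected in %5.2fs" % (i, time.time() - start))
              except OSError as err:
                  print("attempt %2d: failed after %5.2fs (%s)" % (i, time.time() - start, err))
              finally:
                  s.close()

      time_connects("www.example.com", 80)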


  • Unity scaling instantiated GameObject at Start() doesn't "keep"

    - by Shivan Dragon
    I have a very simple scenario: a box-like prefab which is imported from Blender automatically (I have the .blend file in the Assets folder), and a script that has two public GameObject fields. In one I place the above prefab, and in the other I place a terrain object (which I've created in Unity's graphical view):

      public Collider terrain;
      public GameObject aStarCellHighlightPrefab;

    This script is attached to the camera. The idea is to have the Blender prefab instantiated, have the terrain set as its parent, and then scale said prefab instance up. I first did it like this, in the Start() method:

      void Start() {
          cursorPositionOnTerrain = new RaycastHit();
          aStarCellHighlight = (GameObject)Instantiate(aStarCellHighlightPrefab,
                  new Vector3(300, 300, 300), terrain.transform.rotation);
          aStarCellHighlight.name = "cellHighlight";
          aStarCellHighlight.transform.parent = terrain.transform;
          aStarCellHighlight.transform.localScale = new Vector3(100, 100, 100);
      }

    and first thought it didn't work. However, I later noticed that it did in fact work, in the sense that the scale was applied right at the start, but right after that the prefab instance came back to its initial scale. Putting the scale code in the Update() method fixes it, in the sense that now it stays scaled all the time:

      void Update() {
          aStarCellHighlight.transform.localScale = new Vector3(100, 100, 100);
          // ...
      }

    However, I've noticed that when I run this code, the object is first displayed without the scale being applied, and it takes about 5-10 seconds for the scale to happen. During this time everything works fine (like input and logging). The scene is very simple; it's not like it has a lot of stuff to load (there's a ray cast from the camera onto the terrain, but that seems to happen without such delays). My (2-part) question is: why doesn't it take the scale transform when I do it at the beginning in the Start() method, so that I have to keep scaling it in the Update() method? And why does it take so long for the scale to apply/show up?


  • Windows XP with Ubuntu 14.04 on 2 separate hard drives

    - by maplenet2
    I am new to Ubuntu. I have Windows XP Professional 32-bit on one 300GB IDE hard drive and Ubuntu 14.04 running on another 61GB IDE hard drive, and I cannot get my Windows XP to boot with Grub! When I select Windows XP from the boot menu, Grub just restarts my computer. The computer I have with those two hard drives is a Dell Optiplex GX240, so the hardware is old, and its BIOS won't let me change the boot priority on the two IDE hard drives. What can I do now? Is there a step I missed when installing Ubuntu? Can I edit Grub to boot Windows XP without messing with the BIOS? Do I have to downgrade to an older release of Ubuntu to make it work? I am willing to reinstall Ubuntu, if that's what it takes.


  • Screencast several application windows at once in Microsoft Windows

    - by Birt
    I have several (20+) applications running on a Microsoft Windows PC. What I would like is a solution that allows me to broadcast the window of each application in a web page, in read-only mode (there's no need for the users to interact with it). This should work even if the application is in the background, seeing that there's no way to fit all of them on the screen. I performed very extensive searching, from simple screencasting apps such as Camtasia, CamStudio or VHScrCap to things like VNC (I haven't found any server able to broadcast multiple windows at once, much less background windows) and even application virtualization, but in the end I haven't found anything that fits my needs. Most solutions that allow capturing a window instead of the whole desktop will only let you capture a single window, and on top of that they don't even work when the window is in the background.


  • Articles on TFS Build Server / MSBuild

    - by MartinWatts
    I have decided to write some articles on using a TFS build server. During the past few years I have had the responsibility and challenge of keeping one running, and I found out that on some subjects there is very little to find on the internet. So hopefully my experiences can help others. That is, before the VS 2010 build server makes everything we have learnt about MSBuild so far redundant. ;) The first article is about selectively getting the sources you need to get the build done. You can find the article here.


  • How to install VMware in Ubuntu 10.04

    - by piemesons
    I need to install Minix 3 in VMware. I'm using Ubuntu 10.04. I downloaded VMware Player and now I am trying to install it using:

      sudo apt-get install build-essential linux-headers-`uname -r`
      chmod +x VMware-Player*.bundle
      gksudo bash ./VMware-Player*.bundle

    The VM Player installer window popped up and I clicked on the 'Install' button. The progress bar started going; above the bar, it says that the installer is 'Configuring'. This was more than 15 minutes ago and it is still going. Nothing else is running on the system (consuming CPU, memory, ...). Is the 'Configuring' step supposed to take this long? It seems to me it might be hung. Question: did I do something wrong? Is there a log some place that can help me debug this?


  • Did the developers of Java consciously abandon RAII?

    - by JoelFan
    As a long-time C# programmer, I have recently come to learn more about the advantages of Resource Acquisition Is Initialization (RAII). In particular, I have discovered that the C# idiom:

      using (var dbConn = new DbConnection(connStr)) {
          // do stuff with dbConn
      }

    has the C++ equivalent:

      {
          DbConnection dbConn(connStr);
          // do stuff with dbConn
      }

    meaning that remembering to enclose the use of resources like DbConnection in a using block is unnecessary in C++! This seems to be a major advantage of C++. It is even more convincing when you consider a class that has an instance member of type DbConnection, for example:

      class Foo {
          DbConnection dbConn;
          // ...
      }

    In C# I would need to have Foo implement IDisposable, as such:

      class Foo : IDisposable {
          DbConnection dbConn;

          public void Dispose() {
              dbConn.Dispose();
          }
      }

    and what's worse, every user of Foo would need to remember to enclose Foo in a using block, like:

      using (var foo = new Foo()) {
          // do stuff with "foo"
      }

    Now, looking at C# and its Java roots, I am wondering... did the developers of Java fully appreciate what they were giving up when they abandoned the stack in favor of the heap, thus abandoning RAII? (Similarly, did Stroustrup fully appreciate the significance of RAII?)


  • Screen Corruption in half the screen only

    - by Guy DAmico
    About 50% of my Natty desktop screen is corrupted. Once that happens, I can reboot as many times as I want but the problem continues. If I log out and then into Windows for a day, I may be successful and boot Ubuntu with a good screen. The desktop is formatted correctly and there's no pixelation; rather, there is a fine-grained white crosshatch pattern covering the entire screen. If I open any application, the screen corruption worsens, eventually to the point where I can no longer make out anything. I ran a RAM memory test without any errors. I have no display issues when running Windows 7. Any ideas? My computer is a dual-boot stock Dell 5150 with 3 GB of RAM and on-board video.


  • Checking if an object is inside bounds of an isometric chunk

    - by gopgop
    How would I check if an object is inside the bounds of an isometric chunk? For example, I have a player and I want to check if it's inside the bounds of this isometric chunk. I draw the isometric chunk's tiles using OpenGL quads. My first try was checking in a square pattern kind of thing (e = object; this = isometric chunk):

      if (e.getLocation().getX() < this.getLocation().getX() + World.CHUNK_WIDTH * World.TILE_WIDTH
              && e.getLocation().getX() > this.getLocation().getX()) {
          if (e.getLocation().getY() > this.getLocation().getY()
                  && e.getLocation().getY() < this.getLocation().getY() + World.CHUNK_HEIGHT * World.TILE_HEIGHT) {
              return true;
          }
      }
      return false;

    What happens here is that it checks a SQUARE around the chunk, not the real isometric bounds. (Image examples omitted: the red region marked where the program checks the bounds - what I have now is the square, the desired check is the diamond itself.) Ultimately I want to do the same for each tile in the chunk. EXTRA INFO: Until now players could only move tile by tile, but now I want them to move freely while still having a tile location, so no matter where they are on a tile, their tile location will be that certain tile; when they enter a different tile's bounding box, their tile location becomes the new tile. The same goes for chunks. The player does have an area, but the area does not matter in this case: as long as the X and Y are inside the bounding box, it should return true - they don't have to be completely on the tile.
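
    For reference, the usual replacement for the square check is a point-in-diamond test: normalize the point's offset from the diamond's center by the half-extents and require |dx|/halfWidth + |dy|/halfHeight <= 1. A small Python sketch of the idea (the names are illustrative; the same test works for a single tile or for a whole chunk):

      def point_in_diamond(px, py, cx, cy, half_w, half_h):
          """True if (px, py) is inside the diamond centered at (cx, cy)
          whose corners lie at (cx +/- half_w, cy) and (cx, cy +/- half_h)."""
          dx = abs(px - cx) / half_w
          dy = abs(py - cy) / half_h
          return dx + dy <= 1.0

      # For a chunk whose bounding box starts at (x0, y0), the center would be
      # cx = x0 + CHUNK_WIDTH * TILE_WIDTH / 2.0
      # cy = y0 + CHUNK_HEIGHT * TILE_HEIGHT / 2.0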


  • How to resize an encrypted logical volume?

    - by Nirmik
    I installed Ubuntu with encryption and LVM on my entire hard disk. Now I want to resize it. How do I do this? Following this link gave me errors on step 2 - How to resize a LVM partition? The error:

      ubuntu@ubuntu:~$ sudo e2fsck -f /dev/sda5
      e2fsck 1.42.5 (29-Jul-2012)
      ext2fs_open2: Bad magic number in super-block
      e2fsck: Superblock invalid, trying backup blocks...
      e2fsck: Bad magic number in super-block while trying to open /dev/sda5

      The superblock could not be read or does not describe a correct ext2
      filesystem. If the device is valid and it really contains an ext2
      filesystem (and not swap or ufs or something else), then the superblock
      is corrupt, and you might try running e2fsck with an alternate superblock:
          e2fsck -b 8193 <device>

    What do I do?


  • How to fix bluescreen in windows 7 with multi-boot?

    - by Ismail Sensei
    I have an HP 6730s laptop with two operating systems: Windows 7 Ultimate 32-bit and CentOS 6.4 64-bit. GRUB2 is not installed in the MBR; I use Windows' bootloader. After I choose Windows at start-up, a blue screen appears with an UNMOUNTABLE_BOOT_VOLUME error, so I tried some help from similar questions here (use the Command Prompt and enter the following command: chkdsk /R C:). But the problem is, I can't repair my computer that way: it took so long that nothing had happened after I waited more than 2 hours, and when I put in my Windows 7 DVD to boot, it loads the files and then the same thing happens - nothing shows up, so I couldn't use the Command Prompt. When I use CentOS, everything works just fine: the D partition mounts normally, but the C partition shows an error and tells me to go to Windows and repair it with the chkdsk command, and that is where I am stuck.


  • Email client supporting multiple accounts

    - by TGP1994
    I've been using Microsoft Outlook for a very long time, but one thing that has bugged me is how multiple email accounts are handled. As far as I can tell, there isn't a set, straightforward way of managing multiple accounts in one instance of Outlook. For example, when I create an email, saving it as a draft will by default dump it into the first personal folder that I have open, which in my current case is not where I want it. I would like all trash, spam, drafts, contacts, etc. to be handled on a PF-by-PF basis. Now to my question: is there a way to accomplish this kind of email account "segregation" in Outlook (2007 is my current version), or is there another client that handles this in a more organized fashion? Note: I don't use most of the features in Outlook (I hardly even need special formatting for my messages); I generally just send and read mail and get a few attachments, so leaving Outlook wouldn't be too much of a stretch for me.


  • How atomic is a SELECT INTO?

    - by leo.pasta
    Last week I ran into an interesting situation that prompted me to challenge a long-standing assumption. I always thought that a SELECT INTO was an atomic statement, i.e. it would either complete successfully or the table would not be created. So I was very surprised when, after a SELECT INTO query was chosen as a deadlock victim, the next execution (as the app would handle the deadlock and retry) failed with:

      Msg 2714, Level 16, State 6, Line 1
      There is already an object named '#test' in the database.

    The only hypothesis we could come up with was that the CREATE TABLE part of the statement was committed independently from the actual INSERT. We can confirm that by capturing the "Transaction Log" event in Profiler (filtering by SPID0). The result is that when we run:

      SELECT * INTO #results FROM master.sys.objects

    we get the following output in Profiler (screenshot omitted): it is easy to see the two independent transactions. Although this behaviour was a surprise to me, it is very easy to work around if you feel the need (as we did in this case). You can either change it into an independent CREATE TABLE plus INSERT ... SELECT, or you can enclose the SELECT INTO in an explicit transaction:

      SET XACT_ABORT ON
      BEGIN TRANSACTION
      SELECT * INTO #results FROM master.sys.objects
      COMMIT

