Search Results

Search found 25440 results on 1018 pages for 'agent based modeling'.


  • Tomcat + Spring + CI workflow

    - by ex3v
    We're starting our very first project with Spring and the Java web stack. The project is mainly about rewriting a fairly large ERP/CRM from Zend Framework to Java. An important factor in my question is that I come from PHP territory, where things (in terms of quality) tend to look different than in the Java world.

    Facts:
    - there will be 2-3 developers,
    - at least one developer uses Windows, the rest use Linux,
    - there is one remote Linux-based machine, which should handle the test and production instances,
    - after struggling with buggy legacy code, we want to introduce good programming and development practices (CI, tests, clean code and so on),
    - client: internal, frequent business logic changes, scrum, daily deployments.

    What I want to achieve is a good workflow across as many development stages as possible (coding - committing - testing - deploying). The problem is that I've never done this before, so I don't know what the best practices are. What I have so far:
    - developers code locally,
    - there is a Vagrant instance on every development machine, managed by Puppet, containing the same Linux, Jenkins and Tomcat versions as the production machine,
    - while coding, the developer deploys to the Vagrant machine,
    - after a local merge to the test branch, Jenkins on the Vagrant box runs the tests,
    - when everything is fine, the developer pushes commits and merges,
    - Jenkins on the remote machine pulls the commit from the test branch, runs the tests and so on; if everything looks green, Jenkins deploys to the test Tomcat instance.

    Deployment to production is manual (although it can be done with helper scripts; a sketch of such a script follows below) once the business logic has been tested by other divisions and everything looks fine to the client.

    Now, the real question: does the above make any sense? Things I'm not sure about:
    - Remote machine: won't there be problems with two (or even three, as Jenkins might need one) instances of the same app on Tomcat?
    - Using Vagrant to develop in a PHP environment is clearly wise. Isn't it overkill with Tomcat? I mean, isn't it much more likely that Tomcat will behave the same on every machine anyway?
    - Is there any sense in having a local Jenkins on Vagrant?
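    For illustration, a minimal sketch of the kind of helper script the manual production deployment could use, assuming SSH access from the CI box to the Tomcat host (the host name, paths and service name below are hypothetical placeholders):

        #!/usr/bin/env python
        # Minimal deploy-helper sketch. All host names, paths and service
        # names are hypothetical placeholders to adapt.
        import subprocess
        import sys

        HOST = "deploy@prod.example.com"       # assumption: SSH deploy user
        WAR = "target/erp.war"                 # assumption: Maven build output
        WEBAPPS = "/var/lib/tomcat/webapps/"   # assumption: Tomcat webapps dir

        def run(*cmd):
            print("+", " ".join(cmd))
            subprocess.check_call(cmd)

        def main():
            run("scp", WAR, "%s:%s" % (HOST, WEBAPPS))
            # restart so Tomcat picks up and redeploys the new WAR
            run("ssh", HOST, "sudo service tomcat restart")

        if __name__ == "__main__":
            try:
                main()
            except subprocess.CalledProcessError as err:
                sys.exit("deploy failed: %s" % err)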


  • On Screen Coin Animation

    - by Siddharth
    I am working on a side-scrolling skater game. I want a coin animation such that, as the player collects a coin, it moves up the screen and attaches to the currency sprite. My main character and the coins live in the game scene, while the currency sprite lives in the HUD layer. This creates a problem for me: I cannot apply a modifier to the coin directly, because it is a side-scrolling game, so depending on the main character's speed the coin is collected at a different position. I have verified that. So I have to spawn another coin in the HUD layer at the same position the collected coin has in the game layer, and move it upward from there. But I am not able to get its y position right, even though I can get the x position correctly. The main character often goes downward, so I frequently get negative values. I also tried the following code:

        float[] position = GameHUD.this
                .convertSceneCoordinatesToLocalCoordinates(
                        GameManager.getInstance().getCoinX(),
                        GameManager.getInstance().getCoinY());

    But I get back the same coordinates I pass in, with no difference at all, so please can someone give me some guidance on this? I am close to completing my game.

    EDIT: The game layer and the HUD layer here are totally separate. The actual coin the player collects lives in the game layer, and at the same on-screen position I want to spawn another coin in the HUD layer to run the animation. Spawning the coin in the HUD layer is the recommended approach, because only that way can I achieve my target.
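    If the conversion returns its input unchanged, a common cause is that the HUD entity sits at the scene origin, so the scene-to-local transform is the identity; what is actually needed is the camera offset. A minimal sketch of that math, assuming the HUD is pinned to the camera and shares its size (engine-agnostic Python, not AndEngine API):

        # Sketch: HUD coordinates are camera-relative, so subtract the
        # camera's current top-left scene position from the coin's scene
        # position. Assumes the HUD and the camera share size and origin.
        def scene_to_hud(scene_x, scene_y, camera_min_x, camera_min_y):
            hud_x = scene_x - camera_min_x
            hud_y = scene_y - camera_min_y
            return hud_x, hud_y

        # Example: coin collected at scene (3150, 240) while the camera's
        # top-left corner is at (3000, 200) -> spawn the HUD coin at (150, 40).
        print(scene_to_hud(3150, 240, 3000, 200))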


  • Remote Desktop via VPN from Mac to Windows Vista

    - by Vegar
    I have some problems connecting to my office from home. I'm getting the following error message: "Remote Desktop Connection cannot verify the identity of the computer that you want to connect to. Try reconnecting to the Windows-based computer, or contact your administrator." I have downloaded CoRD, and for some reason that works okay. I can also connect from a Windows 7 instance running on VMware Fusion. On Windows 7 I use the SonicWall Global VPN Client, and on the Mac I use VPN Tracker, if that is related. What's going on?


  • Dell Multi-Monitor Hub: true DisplayPort splitting?

    - by thepurplepixel
    In my search for a new display, I came across the Dell Multi-Monitor Hub MMH11, which seems to be an alternative to daisy-chainable DisplayPort displays. However, before I cave and spend $179 on this device, I am wondering whether it will behave like other splitting devices, where the computer sees one big monitor and the device does the splitting (which I don't want). Or does it use the packet-based nature of DisplayPort to present two or three separate displays to the computer? Also, would this device work with my MacBook Pro? (I know the Dell site says it's for Windows, but it also says that no driver installation is required. I'd assume that since the MBP supports DP 1.2 it would work, but it's better to ask.) Thanks!


  • Pixelated PDF in Apple Preview slideshow mode, but not in regular window

    - by Zack
    I have a PDF which is a presentation exported from OpenOffice. Two of the slides in this presentation have embedded .eps graphs. When I run the presentation using Preview's slideshow mode, the graphs are severely aliased and the axes are illegible. But when I just view the PDF in regular windowed mode, the graphs are properly antialiased and legible. Is there any way to get Preview to render the slides the same way it does in windowed mode, but fullscreen (no window title, no menu bar)? (I don't want to just run the presentation from OpenOffice, because OpenOffice shows the same horrible aliasing, plus it takes about 30 seconds to show the slide. I don't have, and don't want, Acrobat or MS Office. However, please do feel free to suggest other programs for doing PDF-based slideshows.)


  • Oracle Cloud and Oracle Platinum Services Announcements

    - by kellsey.ruppel
    Live Webcast: Oracle Cloud and Oracle Platinum Services Announcements
    Wednesday, June 06, 2012, 1:00 p.m. - 2:30 p.m. PT

    Please join Larry Ellison and Mark Hurd for important Oracle announcements. Be among the first to learn about new developments in Oracle's cloud strategy and game-changing advances in Oracle Support. Register to watch at your desk, or, if you are based in the San Francisco Bay Area, register to attend the live event in Redwood Shores.

    Oracle values your privacy, and will treat the information we collect from you as a result of your registration and participation in this activity in accordance with the Oracle Privacy Policy.

    Stay connected and join the conversation: #oraclecloud #oraclesupport


  • Why is my ethernet interface in promiscuous mode?

    - by nhed
    I read that seeing an M flag in the output of netstat -i is the way to tell which of your interfaces is in promiscuous mode. I ran it, and I see that eth1 is in promiscuous mode:

        $ netstat -i
        Kernel Interface table
        Iface   MTU Met      RX-OK RX-ERR RX-DRP RX-OVR     TX-OK TX-ERR TX-DRP TX-OVR Flg
        eth1   1500   0 1770161198      0      0      0  57446481      0      0      0 BMRU
        lo    16436   0   97501566      0      0      0  97501566      0      0      0 LRU

    This seems to be the case on all the machines I checked (all CentOS 6.0, both virtual and physical). Any idea why the ethernet devices would be in such a mode, unless someone was running a pcap-based app (sudo lsof | grep pcap shows nothing)? I did not see any mention of promiscuous mode in any of the config files (sudo grep -r promis /etc). Any ideas what puts the interface into that mode, and why?

    P.S. Most of the posts I see on this topic are security-related; this is not that.
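    As a cross-check that doesn't depend on netstat's one-letter flags, the kernel exposes each interface's flag word in sysfs, and IFF_PROMISC is bit 0x100 in <linux/if.h>. A minimal sketch:

        # Check IFF_PROMISC (0x100) directly from sysfs for every interface.
        import glob
        import os

        IFF_PROMISC = 0x100

        for path in glob.glob("/sys/class/net/*/flags"):
            iface = os.path.basename(os.path.dirname(path))
            with open(path) as f:
                flags = int(f.read().strip(), 16)
            state = "PROMISC" if flags & IFF_PROMISC else "normal"
            print("%-8s flags=0x%04x %s" % (iface, flags, state))

    The kernel also logs a line like "device eth1 entered promiscuous mode" when something flips the flag, so the timestamps in dmesg can help identify which process is responsible.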


  • MSDN Live 2010 - Delivered: 24 sessions (4 x 6) on Visual Studio and Team Foundation Server

    - by terje
    We (Mikael Nitell and me) got a whole track on the Norwegian MSDN Live tour this year. We did it as a pair, covering 4 cities over 4 days, 6 sessions per day, taking 8 hours to get through. The Icelandic volcano made the travel a bit rough, but we managed 6 flights out of 8. The first leg had to go by van instead, a 7-8 hour drive each way together with other MSDN Live presenters. A memorable tour!

    Oslo was the absolute high point. We had to move to a bigger hall, people were crowding in, and even the big hall was packed! The presentations were mostly based on demos, but we had a few slides as well; they have been uploaded to my SkyDrive. (Info to aliens: some of the text may be Norwegian.) The sessions were as follows:
    - Overview of news in Visual Studio and Team Foundation Server 2010
    - Ensuring Quality with VS/TFS 2010
    - Releasing products with VS/TFS 2010
    - No More No Repro with VS/TFS 2010
    - Performance Testing and Parallel Programming with VS/TFS 2010
    - Migrating to VS/TFS 2010
    - Tips, tricks, news and some best practices with VS/TFS 2010

    In the coming days I will post examples from the demos too, with explanations of how they are intended to work. These entries will also contain material we had to cut from the presentations due to the time constraints. We managed to record two of the sessions, which will be uploaded to Channel 9 by Microsoft, afaik. I will update this blog with the exact locations when that is done. Also note that we (read: Osiris Data AS) are running both Upgrade and Deep Dive courses on VS/TFS 2010 in May. Please look here for more info. If you want to be kept informed, follow me on Twitter; all blog entries are announced there.


  • Congratulations to 2012 Innovation Award winners in BPM category

    - by Manoj Das
    Last year many of our customers went live on BPM 11g. It is my extreme pleasure to congratulate two of them – Amadeus and Navistar – for being awarded the Oracle Fusion Middleware Innovation Award at Oracle OpenWorld 2012.

    We invited our customers to submit their most innovative BPM implementations that have delivered substantiated value to them. This year we saw more than 20 submissions from customers seeing significant business value from their live BPM 11g deployments. The submissions came from across the world, spanning industry verticals including manufacturing, healthcare, logistics, hi-tech, public sector and education, and covering many process usage patterns. Award submissions were evaluated based on the uniqueness of their business case, business benefits, level of impact relative to the size of the organization, complexity and magnitude of implementation, and the originality of architecture.

    [Photo: the Amadeus team receiving the Innovation Award from Hasan Rizvi.]

    Congratulations to Amadeus and Navistar and their teams on being recognized from among some very strong submissions, and more importantly for the business value delivered. It is an honor to be part of your success and to play a small role in the innovation you drive.

    Navistar is a leading truck manufacturing company which produces International® brand commercial and military trucks, MaxxForce® brand diesel engines, IC Bus™ brand school and commercial buses, and Navistar RV brands of recreational vehicles. The company also provides truck and diesel engine service parts. Amadeus is a leading transaction processor for the global travel and tourism industry, providing transaction processing power and technology solutions to both travellers and travel providers.

    Both Navistar and Amadeus have leveraged Oracle BPM Suite to improve visibility into their business and to make their business more agile and efficient. We congratulate them again and wish them continued success in their business and future BPM initiatives.


  • Rails time stamps on images in CSS

    - by brad
    Just posted this on Stack but realized it may be more appropriate here. So, Rails timestamping is great. I'm using it to add expires headers to all files that end in the 10-digit timestamp. Most of my images, however, are referenced in my CSS. Has anyone come across a method that allows timestamps to be added to CSS-referenced images, or some funky rewrite rule that achieves this? I'd love for ALL images on my site, both inline and in CSS, to have this timestamp so I can tell the browser to cache them, but refresh any time the file itself changes. I couldn't find anything on the net about this, and I can't believe it isn't a more frequently discussed topic. I don't think my setup matters, because the actual expiring will hopefully happen the same way, based on the 10-digit timestamp, but I'm using Apache to serve all static content if that matters.
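    One approach is a post-processing step at deploy time that stamps each url(...) reference in the compiled CSS with the image's mtime, mirroring what Rails does for inline image tags. A minimal sketch (paths are hypothetical, and it assumes image URLs in the CSS are absolute from the public root):

        # Append each referenced image's mtime as a query string to url(...)
        # entries in a CSS file, mimicking Rails' 10-digit asset timestamps.
        import os
        import re

        CSS_FILE = "public/stylesheets/application.css"   # hypothetical path
        PUBLIC_DIR = "public"                             # hypothetical path

        def stamp(match):
            url = match.group(1).strip("'\"")
            asset = os.path.join(PUBLIC_DIR, url.lstrip("/"))
            if not os.path.isfile(asset):
                return match.group(0)      # leave external/missing refs alone
            return "url(%s?%d)" % (url, int(os.path.getmtime(asset)))

        with open(CSS_FILE) as f:
            css = re.sub(r"url\(([^)]+)\)", stamp, f.read())
        with open(CSS_FILE, "w") as f:
            f.write(css)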


  • SQL Server plus small files

    - by user1467163
    I have an MSSQL server with 3 volumes that runs some processes that seem to take way too long. One of these processes reads in a zip file, then writes to a database based on what's in the zip file... for each record. I have 2 volumes in use and am creating the third, so I am trying to plan how to lay this out. The OS has to remain on volume 1. The transaction logs should probably go on the new volume and the MDFs on the existing volume 2. Do I put the file store on the volume with the MDFs so it doesn't interfere with the transaction log writes, or on the volume with the logs so it doesn't interfere with the log flush to the MDFs? I know it's best to have more servers/volumes, but I have to make do with what's on hand for now. I appreciate any suggestions.


  • Oracle NoSQL Database: Cleaner Performance

    - by Charles Lamb
    In an earlier post I noted that Berkeley DB Java Edition cleaner performance had improved significantly in release 5.x. From an Oracle NoSQL Database point of view, this is important because Berkeley DB Java Edition is the core storage engine for Oracle NoSQL Database.

    Many contemporary NoSQL databases utilize log-based (i.e. append-only) storage systems, and it is well understood that these architectures also require a "cleaning" or "compaction" mechanism (effectively a garbage collector) to free up unused space. 10 years ago, when we set out to write a new Berkeley DB storage architecture for the BDB Java Edition ("JE"), we knew that the corresponding compaction mechanism would take years to perfect. "Cleaning", or GC, is a hard problem to solve, and it has taken all of those years of experience, bug fixes, tuning exercises, user deployment, and user feedback to bring it to the mature point it is at today. Reports like Vinoth Chandar's, where he observes a 20x improvement, validate the maturity of JE's cleaner.

    Cleaner performance has a direct impact on predictability and throughput in Oracle NoSQL Database. A cleaner that is too aggressive will consume too many resources and negatively affect system throughput. A cleaner that is not aggressive enough will allow the disk storage to become inefficient over time. It has to:
    - work well out of the box, and
    - be configurable so that customers can tune it for their specific workloads and requirements.

    The JE cleaner has been field-tested in production for many years, managing instances with hundreds of GBs to TBs of data. The maturity of the cleaner and the entire underlying JE storage system is one of the key advantages that Oracle NoSQL Database brings to the table -- we haven't had to reinvent the wheel.


  • Moving Away from Exchange public folders - Export to file system folder?

    - by Mr. Monkey
    I have a public folder that was used for the wrong reason. Due to some regulations we had to store lots of photos; we're talking at least 7000 photos, organized by store location. For example, each store would send in an email with at least 2 photos of their location, and that email would contain their location name or number plus the photos, so there was some sort of organization to it. I would love to move the contents of that public folder to a normal Windows folder we could share on a server. Is anything like that possible? Does anybody have other ideas?


  • A Quarter Century of SPARC

    - by kemer
    You might have missed an interesting milestone: the 25th anniversary of SPARC. Twenty-five years! Almost 40% of my life: humbling, maybe a little scary. When I joined Sun Microsystems in 1988, SPARC was just starting to shake things up. The next year we introduced the SPARCstation 1, which had basically triple the performance of our Motorola-based Sun-3 systems. Not too long after that, our competition began a campaign of "SPARC is dead." We really distressed them with our success, in spite of our small size. "It won't last." "It can't last!" So they told themselves. For a stroll down memory lane, take a look at this page.

    I remember the sales meeting we had in Atlanta to internally announce the SPARCstation 1. Sun hadn't really hit the big time yet. Our much bigger competitors viewed us as an ill-mannered pest, certain of our demise. And why wouldn't they be certain: other startups more our size, such as Apollo (remember them?), Silicon Graphics (they fought the good fight!), and the incredibly cool Symbolics are memories. Wait! There was also a BIG company, DEC, who scoffed at us: they are history, too. In fact, we really upset them with what was supposed to be an internal-only video production that was a take-off on Bruce Lee movies, in which we battled the evil Doctor DEC – complete with computer mice (or is that "mouses"?) wielded like nunchucks, with the new SPARCstation 1 somehow in the middle of everything. The memory is vivid, but the details hazy. After all, that was almost a quarter century ago.

    So, here's to Oracle's SPARC: still going strong after all these years. – Kemer


  • What causes memcache to delete keys?

    - by Arkaaito
    Our memcache install recently started removing keys, and we're not sure why. Large groups of keys vanish at the same time. Memcache reports that evictions are low to non-existent, and our app has no way to clear memcache (it can only delete specific keys). Even keys of which the app has no knowledge get deleted, so we're pretty convinced they're getting expired. However, our memcache configuration hasn't been touched in some time. Has anyone debugged an issue like this before, and if so, are there any steps you'd recommend we take? How flexible is memcache's expiration policy - is it possible that we're suddenly running into a criterion based on (say) write frequency to a key?
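    One place to look, sketched below: memcached's own counters distinguish evictions from expirations, and the cmd_flush counter will show whether some client issued a flush_all, which would wipe large groups of keys at once; a reset uptime would point to a restart instead. (Host and port in the sketch are assumptions.)

        # Pull eviction/expiry-related counters from memcached's text protocol.
        import socket

        HOST, PORT = "127.0.0.1", 11211        # assumption: local instance

        sock = socket.create_connection((HOST, PORT), timeout=5)
        sock.sendall(b"stats\r\n")
        data = b""
        while not data.endswith(b"END\r\n"):
            data += sock.recv(4096)
        sock.close()

        stats = dict(
            line.split()[1:3]
            for line in data.decode().splitlines()
            if line.startswith("STAT")
        )
        # uptime resets on a restart; cmd_flush counts flush_all calls.
        for key in ("uptime", "evictions", "expired_unfetched", "cmd_flush"):
            print(key, stats.get(key, "n/a"))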


  • Notification framework for object lifecycle

    - by rlandster
    I am looking for an application, framework, or library that would help us with "object life-cycle management". There are many things that are created for users, departments, and services that, all too often, are left unmanaged. Some examples:
    - user accounts
    - groups
    - SSL certificates
    - access rights
    - databases
    - software license provisionings
    - storage
    - list-serve accounts

    These objects are created and managed by a wide variety of applications and systems. Typically, a user (person) requests (either explicitly or implicitly) one of these objects. A centralized management tool would help us manage administration chores such as:
    - What objects does user X currently own/manage?
    - Move the ownership of object P to user X; move all objects owned by user X (who has just been fired) to user Y.
    - For all objects of type T that have expired, be sure the objects have been disabled or deleted by their provider.
    - How many active (expired, about-to-expire) objects of type P are there?
    - Send periodic notifications to all users who own active objects of type P, reminding them of what they own.
    - There is a security alert for objects of type P; send a notification to all users who own these types of objects to take a specific remedial action.
    - Delete or disable a set of objects based on expiration (or some other criteria).

    These objects are directly managed through their own applications (Active Directory, MySQL, file systems, etc.) and may even have their own notification systems, but I want to centralize this into an "object management system". The OMS should allow:
    - association with an external identity provider that defines who the users and groups are (e.g., LDAP, Active Directory)
    - creation of objects
    - association of an object with a specific user and/or group
    - association with an expiration date
    - creation of flexible reporting, including letting users know what objects they currently own and their expiration dates
    - integration with an external object "provider" via a plug-in

    We could write something from scratch, but I am hoping there is something already out there that will help, either an entire application or a set of libraries that provide much of what is needed. Any ideas?
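    For scoping purposes, the core record behind all of the chores above is small; a minimal sketch of the data model and two of the queries (all names hypothetical):

        # Minimal sketch of the core life-cycle record and two of the
        # queries described above. All names are hypothetical.
        from dataclasses import dataclass
        from datetime import date
        from typing import List, Optional

        @dataclass
        class ManagedObject:
            kind: str                  # "ssl-cert", "database", "license", ...
            name: str
            owner: str                 # user id from LDAP/AD
            expires: Optional[date] = None

        def owned_by(objs: List[ManagedObject], user: str) -> List[ManagedObject]:
            """What objects does user X currently own/manage?"""
            return [o for o in objs if o.owner == user]

        def reassign(objs: List[ManagedObject], old: str, new: str) -> None:
            """Move all objects owned by a departed user to someone else."""
            for o in owned_by(objs, old):
                o.owner = new

        def expired(objs: List[ManagedObject], today: date) -> List[ManagedObject]:
            """Objects whose expiration date has passed."""
            return [o for o in objs if o.expires and o.expires < today]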


  • Does it still make sense to directly drop mails that trigger RBLs?

    - by Luke404
    Once upon a time, using RBLs to drop mail outright was actually a good idea. These days it seems that is no longer possible for one reason or another, so everyone has switched, or is switching, to using RBLs as just another test in score-based antispam solutions (read: SpamAssassin and friends). This gives good results, but it neglects one of the benefits of RBLs, namely the ability to reject (supposed) spam before even receiving the message body. Is there still any RBL that makes sense to use that way, hard-rejecting anything that matches the list? If people are doing it that way, do you ever get false positives due to the list?
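    For reference on what such a hard-reject check actually does: a DNSBL lookup is just a DNS A-record query for the reversed client IP prepended to the list's zone. A minimal sketch, using Spamhaus ZEN as the example zone:

        # A DNSBL hit resolves (typically to a 127.0.0.x return code);
        # a miss raises NXDOMAIN (socket.gaierror here).
        import socket

        def rbl_listed(ip, zone="zen.spamhaus.org"):
            query = ".".join(reversed(ip.split("."))) + "." + zone
            try:
                return socket.gethostbyname(query)
            except socket.gaierror:
                return None

        # 127.0.0.2 is the conventional "always listed" DNSBL test address.
        print(rbl_listed("127.0.0.2"))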


  • Help with proposed iSCSI SAN VMware implementation.

    - by obsidian
    We have four Dell servers (with plans to grow to four more), each with 6 NICs. They are running VMware ESXi 4.1. We would like to connect all of them to an Openfiler iSCSI SAN via HP ProCurve 1810G switches. Based on the design below, is there anything I should be concerned about, or anything unusual I should look out for, when configuring the iSCSI networking on the servers, switches and Openfiler? Should I bond the connections on the servers or simply set them up for failover? The primary goal is to maximize IOPS. Thanks in advance.


  • Hosting and scaling of a Facebook application in the cloud?

    - by DhruvPathak
    We would be building a Facebook application in Django (Python), but we are still not sure where to host it economically, with good provision to scale in case the app goes viral. Some details about the app:
    i) it would be HTML-based, like a website, using Django as the framework;
    ii) 100K is the number of expected page views per day if the app goes viral;
    iii) the users will not generate any media content, only some database data.

    It would be great if someone with more experience could offer guidance on the following points:

    A) Hosting on Google App Engine, Amazon EC2, or some other cloud like Rackspace. Points in App Engine's favor: ease of deployment, cost effectiveness and easy scaling. For EC2: full control of the virtual machine, plus Amazon's NoSQL and RDBMS database services in case we decide to use them.

    B) Does the backend technology affect the monthly cost? E.g., would the CPU and memory usage difference between Django and, for example, a PHP framework like CodeIgniter really make a remarkable difference in running costs? (Here is the article that triggered this thought process: http://journal.dedasys.com/2010/01/12/rough-estimates-of-the-dollar-cost-of-scaling-web-platforms-part-i#comments)

    C) Does something like Heroku, which provides additional services on top of Amazon EC2, prove to be better than raw cloud management?

    It is not that we are trying for premature scaling; we just want a good start so that we are ready to handle unpredicted growth and scale.


  • Memory is free, but still swapping?

    - by japancheese
    Hello, I'm sure this is a pretty basic question, but I'm just trying to get a grasp of what's going on with my Ubuntu (Hardy Heron) server (running a Rails-based site). It seems that I have free memory available, yet the system is reporting that it is still swapping memory (unless I'm reading this incorrectly?). Here is the "free -m" output:

                     total       used       free     shared    buffers     cached
        Mem:          1024        905        118          0         33        409
        -/+ buffers/cache:         462        561
        Swap:         2047         95       1952

    Could anyone explain some possible reasons why it maintains 95 MB of swap at all times (it is never less)? I'm just looking for some leads on things I could check out that would explain exactly how memory is utilized in Linux.
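    A likely explanation: pages that were swapped out during an earlier memory spike stay in swap until they are next touched, so a steady nonzero "used swap" alongside plenty of free memory is normal rather than a sign of ongoing swapping; watching the si/so columns of "vmstat 1" shows whether swapping is actually happening now. The kernel's eagerness to swap is governed by vm.swappiness. A minimal sketch for reading the relevant counters:

        # SwapCached counts pages present both in RAM and on swap, i.e.
        # pages that were swapped out once and later read back in.
        def meminfo():
            info = {}
            with open("/proc/meminfo") as f:
                for line in f:
                    key, value = line.split(":", 1)
                    info[key] = value.strip()
            return info

        mi = meminfo()
        for key in ("MemTotal", "MemFree", "SwapTotal", "SwapFree", "SwapCached"):
            print("%-11s %s" % (key, mi.get(key, "n/a")))

        # 0..100: higher values make the kernel swap more eagerly (default 60)
        with open("/proc/sys/vm/swappiness") as f:
            print("swappiness  " + f.read().strip())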


  • PostgreSQL, Ubuntu, NetBeans IDE (Part 1)

    - by Geertjan
    While setting up PostgreSQL from scratch, with the aim of using it in NetBeans IDE, I found the following resources helpful:
    - http://railskey.wordpress.com/2012/05/19/postgresql-installation-in-ubuntu-12-04/
    - http://ohdevon.wordpress.com/2011/09/17/postgresql-to-netbeans-1/
    - http://ohdevon.wordpress.com/2011/09/19/postgresql-to-netbeans-2/

    For quite a while I had problems relating to "/var/run/postgresql/.s.PGSQL.5432", which had something to do with "postmaster.pid", and which I somehow solved via a link I can't find anymore; it may not have been a problem to begin with. A key moment was this one, which was useful for setting the password of a new user I'd created:
    - http://stackoverflow.com/questions/7695962/postgresql-password-authentication-failed-for-user-postgres

    This was useful for setting up a table in my database, which I did by pasting the below into NetBeans after I made the connection there:
    - http://use-the-index-luke.com/sql/example-schema/postgresql/where-clause

    Now I have a database set up with all permissions everywhere (which turned out to be the hard part) correct. The next step will be to create a NetBeans Platform application based on this database. I'm assuming it shouldn't be any different from what's described in the NetBeans Platform CRUD Tutorial.
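    As an aside on the ".s.PGSQL.5432" errors: they typically mean a client tried the Unix-domain socket while the server wasn't listening there (or wasn't running at all). Forcing a TCP connection is a quick way to isolate that; a minimal sketch, assuming psycopg2 is installed, with hypothetical credentials:

        # Connecting with an explicit host forces TCP and bypasses
        # /var/run/postgresql/.s.PGSQL.5432 entirely.
        import psycopg2

        conn = psycopg2.connect(
            host="localhost",        # forces TCP instead of the unix socket
            port=5432,
            dbname="sample",         # hypothetical database
            user="geertjan",         # hypothetical user
            password="secret",       # hypothetical password
        )
        cur = conn.cursor()
        cur.execute("SELECT version()")
        print(cur.fetchone()[0])
        cur.close()
        conn.close()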


  • Will people respect a Master of Science in IT with a software engineering concentration from RPI?

    - by twneale
    Here's my thing: I got my undergraduate degree in political science, then a law degree. Then I figured out that I love programming, and I'm pretty good at it too. It's fun and rewarding enough for me that I'd prefer to do it for a living over almost any form of pure law practice. So I'm looking at getting a master's degree to put some weight behind a possible career switch. If I actually want to develop software (web, in particular), would people in programming circles respect a Master of Science in IT? Specifically, consider as an example the MS in IT from Rensselaer Polytechnic Institute (with a concentration in software engineering). Here's the home page: http://www.rpi.edu/IT/graduate/masters_program.html

    In particular, I mean to draw a contrast between IT as specifically contemplated by the RPI master's program (an interdisciplinary tech/business program) and other MS degrees in computer science or software engineering that focus more on the science and technical aspects. I guess I want to make sure that other programmers would respect my credentials and not consider me different or underqualified based on the connotations of the phrase "IT". I believe RPI has an unimpeachable reputation for hard science, and the program seems excellent, but it still matters to me how people in industry would perceive it.


  • Mod a Swing Arm Lamp into an Adjustable Camera Stand

    - by Jason Fitzpatrick
    If you're looking for a simple way to get a bird's-eye view to record your DIY projects or other table-based activities like gaming or tinkering, this simple modification to a swing-arm lamp offers a highly flexible camera mount on the cheap. IKEAHackers reader Stef needed an adjustable arm for his iPhone camera so he could record in a top-down view for some drawing tutorials he was working on. Rather than shell out big bucks for a custom boom arm, he scrounged up a swing-arm lamp with a broken shade in the as-is bin at his local IKEA. To mount the iPhone, he simply attached a car mount for the iPhone to the swing arm and called it good. Hit up the link below for more pictures; even if you don't have an IKEA nearby, swing-arm lamps are cheap and easy to acquire. Forsa Camera Stand [IKEAHackers]


  • Can canonical links be used to make 'duplicate' pages unique?

    - by merk
    We have a website that allows users to list items for sale. Think eBay, except we don't actually handle the sale; we just list the item and provide a way to contact the seller. Anyhow, in several cases sellers may have multiple units of an item for sale. We don't have a quantity field, so they upload each item as a separate listing (and adding a quantity field is not an option). So we have a lot of pages which basically have the exact same info, where only the item # might differ. The SEO guy we've started using has said we should put a canonical link on each page and have the canonical link point to itself. So, for example, www.mysite.com/something/ would have a canonical link of href="www.mysite.com/something/". This doesn't really seem kosher to me; I thought canonical links were supposed to point to other pages. The SEO guy claims doing it this way will tell Google all these pages are indeed unique, even if they do basically have the same content. This seems a little off to me, since what's to stop a spammer from putting up a million pages and doing this as well? Can anyone tell me if the SEO guy's suggestion is valid or not? If it's not valid, do I need to figure out some way to check for duplicated items and automatically pick one of the duplicates to serve as the original and generate canonical links based off that? Thanks in advance for any help.


  • Kill Leaking Connections on SQL Server 2005

    - by Thierry Brunet
    We have a legacy ASP application that leaks SQL connections somewhere. In Activity Monitor, I can see a bunch of idle processes whose Last Batch times are over an hour old. When I look at the T-SQL command batch, these are always FETCH API_CURSORXXX, which as I understand it is caused by improperly closed ASP ADO recordsets. While we try to pinpoint the offending code, is there a way for me to monitor which requests open which cursors? I'm assuming Profiler, but I'm not sure exactly what I should be monitoring. I can see a bunch of calls to sp_cursoropen, but I don't see the API_CURSORXXX name anywhere. Second, would anyone be able to suggest a script we could run to kill these processes, based on the Last Batch time being older than 10 minutes and the Last Batch command being FETCH API_CURSORXXX? (A rough sketch of such a script is below.) For various reasons, we unfortunately don't have any SQL Server DBAs.
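    On the second question, a rough sketch of the requested cleanup, driven from Python with pyodbc rather than pure T-SQL. The connection string and the sysadmin rights needed for KILL are assumptions; it checks DBCC INPUTBUFFER because that is where the FETCH API_CURSOR... text shown by Activity Monitor actually lives:

        # Rough sketch: find sleeping spids idle > 10 minutes whose last
        # batch was a FETCH API_CURSOR..., then kill them. Assumes pyodbc,
        # a hypothetical connection string, and sysadmin permissions.
        import pyodbc

        CONN_STR = ("DRIVER={SQL Server};SERVER=myserver;"
                    "UID=sa;PWD=secret")               # hypothetical

        conn = pyodbc.connect(CONN_STR, autocommit=True)
        cur = conn.cursor()

        cur.execute("""
            SELECT spid
            FROM master..sysprocesses
            WHERE status = 'sleeping'
              AND spid > 50                         -- skip system spids
              AND last_batch < DATEADD(minute, -10, GETDATE())
        """)
        stale = [row.spid for row in cur.fetchall()]

        for spid in stale:
            try:
                cur.execute("DBCC INPUTBUFFER(%d)" % spid)
                row = cur.fetchone()
            except pyodbc.Error:
                continue                            # spid may be gone already
            # EventInfo holds the last batch text Activity Monitor shows
            if row and row.EventInfo and row.EventInfo.startswith("FETCH API_CURSOR"):
                print("killing spid", spid)
                cur.execute("KILL %d" % spid)

        conn.close()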

