Search Results

Search found 26093 results on 1044 pages for 'process monitor'.


  • Why does the token returned by LogonUser() in Win x64 not belong to LOCAL group?

    - by edwinbs
    Hi, I have a piece of code that calls LogonUser() followed by CreateProcessAsUser(). On 32-bit Windows, the resulting process belongs to a user (say, TESTDOMAIN\user1) who belongs to the LOCAL group. However, on x64, the process owner does not belong to LOCAL. The owner still belongs to all the other groups (Authenticated Users, Everyone, etc.). Does anyone know if this is a documented behavior change? Or am I supposed to pass some special flag on x64 when calling LogonUser()? Thanks.
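
    One way to verify what the question describes is to dump the groups in the token returned by LogonUser() and look for the LOCAL well-known SID (S-1-2-0). A minimal C# sketch, assuming a P/Invoke declaration for advapi32's LogonUser and hypothetical credentials; the constants are the standard interactive-logon values:

        using System;
        using System.Runtime.InteropServices;
        using System.Security.Principal;

        class TokenGroupsDump
        {
            [DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
            static extern bool LogonUser(string user, string domain, string password,
                                         int logonType, int logonProvider, out IntPtr token);

            const int LOGON32_LOGON_INTERACTIVE = 2;   // standard interactive logon
            const int LOGON32_PROVIDER_DEFAULT = 0;

            static void Main()
            {
                IntPtr token;
                if (!LogonUser("user1", "TESTDOMAIN", "password",   // hypothetical credentials
                               LOGON32_LOGON_INTERACTIVE, LOGON32_PROVIDER_DEFAULT, out token))
                    throw new System.ComponentModel.Win32Exception();

                var identity = new WindowsIdentity(token);
                var localSid = new SecurityIdentifier(WellKnownSidType.LocalSid, null); // S-1-2-0

                // List every group SID in the token, then check for LOCAL explicitly.
                foreach (var group in identity.Groups)
                    Console.WriteLine(group.Value);
                Console.WriteLine("LOCAL present: " + identity.Groups.Contains(localSid));
            }
        }

    Comparing the output of this dump on 32-bit and 64-bit machines would show directly whether the LOCAL SID is missing from the x64 token.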

    Read the article

  • What Application Indicators are available?

    - by user8592
    I installed Ubuntu 11.04 on one of my systems and I am using the Unity interface. Unity is working quite well so far, but I really miss panel applets for network speed, CPU temperature, and system monitoring. These applets are useful for viewing quick info. Unlike in 10.10, there is no other way to get this info onto the panel or the Unity launcher. There are solutions like screenlets and Conky, but they don't feel appropriate for a clean desktop look. If you know of any, please list third-party indicators, with links so that they can be found.

    Read the article

  • UIActivityIndicator not working properly?

    - by medma
    Hello friends, I have a problem regarding UIActivityIndicator. I call [spinner startAnimating] in the IBAction of a button and then do some processing. After the processing, the activity indicator should stop and the app should navigate to another view. But the activity indicator does not appear. When I remove the line [spinner stopAnimating], the indicator appears, but not at the instant the button is pressed. It shows up just before the other view loads; in practice it barely appears at all, visible only for an instant if you watch very carefully. Thanks in advance for any answer.

    Read the article

  • How to detect Out Of Memory condition?

    - by Jaromir Hamala
    I have an application running on WebSphere Application Server 6.0 and it crashes nearly every day because of an Out-Of-Memory error. From the verbose GC output it is certain there are memory leaks (many of them). Unfortunately the application is provided by an external vendor, and getting things fixed is a slow and painful process. As part of that process I need to gather the logs and heap dumps each time the OOM occurs. Now I'm looking for a way to automate this. The fundamental problem is how to detect the OOM condition. One way would be to create a shell script that periodically searches for new heap dumps, but that approach seems a bit dirty. Another approach might be to leverage JMX somehow, but I have little or no experience in this area and don't have much idea how to do it. Or does WAS have some kind of trigger/hooks for this? Thank you very much for any advice!

    Read the article

  • Problems installing Ubuntu 12.04 LTS and 13.04

    - by user160096
    Despite several attempts, I could not install either version: 12.04 LTS or 13.04. I even swapped mice, since the previous one was not recognized by the system. Configuration: Gigabyte GA-970A-D3 motherboard; 8 GB of DDR3 memory; one 80 GB Samsung SATA II HD with Windows 7 Ultimate SP1; one 1 TB Samsung SATA II HD as a data drive; one 23" Philips CL 234 monitor; one Gigabyte NVidia GeForce GT-220 video card; one Realtek RTL 8139/810x Ethernet card; one Microsoft mouse (with IntelliPoint 8.2 software); one Logitech M-100 mouse (which I used to replace the Microsoft one, WITHOUT SUCCESS!!!). On the last attempt, the Ubuntu installer (in both 12.04 and 13.04, believe it or not) did not recognize the installed Win7... that was when I "threw in the towel"! Despite my sympathy for freedom and for free, good operating systems, difficulties like this discourage users from transitioning/migrating... It is something to think about!

    Read the article

  • Including the functionality of a tool within another program?

    - by darren
    Hi there. I would like to write an application, for my own interest, that graphically visualizes some network concepts. Basically I would like to show the output from tools like ping, traceroute and nmap. The most obvious approach seems to be to use pipes to call out to these tools from my C program and process the information they return. However, I would like to avoid this heavy-handed approach if possible. My question is: is it possible to somehow link against these tools, or are there APIs that can be used to gain programmatic access instead? If so, is this behavior available on a tool-by-tool basis only? One reason for wanting to do this is to keep everything in a single process/address space and to avoid dependence on these external tools. For example, if I wrote an iPhone application, I would not be able to spawn processes to call out to the external tools themselves. Thanks for any advice or suggestions.
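
    As an illustration of the library-API route rather than the fork/exec route (not tied to the asker's platform): many runtimes expose ICMP echo in-process instead of shelling out to the ping binary. A minimal C# sketch using .NET's System.Net.NetworkInformation.Ping; the target host and timeout are just examples:

        using System;
        using System.Net.NetworkInformation;

        class InProcessPing
        {
            static void Main()
            {
                using (var ping = new Ping())
                {
                    // ICMP echo without spawning the external ping tool.
                    PingReply reply = ping.Send("example.com", 2000); // example host, 2 s timeout
                    Console.WriteLine("{0}: {1} ms", reply.Status, reply.RoundtripTime);
                }
            }
        }

    Traceroute can be built on the same primitive by sending echoes with an increasing TTL; port scanning, by contrast, generally needs raw sockets or a dedicated library rather than a simple API call.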

    Read the article

  • How Does Windows Confirm Wi-Fi Access and Whether Hot Spot Authentication Is Necessary?

    - by Jason Fitzpatrick
    Windows is quite adept at telling you whether you have a properly functioning Internet connection, but how exactly does it do so? Digging into how Windows handles the problem offers insight into Windows connectivity messages. Today's Question & Answer session comes to us courtesy of SuperUser, a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.
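
    For background: Windows makes this determination with its Network Connectivity Status Indicator (NCSI), which issues a small HTTP probe and compares the response body to a known string; a captive portal that intercepts the request changes the result and triggers the "additional logon information may be required" message. A rough approximation of that probe in C#, assuming the classic Windows 7/8-era NCSI endpoint and payload (Windows' own check also performs a DNS lookup of dns.msftncsi.com):

        using System;
        using System.Net;

        class NcsiStyleProbe
        {
            static void Main()
            {
                // Assumed probe URL and expected payload (the classic NCSI values).
                const string probeUrl = "http://www.msftncsi.com/ncsi.txt";
                const string expected = "Microsoft NCSI";

                using (var client = new WebClient())
                {
                    string body = client.DownloadString(probeUrl);
                    if (body == expected)
                        Console.WriteLine("Internet access confirmed.");
                    else
                        Console.WriteLine("Probe intercepted: a hot spot login page is likely in the way.");
                }
            }
        }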

    Read the article

  • php + upload.class + not overwriting the image

    - by Paulo
    Hi, I am trying to upload a file with upload.class and I need to overwrite the file when the user uploads a new one. But instead of overwriting, it is saving photo_01, photo_02, etc... The code:

        $foto = new Upload($_FILES['photo']);
        $fotot = new Upload($_FILES['photo']);
        $fotot->image_resize = true;
        $fotot->image_y = 110;
        $fotot->image_x = 110;
        $foto->file_new_name_body = "photo";
        $fotot->file_new_name_body = "photo";
        $foto->file_ovewrite = true;
        $fotot->file_ovewrite = true;
        $fotot->Process("{$dir_fotos}thumbs/");
        $foto->Process("{$dir_fotos}");

    Has anybody already run into this, or does anyone have a solution? Notice that I'm setting file_ovewrite = true. Thanks.

    Read the article

  • Windows 8 for productivity?

    - by Charles Young
    At long last I’ve started using Windows 8.  I boot from a VHD on which I have installed Office, Visio, Visual Studio, SQL Server, etc.  For a week, now, I’ve been happily writing code and documents and using Visio and PowerPoint.  I am, very much, a ‘productivity’ user rather than a content consumer.   I spend my days flitting between countless windows and browser tabs displayed across dual monitors.  I need to access a lot of different functionality and information in as fluid a fashion as possible. With that in mind, and like so many others, I was worried about Windows 8.  The Metro interface is primarily about content consumption on touch-enabled screens, and not really geared for people like me sitting in front of an 8-core non-touch laptop and an additional Samsung monitor.  I still use a mouse, not my finger.  And I create more than I consume. Clearly, Windows 8 won’t be viable for people like me unless Metro keeps out of my hair when using productivity and development tools.  With this in mind, I had long expected Microsoft to provide some mechanism for switching Metro off.  There was a registry hack in last year’s Developer Preview, but this capability has been removed.   That’s brave.  So, how have things worked out so far? Well, I am really quite surprised.  When I played with the Developer Preview last year, it was clear that Metro was unfinished and didn’t play well enough with the desktop.  Obviously I expected things to improve, but the context switching from desktop to full-screen seemed a heavy burden to place on users.  That sense of abrupt change hasn’t entirely gone away (how could it), but after a few days, I can’t say that I find it burdensome or irritating.   I’ve got used very quickly to ‘gesturing’ with my mouse at the bottom or top right corners of the screen to move between applications, using the Windows key to toggle the Start screen and generally finding my way around.   I am surprised at how effective the Start screen is, given the rather basic grouping features it provides.  Of course, I had to take control of it and sort things the way I want.  If anything, though, the Start screen provides a better navigation and application launcher tool than the old Start menu. What I didn’t expect was the way that Metro enhances the productivity story.  As I write this, I’ve got my desktop open with a maximised Word window.  However, the desktop extends only across about 85% of the width of my screen.  On the left hand side, I have a column that displays the new Metro email client.  This is currently showing me a list of emails for my main work account.  I can flip easily between different accounts and read my email within that same column.  As I work on documents, I want to be able to monitor my inbox with a quick glance. The desktop, of course, has its own snap feature.  I could run the desktop full screen and bring up Outlook and Word side by side.  However, this doesn’t begin to approach the convenience of snapping the Metro email client.  Consider that when I snap a window on the desktop, it initially takes up 50% of the screen.  Outlook doesn’t really know anything about snap, and doesn’t adjust to make effective use of the limited screen estate.  Even at 50% screen width, it is difficult to use, so forget about trying to use it in a Metro fashion. In any case, I am left with the prospect of having to manually adjust everything to view my email effectively alongside Word.  Worse, there is nothing stopping another window from overlapping and obscuring my email.  
It becomes a struggle to keep sight of email as it arrives.  Of course, there is always ‘toast’ to notify me when things arrive, but if Outlook is obscured, this just feels intrusive. The beauty of the Metro snap feature is that my email reader now exists outside of my desktop.   The Metro app has been crafted to work well in the fixed width column as well as in full-screen.  It cannot be obscured by overlapping windows.  I still get notifications if I wish.  More importantly, it is clear that careful attention has been given to how things work when moving between applications when ‘snapped’.  If I decide, say to flick over to the Metro newsreader to catch up with current affairs, my desktop, rather than my email client, obligingly makes way for the reader.  With a simple gesture and click, or alternatively by pressing Windows-Tab, my desktop reappears. Another pleasant surprise is the way Windows 8 handles dual monitors.  It’s not just the fact that both screens now display the desktop task bar.  It’s that I can so easily move between Metro and the desktop on either screen.  I can only have Metro on one screen at a time which makes entire sense given the ‘full-screen’ nature of Metro apps.  Using dual monitors feels smoother and easier than previous versions of Windows. Overall then, I’m enjoying the Windows 8 improvements.  Strangely, for all the hype (“Windows reimagined”, etc.), my perception as a ‘productivity’ user is more one of evolution than revolution.  It all feels very familiar, but just better.

    Read the article

  • "input not supported" at login screen after ati driver is installed

    - by squalo78
    I'm running Ubuntu 11.10 and I installed the ATI driver from the official page. When I reboot, GRUB and the splash screen work (at a lower resolution), but instead of the login screen I get an "input not supported" message. If I use Ctrl+Alt+keypad-plus I can see my login screen at 640x480 resolution and log in. I don't know how to make the login screen display at 1440x900@60Hz, which is the resolution set in my session. I'm running Ubuntu 11.10 with an ATI HD4200 video card and an Acer AL1916W monitor that supports 1440x900.

    Read the article

  • mod_perl memory leak

    - by marghi
    Hello, I recently discovered that one of our sites has a memory leak. It's very strange because it happened all of a sudden. I've used GTop to measure the memory size per process, and it tells me that the real value is somewhere around 65 MB per request (on the server), plus an additional 5 MB shared. I tried preloading the modules in the startup.pl file as indicated in the mod_perl performance tuning article. Nothing happened; in fact the shared memory decreased to 3.7 MB. At that point I suspected my application was leaking memory, so before any line of code got executed I measured the memory, just to find out that the total value is in fact 64 MB. My questions are: Is there a default preallocation of memory for each process? Is there a configuration issue? Is mod_perl leaking memory? Thank you very much.

    Read the article

  • What is the easiest way to send a Javascript array via JSON to PHP?

    - by dscher
    I have a few arrays that I want to send to PHP for processing. Using json2.js I will stringify the arrays like so: var JSONlinks = JSON.stringify(link_array); var JSONnotes = JSON.stringify(note_array); but then I'm confused. Do I need to use an XMLHttpRequest object? Is there another way? If that is the simplest way, could someone please share the most basic instance of the code needed to send the data to PHP, where I can then decode the JSON? I think it might really help others in the future. I'm currently using jQuery, and I know there are many frameworks out there, each of which may or may not make this process easier. If you use a framework in your reply, please mention why you'd choose it rather than plain JavaScript.

    Read the article

  • entity framework and dirty reads

    - by bryanjonker
    I have Entity Framework (.NET 4.0) going against SQL Server 2008. The database is (theoretically) getting updated during business hours: delete, then insert, all inside a transaction. Practically, it's not going to happen that often. But I need to make sure I can always read the data in the database. The application I'm writing will never do any writes to the data; it is read-only. If I do a dirty read, I can always access the data; the worst that happens is I get old data (which is acceptable). However, can I tell Entity Framework to always use dirty reads? Are there performance or data integrity issues I need to worry about if I set up EF this way? Or should I take a step back and look at rewriting the process that's doing the delete/insert?
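
    For reference, one common way to get read-uncommitted behaviour with EF on SQL Server is to wrap the query in a TransactionScope that requests that isolation level. A minimal sketch, assuming a hypothetical MyEntities context with an Items set:

        using System;
        using System.Linq;
        using System.Transactions;

        class DirtyReadExample
        {
            static void Main()
            {
                var options = new TransactionOptions
                {
                    IsolationLevel = IsolationLevel.ReadUncommitted  // dirty reads allowed
                };

                using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
                using (var context = new MyEntities())               // hypothetical EF context
                {
                    var items = context.Items.ToList();              // this read may see uncommitted rows
                    Console.WriteLine("Read {0} rows.", items.Count);
                    scope.Complete();
                }
            }
        }

    An alternative worth weighing: enabling READ_COMMITTED_SNAPSHOT on the database lets readers see the last committed values without blocking, avoiding dirty reads entirely.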

    Read the article

  • Oneiric is freezing. Need help diagnosing and filing a bug

    - by mlissner
    Six months ago, I bought a new Sandy Bridge CPU and built myself a nice desktop machine. Until now it hasn't worked...at all. I finally have it working now that Oneiric is released, but it freezes every so often, making it little more than a semi-functional temptation. What happens when the system freezes:
    - the music I have playing enters roughly 5-second loops
    - SSH fails
    - the monitor freezes
    - the mouse freezes
    - the keyboard fails
    The only way to fix it is to do a hard reset...and that sucks. I'd love to at least figure out the source of the freeze so I can file a bug. I've looked in dmesg, kern.log, and the X.org logs. Nothing interesting is in any of them. Since SSH fails, I'm convinced it's not just an X crash: https://wiki.ubuntu.com/X/Troubleshooting/Freeze Anything else I can do?

    Read the article

  • Improving HTML scraper efficiency with pcntl_fork()

    - by Michael Pasqualone
    With the help from two previous questions, I now have a working HTML scraper that feeds product information into a database. What I am now trying to do is improve efficiency by wrapping my brain around getting my scraper working with pcntl_fork. If I split my php5-cli script into 10 separate chunks, I improve total runtime by a large factor, so I know I am not I/O or CPU bound but just limited by the linear nature of my scraping functions. Using code I've cobbled together from multiple sources, I have this working test:

        <?php
        libxml_use_internal_errors(true);
        ini_set('max_execution_time', 0);
        ini_set('max_input_time', 0);
        set_time_limit(0);

        $hrefArray = array("http://slashdot.org", "http://slashdot.org", "http://slashdot.org", "http://slashdot.org");

        function doDomStuff($singleHref, $childPid) {
            $html = new DOMDocument();
            $html->loadHtmlFile($singleHref);
            $xPath = new DOMXPath($html);
            $domQuery = '//div[@id="slogan"]/h2';
            $domReturn = $xPath->query($domQuery);
            foreach ($domReturn as $return) {
                $slogan = $return->nodeValue;
                echo "Child PID #" . $childPid . " says: " . $slogan . "\n";
            }
        }

        $pids = array();
        foreach ($hrefArray as $singleHref) {
            $pid = pcntl_fork();
            if ($pid == -1) {
                die("Couldn't fork, error!");
            } elseif ($pid > 0) {
                // We are the parent
                $pids[] = $pid;
            } else {
                // We are the child
                $childPid = posix_getpid();
                doDomStuff($singleHref, $childPid);
                exit(0);
            }
        }

        foreach ($pids as $pid) {
            pcntl_waitpid($pid, $status);
        }

        // Clear the libxml buffer so it doesn't fill up
        libxml_clear_errors();

    Which raises the following questions:

    1) Given my hrefArray contains 4 URLs: if the array were to contain, say, 1,000 product URLs, would this code spawn 1,000 child processes? If so, what is the best way to limit the number of processes to, say, 10, and (again with 1,000 URLs as an example) split the child workload to 100 products per child (10 x 100)?

    2) I've learned that pcntl_fork creates a copy of the process and all variables, classes, etc. What I would like to do is replace my hrefArray variable with a DOMDocument query that builds the list of products to scrape, and then feed them off to child processes to do the processing, so spreading the load across 10 child workers. My brain is telling me I need to do something like the following (obviously this doesn't work, so don't run it):

        <?php
        libxml_use_internal_errors(true);
        ini_set('max_execution_time', 0);
        ini_set('max_input_time', 0);
        set_time_limit(0);

        $maxChildWorkers = 10;

        $html = new DOMDocument();
        $html->loadHtmlFile('http://xxxx');
        $xPath = new DOMXPath($html);
        $domQuery = '//div[@id=productDetail]/a';
        $domReturn = $xPath->query($domQuery);
        $hrefsArray[] = $domReturn->getAttribute('href');

        function doDomStuff($singleHref) {
            // Do stuff here with each product
        }

        // To figure out: Split href array into $maxChildWorkers # of workArray1, workArray2 ... workArray10.
        $pids = array();
        foreach ($workArray(1,2,3 ... 10) as $singleHref) {
            $pid = pcntl_fork();
            if ($pid == -1) {
                die("Couldn't fork, error!");
            } elseif ($pid > 0) {
                // We are the parent
                $pids[] = $pid;
            } else {
                // We are the child
                $childPid = posix_getpid();
                doDomStuff($singleHref);
                exit(0);
            }
        }

        foreach ($pids as $pid) {
            pcntl_waitpid($pid, $status);
        }

        // Clear the libxml buffer so it doesn't fill up
        libxml_clear_errors();

    But what I can't figure out is how to build my hrefsArray[] in the master/parent process only and feed it off to the child processes. Currently everything I've tried causes loops in the child processes, i.e. my hrefsArray gets built in the master and again in each subsequent child process. I am sure I am going about this totally wrong, so I would greatly appreciate a general nudge in the right direction.

    Read the article

  • How do I view how many concurrent long polling requests there are on my server?

    - by Pascal
    My host is Joyent. My host says I have a 15-process limit, and prstat -J shows those processes, but that doesn't tell me how many long-polling requests are currently being served. I could record it myself, but that would add a lot of performance overhead. I need to know when the server is at its long-polling limits. I know this limit is reached far before the memory or CPU is used up. From experimentation, I've already verified that the number of open long polls is NOT equivalent to the number of processes running, probably because each process has multiple threads, each serving a request. Thanks.

    Read the article

  • CPU temperature increases very fast

    - by myildirim
    In the last few days I found that when I open Google Chrome my laptop gets angry: the fans work hard and I cannot touch the mousepad because of the heat. I use Ubuntu 11.10 on my Toshiba A350-22Z laptop and monitor the CPU and hard drive temperatures. Both cores reached 104 Celsius, and I read somewhere that "if your processor reaches 105 Celsius it harms itself". I cleaned the inside of the laptop a year ago, but here is the point: until the weather reached about 20 Celsius there was no problem. I know a hardware cleaning is the best solution, but how can I solve it another way? I think the problem is the hot weather outside. Does anybody have the same problem? In addition to Google Chrome, I noticed that the processor temperature increases very fast when I open an online video.

    Read the article

  • Google Analytics - visitor path to specific site destination setup and monitoring?

    - by Joshc
    I have a website which uses Google Analytics to track visitors and our banner campaigns. We are promoting 'Purchase Ticket' buttons on our website which send visitors to a third-party website that sells and distributes our tickets. The URL on all the 'Purchase Ticket' buttons is the same throughout the site. Example: http://ticketmaestro.com/events/my-event-2012 In the Analytics control panel, is it possible to set something up where I create a path-to-destination using the above example URL? And then, after this is set up, I want to be able to monitor the path visitors take from when they reach the site to when they click the 'Purchase Ticket' button. Graphs would show:
    - Start Destination
    - Path to Final Destination
    - Final Destination: http://ticketmaestro.com/events/my-event-2012
    Any help, suggestions or terminology would be great, thanks. Josh

    Read the article

  • Build Pipelining and Continuous Integration with Maven and Hudson

    - by Brandon
    Currently my team is considering splitting our single CI build process into a more streamlined multi-stage process, to speed up basic build feedback and isolate different CI concerns. The idea we had was to have each stage exist in Hudson as a different build with the correct Maven goal or Maven plugin execution, then chain them together using Hudson's post-build hooks. However, to my knowledge, Maven as a build tool mandates that invoking any lifecycle phase automatically runs every preceding lifecycle phase. This presents a number of problems, the most significant of which is that Maven recreates the build resources with each distinct call rather than using those of the previous stage. This not only breaks the consistency of the build lifecycle but adds a lot of unnecessary processing overhead. Is there a way to accomplish pipelining with CI using Maven? Assuming there is, is there a way to let Hudson know to use the resources built in the previous stage in the next one?

    Read the article

  • In C#, a label that displays info, then when clicked opens Firefox and searches for the info displayed

    - by NightsEVil
    Hey, I have a label that displays the graphics card name, make and so on, and I'm working on having it so that when it's clicked, it opens Firefox and searches Google for the info found by "name". I tried using Let Me Google That For You, but it searches for each word individually. This is what I've tried so far; it kind of works but there's something wrong:

        private void label13_Click(object sender, EventArgs e)
        {
            ManagementObjectSearcher Vquery = new ManagementObjectSearcher("SELECT * FROM Win32_VideoController");
            ManagementObjectCollection Vcoll = Vquery.Get();
            foreach (ManagementObject mo in Vcoll)
            {
                System.Diagnostics.Process CcleanerA = System.Diagnostics.Process.Start(
                    @"C:\Program Files (x86)\Mozilla Firefox\firefox.exe",
                    "http://google.com/?q=" + (mo["name"].ToString()));
            }
        }
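
    For comparison, a hedged sketch of one way to make the query a single search term: build the search URL with Google's /search?q= path and URL-encode the card name so spaces do not split it into separate words. The WMI query is the same one used above; launching the URL directly opens the default browser, so pass it to firefox.exe instead if Firefox specifically is required:

        using System;
        using System.Diagnostics;
        using System.Management;   // requires a reference to System.Management

        class VideoCardSearch
        {
            static void Main()
            {
                var searcher = new ManagementObjectSearcher("SELECT * FROM Win32_VideoController");
                foreach (ManagementObject mo in searcher.Get())
                {
                    // Encode the whole name so it is treated as one query-string value.
                    string query = Uri.EscapeDataString(mo["Name"].ToString());
                    string url = "http://www.google.com/search?q=" + query;

                    Process.Start(url);   // opens the user's default browser
                }
            }
        }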

    Read the article

  • In WPF Not Able to Kill Adobe Acrobat Reader

    - by Blam
    Problem with killing Adobe Acrobat Reader. Code used to launch the process:

        nativeProcess = new Process();
        nativeProcess.StartInfo.FileName = filePath;
        nativeProcess.Start();

    These are Microsoft Office files or Adobe Acrobat files. Code used to kill:

        if (nativeProcess != null)
        {
            try { nativeProcess.Kill(); } catch { }
        }
        if (nativeProcess != null)
        {
            try { nativeProcess = null; } catch { }
        }

    My problem is that Adobe Acrobat Reader does not Kill() when the app is running as a Citrix app. If I log on at the Citrix box and run it as a desktop app, then the Acrobat Kill() works. Microsoft Office Kills() fine both as a Citrix app and as a desktop app. The user can close Acrobat Reader using the X or close.
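
    A possible workaround, not from the original thread: when a document is launched this way, the process you start may hand the file off to an existing or broker process, so the Process object you hold may no longer be the window the user sees. A hedged sketch that falls back to killing Reader by image name (AcroRd32) if the tracked process has already exited:

        using System;
        using System.Diagnostics;

        class CloseViewer
        {
            static void CloseNativeProcess(Process nativeProcess)
            {
                try
                {
                    if (nativeProcess != null && !nativeProcess.HasExited)
                    {
                        nativeProcess.Kill();
                        return;
                    }
                }
                catch (Exception) { /* fall through to the by-name fallback */ }

                // Fallback: the launched process may have delegated to another Reader
                // instance. On a shared Citrix host, consider filtering by Process.SessionId
                // so other users' Reader sessions are left untouched.
                foreach (Process reader in Process.GetProcessesByName("AcroRd32"))
                {
                    try { reader.Kill(); } catch (Exception) { /* ignore */ }
                }
            }
        }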

    Read the article

  • handling activity destruction in multithreaded android app

    - by Jayesh
    Hi, I have a multithreaded app where background threads are used to load data over the network or from disk/db. Every once in a while the user will perform some action, e.g. fetch news over the network, which will spawn a background AsyncTask, but for some reason the user will then quit the app (press the back button so that the activity gets destroyed). In most such scenarios, I make appropriate checks in the background thread after it returns from network I/O, so that it won't crash by accessing members of the activity that has since been destroyed. However, some corner cases are left where crashes happen, because the background thread accesses some member of the activity that is now null. Do other Android developers have a generic/recommended framework to handle such scenarios? These are the times when I wish Android guaranteed termination of all threads when an activity is destroyed (the same way a regular Linux process cleans up when it quits)... but I guess the Android devs had good reasons for not exposing process lifetimes through the API.

    Read the article

  • How to self-host WCF without IIS

    - by dotnetcoder
    Reading up on WCF, we have the self-hosting option available; one limitation is that we have to manage the host process lifecycle ourselves. What I am exploring here is running the service without IIS and self-hosting it. A few things come to mind: How will request management work? In the case of IIS, it manages the request and gives control to .NET on a particular thread. In the absence of IIS, do we need to write code ourselves to manage incoming requests (say, on a TCP port), or does WCF provide classes to manage requests and spawn threads to process each one? I am aware that for self-hosting this needs to be a Windows service. When self-hosting, how can we cap the number of simultaneous requests on the server? Can it be managed by limiting the thread pool, or can we configure this via WCF? Thanks, dc
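
    For illustration (not part of the original question): WCF's ServiceHost does the listening and dispatching on its own channel/I-O threads, so no hand-rolled socket code is needed, and concurrent calls can be capped with ServiceThrottlingBehavior rather than by touching the thread pool. A minimal sketch, assuming a hypothetical ICalculator contract, CalculatorService implementation and example endpoint address:

        using System;
        using System.ServiceModel;
        using System.ServiceModel.Description;

        [ServiceContract]
        public interface ICalculator
        {
            [OperationContract]
            int Add(int a, int b);
        }

        public class CalculatorService : ICalculator
        {
            public int Add(int a, int b) { return a + b; }
        }

        class Program
        {
            static void Main()
            {
                var baseAddress = new Uri("net.tcp://localhost:8000/calc"); // example address
                using (var host = new ServiceHost(typeof(CalculatorService), baseAddress))
                {
                    host.AddServiceEndpoint(typeof(ICalculator), new NetTcpBinding(), "");

                    // Cap simultaneous requests declaratively.
                    host.Description.Behaviors.Add(new ServiceThrottlingBehavior
                    {
                        MaxConcurrentCalls = 16,
                        MaxConcurrentSessions = 16
                    });

                    host.Open();   // WCF's listener threads take over from here
                    Console.WriteLine("Service is running. Press Enter to stop.");
                    Console.ReadLine();
                    host.Close();
                }
            }
        }

    In a Windows service, the same Open()/Close() calls would go in OnStart and OnStop; the throttling values can also be set in the serviceThrottling element of app.config instead of in code.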

    Read the article
