Search Results

Search found 22569 results on 903 pages for 'win32 process'.


  • Where is my RAM?

    - by gsedej
    I have 2 GB of RAM installed on my machine running Ubuntu 12.04. After some time of use I see much of my RAM in use, and it is not freed even though I have closed all my programs. I have included two screenshots: the first is Gnome System Monitor (all processes) and the second is htop (run with sudo), both sorted by memory usage. From both you can see that it is not possible that all the running apps together take 1 GB of memory: the seven biggest programs sum to about 250 MB, and the others are much smaller (all of them together don't even reach 100 MB). The cache is 300 MB (the yellow ||| bars in htop) and is not included in the 1 GB used. Also, 260 MB is already in swap, which actually makes 1.3 GB of used memory. If I start Firefox (or Chrome) with many tabs, it only has 1 GB available and not the potential 1.5 GB (assuming 0.5 GB is for the system). When I need more RAM, swapping happens. So where is my RAM? Which program is using it? How can I free it so it is available for, e.g., Firefox? (Screenshots: Gnome System Monitor, htop.)
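
    For reference, a few standard probes that often show where the remaining memory sits; this is a sketch to run on the affected machine, not a diagnosis of this particular system:

        free -m                                            # the "-/+ buffers/cache" row is what programs really use
        ps -eo pid,user,rss,comm --sort=-rss | head -n 15  # largest resident processes
        grep -E 'Slab|SReclaimable|Shmem' /proc/meminfo    # kernel slab caches and tmpfs, not charged to any process
        swapon -s                                          # how much swap is in use per device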

    Read the article

  • Error 404 when accessing newly created ASP.NET website on IIS 7.0

    - by Wodzu
    I've created an ASP.NET website and published it to a folder from Visual Studio. Then I copied the folder to the inetpub\wwwroot directory. Next, under IIS, I converted this folder to an application. Unfortunately, when I try to access it as http://localhost/myappname I get a 404 error. I was thinking that maybe IIS is not configured to process .aspx files, but there is a working SharePoint website at http://localhost/default.aspx. Have you got any idea where the problem might be?

    Read the article

  • How can I get the best performance for playing games?

    - by Oli
    I've been playing a couple of Wine games today and decided to switch to metacity to see what the performance difference was like. If you've never done it before, you just run metacity --replace (but don't do that if you use Unity!). Anyway, surprise surprise, it was like playing on a dedicated Windows gaming machine. Playing under metacity today was bliss: much higher framerates and just a fluidity that you'd expect from a native game. I'm not sure I can go back. Switching to metacity is no hardship, but I wonder if there's anything else in the WM landscape that I should try out. I'm essentially looking for suggestions for the best way to play games. Mix up WMs, dedicated X sessions, whatever... as long as it makes Wine games run faster. Small print: one approach per answer (e.g. new X session + OpenBox). We should probably land on a benchmark so we can show percentage improvement over a stock Compiz desktop; I'm open to suggestions in the comments. If people could test an approach and note in the comments how much it improves things for them, that would give others a good idea of whether it's worth the pain.
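
    For anyone who wants to try the swap for a single session, a minimal sketch (assuming a Compiz-based desktop; as the post says, don't do this under Unity, where the WM is part of the shell):

        metacity --replace & disown     # hand the running session over to metacity
        # ... play, benchmark ...
        compiz --replace & disown       # hand it back to compiz afterwards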

    Read the article

  • Stuck GhostScript processes, how to debug?

    - by Jonathan
    I'm having a problem with Ghostscript processes that don't end. It does not happen often; probably once every 3 weeks we see this issue with 1-3 processes. We're running CentOS 6.4 on a VPS from Rackspace. We use PrinceXML to generate PDFs, which uses Ghostscript to handle fonts. Here's an image of top: http://i.stack.imgur.com/J9D7D.jpg. As you can see, those two processes are using a lot of resources; I haven't killed them yet in the hope that someone can help me diagnose them. I'm a developer, not a server admin, so I have a basic knowledge of *nix but no clue how to fix this. I installed strace and ran it on each process with the following command: strace -p 20619 -s 80 -o gs.txt. I left it for 5 minutes, but gs.txt is empty. Thanks in advance!
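
    A few extra probes that sometimes help when strace stays silent (a sketch; the PID 20619 is the one from the question, substitute your own):

        sudo strace -f -tt -p 20619 -o gs-trace.txt   # follow child threads and timestamp every call
        grep -i state /proc/20619/status              # R (spinning) vs. D/S (blocked)
        cat /proc/20619/wchan; echo                   # kernel function the process is sleeping in, if any
        sudo lsof -p 20619                            # files, pipes and sockets it is holding open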

    Read the article

  • Why is access to my database very slow?

    - by Fabien
    I have a MySQL database that used to work perfectly fine, but now it is dead slow on startup. When I type in

        $> mysql -u foo bar

    I get the following usual message for about 30 seconds before I get a prompt:

        Reading table information for completion of table and column names
        You can turn off this feature to get a quicker startup with -A

    Of course, I tried it and it goes a lot faster:

        $> mysql -u foo bar -A

    But why do I have to wait so long on a regular startup? This is not a very big database, and the data does not seem to be corrupted (everything looks fine after startup). I have no other client connecting to the MySQL server at the same time (only one process is shown by the command show full processlist), and I have already restarted the mysqld service. What's going on?
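
    For what it's worth, a small sketch of making the -A behaviour permanent for this client, assuming the delay really is the completion scan the banner describes (~/.my.cnf is the usual per-user option file):

        mysql -u foo -A bar                               # one-off: skip the table/column completion scan
        printf '[mysql]\nno-auto-rehash\n' >> ~/.my.cnf   # make it permanent for this user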

    Read the article

  • LINQ to Twitter Maintenance Feedback

    - by Joe Mayo
    Originally posted on: http://geekswithblogs.net/WinAZ/archive/2013/06/16/linq-to-twitter-maintenance-feedback.aspx

    It's always fun to receive positive feedback on your work. If you receive a sufficient amount of positive feedback, you know you're doing something right. Sometimes, people provide negative feedback too. There are a couple of ways to handle it: come back fighting or engage for clarification. The way you handle the negative feedback depends on what your goals are.

    Feedback Approaches

    If you know the feedback is incorrect and you need to promote your idea or product, you might want to come back fighting. The feedback might just be comments by a troll or a competitor wanting to spread FUD. However, this could be the totally wrong approach if you misjudge the source and intentions of the feedback. In a lot of cases, feedback is a golden opportunity. Sometimes a problem exists that you either don't know about or don't realize the true impact of. If you decide to come back fighting, you might lose the opportunity to learn something new. However, if you engage the person providing the feedback, looking for clarification, you might learn something very important. Negative feedback and its clarification can lead to the collection of useful and actionable data. In my case, and this is what prompted this blog post, I noticed someone who tweeted a negative comment about LINQ to Twitter. Normally, any less-than-stellar comments are from folks that need help, so I help if I can. This was different. It was along the lines of "Don't use LINQ to Twitter". This is an open source project, the comment didn't come from a competing project, and it sounded more like an expression of frustration. So I engaged. Not only did the person respond, but I got some decent quality feedback. What's also interesting is that a couple of other side conversations sprouted on the subject, which gave me more useful data. (Screenshot: LINQ to Twitter thread.)

    Actions

    Essentially, this particular issue centered around maintenance. There are actually several sub-issues at play here: dependencies, error handling, debugging, and visibility. I'll describe each one and my interpretation.

    Dependencies

    Dependencies are where a library has references to other libraries. This means that when you build your application, you need DLLs for the entire dependency graph of your application. There are several potential problems with this, including more libraries for configuration management, potential versioning mismatches, and lack of cross-platform support. In the early days of LINQ to Twitter, I allowed developers to contribute and add dependencies, but it became very problematic (for the reasons stated). It was like a ball and chain that kept me from moving forward. So I refactored and pulled other open source code into my project to eliminate external dependencies. This lets me fix the code in my project without relying on someone else to upgrade or fix their DLL. The motivation for this came from early negative feedback that I treated as important data and acted on. Today, LINQ to Twitter has zero dependencies. Note: rejecting good code from community members who worked hard to make your project better is a painful experience in itself. I have to point out that those contributions were not in vain, because they had a positive influence on my subsequent refactoring, which resulted in a better developer experience.

    Error Handling

    Error handling has been a problem in the past.
    I have this combination of supporting both synchronous and asynchronous (APM) processing that can be complex at times. Within the last 6 months I did a fair amount of refactoring to detect errors and process them properly. I also refactored TwitterQueryException so it includes important data from Twitter. During this refactoring I've made breaking changes that I felt would improve the development experience (small things, like renaming a callback property to Exception rather than Error). I think the async error handling is much better than it was a year ago. For all the work I've done, there is more to do. I think that a combination of more error-handling support (e.g. improving semantics) and education through documentation and samples will improve the error-handling story. Because of what I've done so far it isn't bad, but I see opportunities for improvement.

    Debugging

    Debugging can be painful. Here's why: you have multiple layers of technology to navigate to figure out where the real problem is: the Twitter API, security, HTTP, LINQ to Twitter, and the application. You can probably add your own nuances to that list, but the point is that debugging in this environment can be complex. I think that my plans for error handling will contribute to making the debugging process easier. However, there's more I can do in the way of documentation and guidance. Some of the questions to be answered revolve around how, when something goes wrong, the developer figures out that there is a problem, what the problem is, and what to do about it. One example that has gone a long way toward helping LINQ to Twitter developers is the 401 FAQ. A 401 Unauthorized is the error that the Twitter API returns when a user isn't able to authenticate, and it is one of the most difficult problems faced by LINQ to Twitter developers. What I did was read guidance from Twitter and collect techniques from my own development and from helping other developers, to compile an extensive list of reasons for the 401 and ways to fix the problem. At one time, over half of the questions I answered in the forums were to help solve 401 issues. After publishing the 401 FAQ, I rarely get a 401 question, and when I do it's because the person didn't know about the FAQ. If the person is too lazy to read the FAQ, that's not my issue, but the reduction in support issues has been dramatic. I think debugging can benefit from the education and documentation approach, but I'm always open to suggestions on whatever else I can do.

    Visibility

    Visibility is a nuance of the error handling/debugging discussion, but it is deeply rooted in comfort and control. The questions to ask in this area are what is happening as my code runs and how testable the code is. In support of these areas, LINQ to Twitter does have logging and TwitterContext properties that help you see what's happening on requests. The logging functionality allows any developer to connect a TextWriter to the Log property of TwitterContext to see what's happening. Further, TwitterContext has a Headers property to see the headers Twitter returns and a RawResults property to show the JSON string Twitter returns. From a testing perspective, I've been able to write hundreds of unit tests, over 600 when this post is published, and growing. If you write your own library, you have full control over all of these aspects.
    The tradeoff here is that while you have access to the LINQ to Twitter source code and can modify it for all that visibility, LINQ to Twitter *will* change (which is good), and you will have to figure out how to merge that with your changes (which is hard). The fact is that this is a limitation of any 3rd-party library, not just LINQ to Twitter. So it's a design decision where the tradeoff is between control and productivity. That said, there are things I can do with LINQ to Twitter to make the visibility story more compelling. I think there are opportunities to improve diagnostics. This would be a ton of work, because it would need to provide multi-level logging that can be tuned for production and support any logging provider you want to attach. I've considered approaches such as how the new Semantic Logging application block connects to Windows Error Reporting as a potential target. Whatever I do would need to be extensible without creating native external dependencies (think of how many 3rd-party libraries force a dependency on a logging framework that you don't use). So this won't be an easy feat, but I believe it can be part of the roadmap. I think that a lot of developers are unaware of the existing visibility features, so the first step would be to provide more documentation and guidance. My thought is that this would lead to more feedback that will help improve this area.

    Summary

    Recent feedback highlights some of the items that are important to LINQ to Twitter developers, such as dependencies, error handling, debugging, and visibility. I know that there are maintenance issues that have been problems for LINQ to Twitter developers in the past. I've done a lot of work in this area, such as improving error handling, adding visibility features, and providing extensive API documentation. That said, there is more to be done to make LINQ to Twitter the best Twitter API experience available for .NET developers, and I welcome anyone's thoughts on what I've written here or on new improvements. @JoeMayo

    Read the article

  • Do you keep intermediate files under version control?

    - by Subb
    Here's an example with a Flash project, but I'm sure a lot of projects are like this. Suppose I create an image with Photoshop. I then export this image as a JPEG for integration in Flash. I compile the FLA as an asset library, which is then used in my Flash Builder project to produce the final SWF. So it goes like: psd => jpg -> fla => swc -> Flash Builder project => swf (where "=>" means "produces" and "->" means "is used in"). The PSD, the FLA, and the Flash Builder project are source files: they are not the result of some process. The JPG and the SWC are what I would call "intermediate" files: they are the product of one (or more) source file(s). The SWF is the final result. So, would you keep those intermediate files under version control? How do you deal with them?

    Read the article

  • Why does subshell not inherit exported variable (PS1)?

    - by amn
    After some debugging I finally narrowed down the problem of why my X session xterm prompt does not appear according to my PS1 setting. If I run sh -c env, it doesn't even show PS1 in the list. Why?

        export PS1='test'
        sh -c env    # no PS1 in the list; default prompt appearance (shell name + version)

    Substituting sh with bash yields the same result, so the behavior appears to be the same for both shells/modes. As far as I understood from man bash, the environment of a command run by the shell with -c should include the exported variables. And it does: exporting FOOBAR results in FOOBAR being listed in the env output of the subshell. It appears the story is different when the variable is PS1, however. What is going on? I want my prompt propagated throughout the process tree and the system. For what it's worth, it is set in /etc/profile.d/user.sh (a file I created myself) with the following:

        PS1='\u@\H \w \$ '
        export PS1

    I am running Arch Linux (updated yesterday).
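
    A quick way to see the difference for yourself (a sketch; the grep simply filters the child shell's environment):

        export PS1='test'
        env | grep '^PS1='               # the parent environment does carry PS1=test
        sh -c 'env | grep "^PS1="'       # may print nothing: the child shell can drop PS1
        bash -c 'echo "${PS1-unset}"'    # shows whether the value survived into the child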

    Read the article

  • How can I instruct nautilus to pre-generate PDF thumbnails?

    - by Glutanimate
    I have a large library of PDF documents (papers, lectures, handouts) that I want to be able to quickly navigate through. For that I need thumbnails. At the same time, however, I see that the ~/.thumbnails folder is piling up with thumbs I don't really need, and deleting the thumbnail junk without removing the important thumbs is impossible. If I were to delete them, I'd have to go to each and every folder with important PDF documents and let the thumbnail cache regenerate. I would love to be able to automate this process. Is there any way I can tell nautilus to pre-cache the thumbs for a set of given directories? Note: I did find a set of bash scripts that appear to do this for pictures and videos, but not for any other documents. Maybe someone more experienced with scripting could adjust these for PDF documents, or at least point me in the right direction on what I'd have to modify for this to work with PDF documents as well. Edit: Unfortunately, neither of the answers provided below works. See my comments below for more information. Is there anyone who can solve this?
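
    As a starting point, a hypothetical sketch of such a pre-caching script for PDFs. It assumes evince-thumbnailer is installed, that nautilus reads the freedesktop cache in ~/.thumbnails/normal, and that the cache key is the md5 of the plain file:// URI; paths containing characters that need URI-escaping would need extra handling:

        #!/bin/bash
        # Usage (hypothetical): ./prethumb.sh /path/to/pdf/library
        THUMBDIR="$HOME/.thumbnails/normal"
        mkdir -p "$THUMBDIR"
        find "$1" -type f -iname '*.pdf' | while read -r pdf; do
            abs=$(readlink -f "$pdf")
            hash=$(printf 'file://%s' "$abs" | md5sum | cut -d' ' -f1)
            out="$THUMBDIR/$hash.png"
            [ -f "$out" ] || evince-thumbnailer -s 128 "$abs" "$out"
        done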

    Read the article

  • Software Center - "Items cannot be installed or removed until package catalog is repaired"

    - by Stephanie
    I tried to install Back In Time, and now I keep getting the message "Items cannot be installed or removed until the package catalog is repaired". I have tried sudo apt-get install -f, which gives:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following extra packages will be installed:
          backintime-gnome
        The following packages will be upgraded:
          backintime-gnome
        1 upgraded, 0 newly installed, 0 to remove and 2 not upgraded.
        1 not fully installed or removed.
        Need to get 0 B/39.4 kB of archives.
        After this operation, 24.6 kB of additional disk space will be used.
        Do you want to continue [Y/n]?

    When I answer Y, I get the following message:

        dpkg: dependency problems prevent configuration of backintime-gnome:
         backintime-gnome depends on backintime-common (= 1.0.7); however:
          Version of backintime-common on system is 1.0.8-1.
        dpkg: error processing backintime-gnome (--configure):
         dependency problems - leaving unconfigured
        No apport report written because the error message indicates its a followup error from a previous failure.
        Errors were encountered while processing:
         backintime-gnome
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Running dpkg --configure -a fails with the same dependency error:

        stephanie@stephanie-ThinkPad-T61:~$ sudo dpkg --configure -a
        dpkg: dependency problems prevent configuration of backintime-gnome:
         backintime-gnome depends on backintime-common (= 1.0.7); however:
          Version of backintime-common on system is 1.0.8-1.
        dpkg: error processing backintime-gnome (--configure):
         dependency problems - leaving unconfigured
        Errors were encountered while processing:
         backintime-gnome
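
    One possible repair path, sketched from the version numbers in the output above: bring backintime-common and backintime-gnome to matching versions, then let dpkg finish configuring. This is only a sketch, not a guaranteed fix:

        sudo apt-get update
        sudo apt-get install backintime-common backintime-gnome
        sudo dpkg --configure -a
        sudo apt-get install -f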

    Read the article

  • Extreme Portability: OpenJDK 7 and GlassFish 3.1.1 on Power Mac G5!

    - by MarkH
    Occasionally you hear someone grumble about platform support for some portion or combination of the Java product "stack". As you're about to see, this really is not as much of a problem as you might think. Our friend John Yeary was able to pull off a pretty slick feat with his vintage Power Mac G5. In his words: "Using a build script sent to me by Kurt Miller, build recommendations from Kelly O'Hair, and the great work of the BSD Port team... I created a new build of OpenJDK 7 for my PPC-based system using the Zero VM. The results are fantastic. I can run GlassFish 3.1.1 along with all my enterprise applications." I recently had the opportunity to pick up an old G5 for little money and passed on it. What would I do with it? At the time, I didn't think it would be more than a space-consuming novelty. Turns out... I could have had some fun and a useful piece of hardware at the same time. Maybe it's time to go bargain-hunting again. For more information about repurposing classic Apple hardware and learning a few JDK-related tricks in the process, visit John's site for the full article, available here. All the best, Mark

    Read the article

  • Where would an S3 upload speed cap originate?

    - by CoreyH
    I do a ton of uploading to S3 and am experiencing capped speeds, and I can't quite figure out how to address it. The setup: Windows Server 2008 R2 x64, an external HD, a Java-based upload tool called Jsh3ll, and custom VBS scripts to kick the jobs off. Running one process at a time, I am always limited to about 4 Mbps. I have FiOS at 35/35 Mbps, so it isn't an outright limit. And I can run parallel instances and go all the way up to 35 Mbps, so I know the problem isn't gateway/NIC/machine/Amazon related. Running parallel instances works to a degree as a solution, but it greatly increases the complexity of my workflow. Solving this would make my life dramatically easier. When I was first doing this I played around with a bunch of Windows TCP parameters and was able to briefly get unconstrained bandwidth, but it wasn't repeatable. Thoughts?

    Read the article

  • Video encoding is very slow on Amazon EC2 instance

    - by Timka
    We are using an Amazon EC2 m1.xlarge instance for video re-encoding, and it looks like the actual encoding process takes a very long time: for an average 250 MB video file it takes about an hour to encode.

        Instance: m1.xlarge (Xeon E5645, 15 GB RAM)
        OS: Windows Server 2008 R2 64-bit
        AviSynth version 2.5 (32-bit) + ffms2 plugin (FFmpegSource 1.21)
        FFmpeg SVN-r13712 (libavutil 3213056, libavcodec 3356930, libavformat 3411456, libavdevice 3407872)
        Number of parallel jobs: 3
        Average CPU utilization: ~96%

    Update #1: The source video is mp4/H.264. FFmpeg configure parameters:

        --enable-memalign-hack --enable-avisynth --enable-libxvid --enable-libx264
        --enable-libgsm --enable-libfaac --enable-libfaad --enable-liba52
        --enable-libmp3lame --enable-libvorbis --enable-libtheora --enable-pthreads
        --enable-swscale --enable-gpl

    Video files are encoded to mp4/H.264 with the following extra command-line options:

        -threads 0 -coder 0 -bf 0 -refs 1 -level 30 -maxrate 10000000 -bufsize 10000000

    Read the article

  • Software Manager who makes developers do Project Management

    - by hdman
    I'm a software developer working in an embedded systems company. We have a Project Manager, who takes care of the overall project schedule (including electrical, quality, software and manufacturing), hence his software schedule is very brief. We also have a Software Manager, who is my boss. He makes me write and maintain the software schedule, design documents (high- and low-level design), the SRS, change management, verification plans and reports, release management, reviews, and of course the software itself. We have only one test engineer for the whole software team (10 members), and at any given time there are a couple of projects going on. I'm spending 80% of my time producing these documents. My boss comes from a process background and believes that what we need is better documentation to improve the software:

    (1) He considers the design to be paramount; coding is "just writing the design down", it shouldn't take too long, and "all the code should be written before the hardware is ready".
    (2) He doesn't understand the difference between central and distributed version control, even after we told him it is easier to collaborate with a distributed model.
    (3) He doesn't understand code, yet wants to understand every bug and its proposed solution.
    (4) He believes verification should be done by the developer and validation by the tester. The thing is, though, our verification only checks whether the implementation is correct (we don't write unit tests; they are never considered in the schedule), and validation is black-box testing, so the unit tests are missing.

    I'm really confused. (1) Am I responsible for maintaining all these documents? It makes me feel like I'm doing the software project management, in essence. (2) I don't really like creating documents; I want to solve problems and write code. In my experience, creating design documents only helps to an extent; it's never the solution to better or faster code. (3) I feel the boss doesn't really care about making better products, but only about looking like a good manager in the eyes of management. What can I do?

    Read the article

  • One vs. many domain user accounts in a server farm

    - by mjustin
    We are in the process of migrating a group of related computers (intranet servers, SQL, application servers of one application) to a new domain. In the past we used one domain user account per computer (web1, web2, appserver1, appserver2, sql1, sqlbackup, ...) to access central Windows resources like network shares. Every computer also has a local user account with the same name. I am not sure whether this is necessary, or whether it would be easier to configure and maintain a single domain user account. Are there key advantages or disadvantages of having one single user account vs. dedicated accounts per computer for this group of background servers? If I am not wrong, one advantage besides easier administration of the user account could be that moving installed applications and services around between the computers would no longer require a check of the access rights (except where IP addresses or ports are used).

    Read the article

  • Add entire 300 GB filesystem to Git Annex repository?

    - by Ryan Lester
    By default, I get a "too many open files" error from the process. If I lift the limit manually, I get an error that I'm out of memory. For whatever reason, it seems that Git Annex in its current state is not optimised for this sort of task (adding thousands of files to a repository at once). As a possible solution, my next thought was to do something like:

        cd /
        find . -type d | git annex add --$NONRECURSIVELY
        find . -type f | git annex add   # need to add the parent directories of each file first, or adding files fails

    The problem with this solution is that, from the documentation, there doesn't seem to be a way to non-recursively add a directory in Git Annex. Is there something I'm missing, or a workaround for this? If my proposed solution is a dead end, are there other ways that people have solved this problem?
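
    One workaround worth sketching: raise the open-file limit for the current shell and feed files to git-annex in bounded batches, so it never has thousands of files in flight at once. The batch size and repository path below are placeholders, not tested recommendations:

        ulimit -n 4096
        cd /path/to/annex
        find . -path ./.git -prune -o -type f -print0 |
            xargs -0 -n 200 git annex add
        git annex sync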

    Read the article

  • Explorer forgetting how to cut and paste files

    - by raimesh
    My Windows XP (SP3) machine (at work) occasionally "forgets" how to cut and paste files in Explorer: if I cut or copy a file and then go to paste it elsewhere, the "Paste" and "Paste shortcut" menu items are greyed out. The keyboard shortcuts (Ctrl+X/Ctrl+C/Ctrl+V) don't work either, and neither does drag-and-drop: clicking and dragging a file or folder doesn't change the pointer, and letting go doesn't drop the item anywhere. Restarting the explorer.exe process usually fixes the problem, but sometimes I need to do a full reboot. Note: this may be related to the following existing question, although that's Vista and it sounds like the asker has more permanent problems than I do: http://superuser.com/questions/27045/copy-paste-in-vista-explorer-broken-not-ms-vpc. So, any idea what might be causing the problem and how to avoid it?

    Read the article

  • hMailServer Email + MX Records Configuration

    - by asn187
    I'm trying to make DNS changes to enable email to be sent using hMailServer. My mail server is on a separate machine with a separate IP address. I have already added MyDomain.com and an email account, and I have created an MX record with the mail server set to mail.domain.com and a priority of 20. 1) The question is: how do I now link this MX record for the domain to my mail server / mail server IP address? 2) What changes are needed in hMailServer to complete the process and be able to send emails for the domain? 3) In Settings > SMTP > Delivery of email, what should my configuration look like?
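
    Once the records are published, a quick sanity check can be run from any machine with dig (a sketch; mydomain.com, mail.mydomain.com and the IP address are placeholders):

        dig +short MX mydomain.com        # should list: 20 mail.mydomain.com.
        dig +short A mail.mydomain.com    # should resolve to the mail server's public IP
        dig +short -x 203.0.113.10        # reverse (PTR) record for that IP, relevant for deliverability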

    Read the article

  • Rsync over ssh with root access on both sides

    - by Tim Abell
    I have one older Ubuntu server and one newer Debian server, and I am migrating data from the old one to the new one. I want to use rsync to transfer the data across to make the final migration easier and quicker than the equivalent tar/scp/untar process. As an example, I want to sync the home folders one at a time to the new server. This requires root access at both ends, as not all files on the source side are world-readable and the destination has to be written with the correct permissions into /home. I can't figure out how to give rsync root access on both sides. I've seen a few related questions, but none quite match what I'm trying to do. I have sudo set up and working on both servers.
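
    One common pattern worth sketching: run the transfer as your normal user over ssh, but have the remote side invoke rsync through sudo. It assumes passwordless sudo for rsync on the source host (e.g. a sudoers entry such as "tim ALL=NOPASSWD: /usr/bin/rsync"); hostnames, usernames and paths are placeholders:

        # run locally on the new server, as root, so /home is written with the correct ownership
        sudo rsync -aAXHv --numeric-ids \
            -e ssh --rsync-path="sudo rsync" \
            tim@oldserver:/home/someuser/ /home/someuser/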

    Read the article

  • How to start WebLogic Server using default scripts?

    - by Luz Mestre-Oracle
    There are a few common issues reported when starting WebLogic Server using scripts:

    1. The user is not able to access the WebLogic console.
    2. After a few days/hours, WebLogic Server stops abruptly.
    3. When the user closes PuTTY, they are not able to connect to WebLogic Server anymore.
    4. When the user closes the Windows command prompt, they are not able to connect to WebLogic Server anymore.
    5. WebLogic is started using startManagedWebLogic.cmd/startManagedWebLogic.sh.

    By default, WebLogic Server does not run in background mode, so after you close the window the process finishes as well. On Linux/Unix-based platforms, you need to use:

        nohup ./startManagedWebLogic.sh <Server> <URL> &

    On Windows platforms, you need to start Managed Servers using Windows Services: How to Install MS Windows Services For FMW 11g WebLogic Domain Admin and Managed Servers (Doc ID 1060058.1), http://docs.oracle.com/cd/E23943_01/web.1111/e13708/winservice.htm

    There are a few more reasons that could cause similar symptoms, such as a JVM crash or signals sent by the operating system, but the above is the first place to start. Enjoy!
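
    As a fuller example of the Linux/Unix form, with output captured to a log file (a sketch; the server name, admin URL and paths are placeholders for your own domain):

        cd /path/to/user_projects/domains/mydomain/bin
        nohup ./startManagedWebLogic.sh ManagedServer1 t3://adminhost:7001 \
            > ManagedServer1.out 2>&1 &
        tail -f ManagedServer1.out    # watch startup; closing the terminal no longer kills the server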

    Read the article

  • PowerShell Foreach-Object with if statement not working - help!

    - by Dmart
    I have a batch file that takes a computer name as its argument and installs the Java JRE on that machine remotely. Now I'm trying to run this PowerShell script to repeatedly call the batch file and install Java on any system that it finds without the latest version. It seems to run error-free, but the statements inside the if block never seem to run, even when the if conditional evaluates to true. Can anyone look at this script and point out what I'm possibly missing? I'm using the Quest AD cmdlets and the BSOnPosh module. Thank you.

        get-qadcomputer -sizelimit 0 -name mypc* -searchroot 'OU=MyComputers,DC=MyDomain,DC=lcl' |
          test-host -property name |
          ForEach-Object -process {
            $targnm = $_.name
            $tststr = reg query "\\$targnm\HKLM\SOFTWARE\JavaSoft\Java Runtime Environment" /v Java6FamilyVersion
            if (-not ($tststr | select-string -SimpleMatch '1.6.0_20')) {
              $mssg = "Updating to JRE 6u20 on $targnm"
              Out-Host $mssg
              Out-File -filepath c:\install_jre_log.txt -inputobject $mssg -Append
              cmd /c \\server\apps\java\installjreremote.cmd $targnm
            } else {
              $mssg = "JRE 6u20 found on $targnm"
              Out-Host $mssg
              Out-File -filepath c:\install_jre_log.txt -inputobject $mssg -Append
            }
          }

    Read the article

  • Prevent members of the administrator group from logging in via Remote Desktop

    - by Chris J
    In order to support some build processes on our Server 2003 development servers, we require a common user account that has administrative privileges. Unfortunately, this also means that anyone who knows the password can gain admin privileges on a server. Assume that trying to keep the password secret is a failed exercise. Developers that need admin privileges already have them, so they should be able to log in as themselves. So the question is a simple one: is there anything I can configure to prevent people (ab)using the account to gain administrator access on servers they shouldn't have it on? I'm aware that devs could disable anything that is put in place, but that is then down to process and auditing to track and manage. I don't mind where or how: it can be via the local security policy, group policy, a batch file executed in the user's profile, or something else.

    Read the article

  • How do I extract files from one tarball to another tarball in one step?

    - by Martin
    I have some fairly large tarball archives from which I need to extract some files. I will later repack those files to transfer them to another server. Currently that is a multi-step process for me:

        mkdir ttmp
        tar -vxzf large.tgz -C ttmp/ --strip-components=<INT> <folder-to-be-extracted>

    or alternatively, with wildcards:

        mkdir ttmp
        tar -vxzf large.tgz -C ttmp/ --strip-components=<INT> \
            --wildcards --no-anchored '*pattern*'

    Then I go ahead and recompress the created folder:

        tar -vczf small.tgz ttmp/*
        rm -rf ttmp

    How can I combine these two commands into one? Something like:

        tar -x large.tgz > tar -c small.tgz

    Just to show what I have already tried: whenever I search for the term "extract" I end up here or here or even here. When I use the term "split" I end up here, and that is definitely not what I intend to do. When I use "repack" I end up in strange places.
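
    As far as I know, tar cannot filter one archive straight into another in a single invocation, but the whole round trip can at least be collapsed into one command line with a throwaway scratch directory (a sketch; the --strip-components value and the pattern are placeholders):

        tmp=$(mktemp -d) && \
        tar -xzf large.tgz -C "$tmp" --strip-components=2 \
            --wildcards --no-anchored '*pattern*' && \
        tar -czf small.tgz -C "$tmp" . && \
        rm -rf "$tmp"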

    Read the article

  • PHP-FPM runs PHP scripts as root

    - by fwalch
    I have a web server set up using nginx and PHP-FPM listening on a Unix socket. In my php-fpm.conf, I have specified:

        user = www
        group = www

    When I run ps aux, I can see that the php-fpm worker processes run as www; the php-fpm master process runs as root. However, I noticed that PHP scripts are executed as root; at least that's the output of echo get_current_user();. What can I do to run scripts as the www user? And how can this even happen if the worker processes run as www?
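
    Two checks that may help separate the questions (a sketch; /srv/www and the URL are placeholders for your own web root). Note that PHP's get_current_user() returns the owner of the script file, not the user the process runs as; posix_geteuid() reports the latter:

        ps -o pid,ppid,user,args -C php-fpm      # master runs as root; workers should show www

        printf '<?php var_dump(posix_geteuid(), get_current_user());\n' \
            | sudo tee /srv/www/whoami.php >/dev/null
        curl http://localhost/whoami.php          # compare the uid with: id -u www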

    Read the article

  • How to draw the line between quality and time?

    - by m3th0dman
    On one hand, I have been taught by various software engineering books ([1] as an example) that my job as a programmer is to make the best possible software: great design, flexibility, easy maintainability, and so on. On the other hand, although I realize that I actually write software for money and not for entertainment (and although it is very nice to write good code, plan ahead, and refactor after writing), I wonder whether it is always best for the business (after all, we should be responsible). Does the business always benefit from the best code? Maybe I'm over-engineering something, and it's not always useful. So how should I know when to stop in the process of achieving the best possible code? I am sure that experience makes a difference here, but I believe it cannot be the only answer. [1] Uncle Bob, in Clean Code, says on page 6: "They [managers] may defend the schedule and requirements with passion; but that's their job. It's your job to defend the code with equal passion."

    Read the article
