Search Results

Search found 28301 results on 1133 pages for 'external process'.

Page 740/1133

  • AirPort Express configuration

    - by Christina
    We are trying to set up remote access to a computer that houses a server for a particular program we are running. The program says we need to configure the office router. In the firewall settings it says to open ports 5345-5351 (TCP only). Port forwarding: you will also need to forward the same range of ports (5345-5351) to the computer running the Server. This typically requires that the computer running the Server be assigned a static IP on the local network. We are having trouble figuring out which IP address we actually need to use on the client side of this program in order to access the server computer. Can someone walk us through this process? We are working on Mac OS X 10.5. Thank you in advance!
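
    For reference, this is roughly how I understand the static IP part on the server Mac (the addresses and service name below are only placeholders for whatever our AirPort actually hands out, not values from our network):

        # Pin the server Mac to a fixed LAN address so the port forward has a stable target
        networksetup -setmanual "Ethernet" 10.0.1.50 255.255.255.0 10.0.1.1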

    Read the article

  • Working with Git on multiple machines

    - by Tesserex
    This may sound a bit strange, but I'm wondering about a good way to work in Git from multiple machines networked together in some way. It looks to me like I have two options, and I can see benefits on both sides:
    1. Use git itself for sharing: each machine has its own repo and you have to fetch between them. You can work on either machine even if the other is offline, which by itself is pretty big, I think.
    2. Use one repo that is shared over the network between machines. There is no need to do git pulls every time you switch machines, since your code is always up to date, and you never have to worry that you forgot to push code from your other, now out-of-reach, non-hosting machine, since you were working off a fileshare on this machine.
    My intuition says that everyone generally goes with the first option. But the downside I see is that you might not always be able to access code from your other machines, and I certainly don't want to push all my WIP branches to github at the end of every day. I also don't want to have to leave my computers on all the time so I can fetch from them directly. Lastly, a minor point is that all the git commands needed to keep multiple branches up to date can get tedious. Is there a third way to handle this situation? Maybe some third-party tools are available that help make this process easier? If you deal with this situation regularly, what do you suggest?
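
    For option one, the kind of workflow I mean looks roughly like this (the hostname and path here are made up, just to illustrate fetching between the machines over the network):

        # On the desktop, add the laptop's working repo as a remote and pull from it
        git remote add laptop ssh://me@laptop/home/me/project
        git fetch laptop
        git merge laptop/master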

    Read the article

  • What is a good web interface for remote Linux load monitoring?

    - by Jakobud
    I'm looking for some type of remote Linux monitoring software that you can view using a web interface, and I'm not just looking for the basic load information. I'm also looking for process information, similar to the info that you get from top. I'd just like to be able to pop open this web page to view what's going on with the server at a moment's notice. For example, perhaps just a basic PHP page on the server that uses AJAX to display and refresh the results of the top command in the page. I was thinking about writing something like this, but I don't want to reinvent the wheel.
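
    To be concrete, the server-side core of the page I'm imagining would just wrap one batch-mode run of top, something like the command below, with the page polling it every few seconds via AJAX:

        # One non-interactive pass of top, trimmed to the first 40 lines of output
        top -b -n 1 | head -n 40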

    Read the article

  • Puppet exported resource naming

    - by Tim Brigham
    I am working on setting up a collection of Splunk nodes to be deployed by Puppet. One of the steps in this process is importing the trusts to allow these nodes to automatically find each other. I've looked over several options and it appears that exported resources are the only ready way to go for this to work. The files I need to create are under /opt/splunk/etc/auth/distServerKeys/<node name>/trusted.pem. The source for each of these files should be /opt/splunk/etc/auth/distServerKeys/trusted.pem, one per node. What syntax do I need to make this work? The samples I've looked at all appear to have the same source and destination file name.

    Read the article

  • Some Problems Can't Be Outsourced

    - by mikef
    More and more companies are becoming attracted to the idea of Infrastructure as a Service (or IaaS). It would seem that you can outsource the provisioning and management of your services, encompassing everything from email, through your servers, workstations and software, all the way down to your LAN and internet services. This type of outsourcing can be a very attractive option for companies that have tight budgets, are short of technical skills, or don't have the means to provide long-term IT support. Essentially, they can outsource their services at low short-term costs that are knowable and controllable, quickly and easily scalable, and that generate a minimum of hassle for internal staff. If you want to get a sophisticated IT infrastructure set up in a hurry, without the usual high buy-in costs or the task of finding and hiring the right specialists, it would seem the way to go, particularly when their salesmen are hypnotizing you with oleaginous phrases such as "we are closely aligned with our client organization's core business requirements, providing agile services". It sounds too good to be true, and so it is. Whereas the costs will initially have been calculated on the annual renewal fees and service fees for ongoing support, there are other charges too which aren't so obvious. It can end up costing far more than the conventional solution once you take into account the extra costs and the fees for customization and upgrades. The Total Cost of Ownership (TCO) only becomes apparent when it is too late to extract the company easily from the arrangement. After a few years, these annual fees can add up to more than the initial cost of implementing a traditional in-house system. Worse than that, you can then lose the power to determine your own priorities: when you become reliant on this company, with its own schedule of priorities, to implement every change, however simple, you have effectively lost control of your technical infrastructure. This will make senior management very nervous. There is definitely a requirement for this sort of service. If you urgently need an exceptionally high class of service or more expertise than you currently possess, then outsourcing is probably for you. You and your IT colleagues will always have something to do, be it user assistance, smoothing out integrations with an external provider, or working on something entirely new. Heck, if you outsource to IBM, the SysAdmins can go along for the ride and polish their expertise. What you need to figure out is how much your time is worth, because time is ultimately all that outsourcing will buy you and your organization. Now you just need to convince your nervous CEO. Cheers, Michael

    Read the article

  • Bad IIS 7.5 performance on webserver

    - by Robert P.
    I have a web page (ASP.NET 4.0 / MVC 4). On my development machine (i5-2500 3.3 GHz, 8 GB RAM, Win7, VS2010 SP1, Fujitsu Esprimo P700) the page handles 160 requests/sec on the Visual Studio development web server. It handles 250 requests/sec on my local IIS 7.5 (uncompiled web). It handles only 20 requests/sec on a 16-core, 32 GB RAM production server (Fujitsu RX-300, Windows Server 2008 R2, IIS 7.5) (compiled web). Why? I think it's the IIS configuration, but I can't figure out what the problem is. The page runs with 1 worker process on both machines. A web garden is not an option (it helps, but the app isn't compatible with it).

    Read the article

  • List of common pages to have in the footer [closed]

    - by user359650
    I would like to post this question as a reference for webmasters wondering what pages they should include in the footer. I will use answers to complete my initial list:
    About us / About MyCompany / MyCompany
    - About / About us: description of the company, its mission, and its vision.
    - History: summary of milestones achieved by the company.
    - The team / Management / Board of directors: depending on the size of the company there may be one or more pages describing the people involved, according to their position.
    - Awards: list of awards received by the company, if any.
    - In the press / They're talking about us: list of links to external websites, usually highly regarded news websites, which mentioned the company in one of their articles.
    Media
    - Wallpapers: wallpapers with the company logo in different colors and formats that fans can set as the desktop image on their computer.
    - Logos: the company logo in different colors and formats that websites/blogs posting about the company can use for illustration purposes.
    - Media kits: documents, usually in PDF format, summarizing the key company figures and facts that journalists can download and read to get a quick overview of the company.
    Misc
    - Contact / Contact us: contact details the company is prepared to disclose, if any (address, email, phone), or a contact form.
    - Careers / Jobs / Join us: list of open vacancies with a contact form to apply.
    - Investors / Partners / Publishers: information and contact forms for companies willing to become investors/partners/publishers, or a login page to access a portal restricted to those who already are.
    - FAQ: list of common questions and answers to guide users and reduce the number of support requests.
    Follow us / Community
    - Facebook / Twitter / Google+: links to the company's pages/accounts on various social networks.
    Legal
    - Terms / Terms of use / Terms & Conditions: rules users must follow when browsing the website.
    - Privacy / Privacy Statement: explanation of how the company deals with users' personal data and what users can do about it (request information to be deleted, etc.).
    - Cookies: page that is appearing on more and more websites due to new regulation (notably in the EU) imposing more transparency and control for users regarding cookies (e.g. the BBC cookie page).
    Any input is welcome. PS: if someone with enough rep could add the footer tag that would be great (min. 300 required).

    Read the article

  • Automate opening HTML and printing to PDF

    - by craigpatik
    I need a way to automate the following process in Windows 7:
    1. Open an .html file in Internet Explorer
    2. Print to PDF
    3. Save the PDF with a patterned file name (i.e., original_name_YYYY-MM-DD.pdf)
    Ideally, I could drag and drop several files or open a whole folder of files at once and a PDF would be created for each one. A command line solution is also acceptable. The files have to be opened in the browser because parts of the page are rendered with JavaScript on page load. In other words, if you simply right-click on the file in Explorer and choose "print", the resulting file is not the same because the JS didn't run. If it helps, Internet Explorer can be set as the default browser, and a PDF printer can be set as the default printer.
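
    If a different rendering engine is acceptable, the kind of command-line invocation I have in mind would be something like the following (wkhtmltopdf is one tool that executes the page's JavaScript before rendering; the file names are only examples, and the date part would still need to be filled in by whatever script loops over the folder):

        wkhtmltopdf original_name.html original_name_2014-01-01.pdf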

    Read the article

  • Emacs stops taking input when a file has changed on disk [migrated]

    - by recf
    I'm using Emacs v24.3.1 on Windows 8. I had a file change on disk while I had an Emacs buffer open with that file. As soon as I attempt to make a change to the buffer, a message appears in the minibuffer: File blah.txt changed on disk; really edit the buffer? (y, n, r or C-h). I would expect to be able to hit r to have it reload the disk version of the file, but nothing happens. Emacs completely stops responding to input. None of the listed keys work, nor do any other keys as far as I can tell. I can't C-g out of the minibuffer. Alt-F4 doesn't work, nor does Close window from the task bar. I have to kill the process from Task Manager. Anyone have any idea what I'm doing wrong here? In case it's various modes not playing nice with each other, for reference, my init.el is here. Nothing complex. Here's the breakdown:
    - better-defaults (ido-mode, remove menu-bar, uniquify buffer `forward, saveplace)
    - recentf-mode
    - custom frame title
    - visual-line-mode
    - require final newline and delete trailing whitespace on save
    - Markdown mode with auto-mode-alist
    - Flyspell with Aspell backend
    - Powershell mode with auto-mode-alist
    - Ruby auto-mode-alist
    - Puppet mode with auto-mode-alist
    - Feature (Gherkin) mode with auto-mode-alist
    The specific file was a Markdown file with Github-flavored Markdown mode and Flyspell mode enabled.

    Read the article

  • Apache Issues on Windows Server 2008

    - by dlackey
    I'm looking to reinstall Apache 2.2.25 since I continue to get these errors in the Windows Application log every 2-5 minutes:
        Faulting application name: httpd.exe, version: 2.2.25.0, time stamp: 0x51dd049c
        Faulting module name: zlib1.dll, version: 1.2.3.0, time stamp: 0x4790446a
        Exception code: 0xc0000005
        Fault offset: 0x00002bad
        Faulting process id: 0x38e8
        Faulting application start time: 0x01cfbfd70cdfbc4f
        Faulting application path: C:\Apache2\bin\httpd.exe
        Faulting module path: C:\Apache2\bin\zlib1.dll
        Report Id: 745f20de-2bca-11e4-bd5d-002590f28d7e
    If the new install doesn't work or if there are some "issues", can I simply restore the Apache2 directory from backup and then I'll just be back where I started? I thought about just renaming the current install to something like C:\Apache2_old and, if something fails, deleting the new install and renaming C:\Apache2_old back to C:\Apache2. What do you all think?

    Read the article

  • Lotus Notes 8.5 quota

    - by Cividan
    We're using Lotus Notes 8.5 and I have a user who was over his quota, as he had sent 6 emails with attachments over 800 MB (no comment...). I deleted these oversized emails and emptied the trash, but Domino keeps sending email warnings about the quota. I checked in the All Documents view and they are no longer there, and I emptied the trash again. I saw a post on the internet saying to compact his database. When I go under File, Application, Properties and click on the Info tab, I see that he uses 35.7% of the 3 GB database. When I click on "Compact" I see a message saying the compacting of the database is being processed. The message disappears after about a minute, but nothing else seems to happen, and when I look back later the space problem has not changed. Any advice would be appreciated.

    Read the article

  • Amazon Careers website - are resumes processed in plain text format only?

    - by sapphiremirage
    The submission site has the following options: "Please upload your resume (Word Document, max size: 512 KB) OR Please copy and paste the text version of your file here", with a text box below the latter option. I went ahead and uploaded my shiny LaTeX resume (as a PDF), despite the fact that they seem to want a Word Document, and there didn't seem to be any issues. However, when I went back to edit my profile, there was no evidence that my PDF had been uploaded, other than a text version of my resume, awfully formatted and clearly stripped from the PDF, sitting in the text box below "Please copy and paste the text version of your file here". Exasperated, I did a quick and dirty copy of the text from my resume into a Word doc and uploaded that. Same result: no evidence of a file uploaded, just a stripped text version in the textbox. What I'm wondering now is, are they only going to look at the text version of my resume? If that's the case then I'm obviously going to edit it so that it looks halfway decent and doesn't contain such atrocities from the conversion as "Other Skills: LTEX". I can pretty up plain text files without too much effort, so this isn't that big of a deal. However, my LaTeX resume is going to look better than anything I can do in plain text, so if the site is actually keeping a copy of that, then I certainly don't want to overwrite it. Has anyone here either gone through the Amazon hiring process or interviewed candidates, and knows how this works? (i.e. When on site with Amazon, did the interviewers have diversely formatted resumes, or did they all look suspiciously similar?)

    Read the article

  • A way to auto cycle (close) through all screen sessions

    - by JBWhitmore
    I frequently use screen when I log into the interactive nodes of a supercomputer that I have access to, and I often run things and move on. There are about 20 separate nodes that I can log into, and if I check any one of them I'll have something like 4 detached sessions. Each of those sessions will have maybe 5 screen sessions within it. Is there a quick way to cycle through all of these and close them down if they are not running any processes? My current process is to run screen -ls, then screen -r ####, then type exit until I'm back to the base screen.
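
    To sketch what I mean by cycling automatically (this rough version just quits every detached session outright; it doesn't yet check whether anything is still running inside, which is the part I'm unsure how to do cleanly):

        # List detached sessions and tell each one to quit
        for s in $(screen -ls | awk '/Detached/ {print $1}'); do
            screen -S "$s" -X quit
        done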

    Read the article

  • Automation of software installation - should I ask for text or file?

    - by Denis
    I am preparing a software installer for my application in a Windows environment. During installation it asks for a Subscriber ID, which should be entered into a text field. I am wondering if this is the best solution for mass installations. I know that for mass installations IT teams use systems like Microsoft System Center which allow automated deployment. But I do not know much about the capabilities of such systems. Can such systems automate data entry into text fields? Would it be better to change the installation process and ask not for text but for a file which contains the Subscriber ID? By the way, I am looking for beta testers for my software. This software lets users view Microsoft Project files without having Microsoft Project installed.
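
    For context, my understanding is that deployment tools usually pass values on the installer's command line rather than typing into fields; if the installer is (or can be made) MSI-based, a hypothetical sketch would be something like the following (the SUBSCRIBERID property name is made up for illustration):

        REM Silent install with the Subscriber ID passed as an MSI property
        msiexec /i MyApp.msi /qn SUBSCRIBERID=12345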

    Read the article

  • Restart single uWSGI application (when it's in emperor mode)

    - by Oli
    I'm running uWSGI in emperor mode to host a bunch of Django sites based on their individual configs. These are supposed to update when it detects a change in the config file, and this largely works when I just touch the relevant uwsgi.ini file. But occasionally I'll mess something up in the Django site and the server won't load. Yeah, yeah, I should be testing better, but that's not really the point. When this happens, uWSGI seems to mark the site as dead and stops trying to run it (which seems to make sense). Even after I fix the underlying issue, no amount of touching will get that site's uWSGI process up and running. I have to reload the whole uWSGI server (knocking dozens of sites out at once for a few seconds). Is there a way to force uWSGI to just reload one of its sites?

    Read the article

  • yum update with shared cache

    - by Sammitch
    We've got a big batch of RHEL6 machines that are due for patching, and for some reason the process here does not involve a local repo. I'm new here; I've asked why ("it just didn't work") and I don't have enough time to make it work before the window that's already scheduled. So the usual method is to install yum-downloadonly and run yum update --downloadonly --downloaddir=/mnt/cifs_share and then yum update /mnt/cifs_share/*.rpm, which just does not look right to me since not all of these machines have the same set of installed packages. The method I tried today was mounting the share to /var/cache/yum/x86_64/6Server/rhel-x86_64-server-6/packages/, which worked, but then yum automatically deleted everything once it finished. I've looked over the yum man page, but I don't see any flag I can feed it to stop it from deleting everything, nor a flag like up2date's --tmpdir=/mnt/cifs_share. Can anyone out there help me kludge this together until I can get a local repository working?
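
    The closest thing I've found so far is the keepcache option in yum.conf rather than a command-line flag, though I haven't verified how well it behaves with the cache directory sitting on a CIFS share; the sketch I'm imagining is roughly:

        # Ensure keepcache=1 under [main] in /etc/yum.conf so yum leaves downloaded
        # packages in its cache directory instead of deleting them after the update
        sed -i 's/^keepcache=.*/keepcache=1/' /etc/yum.conf
        grep -q '^keepcache=1' /etc/yum.conf || echo 'keepcache=1' >> /etc/yum.conf
        yum update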

    Read the article

  • Improved Customer Experience, but at what Cost? See the DELL Computer experience with RTD

    - by Richard Lefebvre
    We can all probably agree that improving your customers' experience is a good thing. But a key question many people are asking is whether it will help your organization and, in particular, what the financial benefits are. That's a good question, especially when companies ARE experiencing phenomenal return on investment (ROI). Of course, there are many factors that impact ROI or other measures of success, but we'd like to share some success stories as examples of customer experience in action and delivering positive results. If you would like to learn more about the economics of customer experience, see Brian Curran's presentation at the Oracle Customer Experience Summit last month. In this series of blog posts, we'll share actual customer stories. Today's example is Dell, which uses Oracle Real-Time Decisions (RTD) and Siebel CRM as part of their customer experience portfolio to better understand their customers' needs and wants and provide consistent interactions. Regular readers of this blog are probably familiar with Siebel, but RTD may be new to many of you. RTD is a complete decision management solution that delivers real-time decisions and recommendations and automatically renders decisions within a business process to create tailored messaging for every customer interaction. What does that mean? In the video below, Dell describes how customer experience is important not just for one interaction channel, but across all "vehicles." RTD is helping Dell understand customer behavior and communicate with the customer in a more relevant manner, across all communication or interaction channels, including sales and service call centers, email marketing and online. Dell continues to expand its use of RTD because the benefits are showing up in sales, service and marketing results, including a 19% increase in close rates, faster issue resolution and a 40% improvement in revenue per click in email marketing. Video link. By Tony Berk on Nov 15, 2012

    Read the article

  • TraceTune supports uploading Zip files

    - by Bill Graziano
    I’ve been using the online version of ClearTrace more and more lately.  When I get to a new client it’s just much easier to upload a trace file rather than install ClearTrace. That means I’ve finally been adding more features to it.  The two latest features are around ease of use. You can now upload a ZIP file that contains a trace file.  Trace files are already somewhat compressed.  Putting it in a ZIP file further compresses it by a factor of 8X or 9X in my testing. That means you can start with a 100MB trace and end up with a 10MB-12MB ZIP file to upload.  I’m consistently able to get over 150,000 events in a 100MB ZIP file.  That gives me a pretty good look at a system. The second part of this is that files are now processed asynchronously.  After you upload a file you’ll be taken to a processing page that updates every few seconds with the number of rows processed.  It generally takes under a minute to process a 100MB trace file but I *hated* staring at a blank screen. Give TraceTune a try.  It’s getting easier to use every day.

    Read the article

  • Is it safe to change the time on a VM hosting server?

    - by hydroparadise
    So, I noticed there's about a 10 minute drift on my VM hosting server from what the time is supposed to be. In traditional environments, I would just restart the system (and change the BIOS time if necessary). The hosting server is Ubuntu 12.04. Understanding that some processes could be time sensitive (NTP?), I was wondering how this might affect the relation between the host and the hosted systems (currently hosting 4: 3 Ubuntu 12.04 servers, one of which is a web server, and 1 Windows Server 2008 file server). I am using VirtualBox 4 with its headless option. Ultimately, I am trying to avoid shutting down the host (which would mean shutting down the other hosted systems). Is this safe?
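
    What I'm considering, if it's safe, is correcting the drift in place rather than rebooting; a rough sketch (assuming the ntp and ntpdate packages are available on the Ubuntu 12.04 host) would be:

        # Stop the daemon, step the clock once, then let ntpd keep it in sync
        sudo service ntp stop
        sudo ntpdate pool.ntp.org
        sudo service ntp start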

    Read the article

  • Issue with broken disk on Solaris with raidctl - how to proceed

    - by weismat
    I have a SunFire T2000 server which has 2 mirrored disk pairs. The server required a replacement of the system battery. After swapping the battery, at first no disks were found. After booting from CD we managed to find the disks, but now one disk is broken and raidctl reports a failed synchronisation. The boot process now stops when trying to mount the file systems. The power light of the broken drive is not even blinking. What is the best way to proceed now? Fortunately I could live with losing the data on the drive as it is backed up, but I would like to keep the rest of the data, as it contains /etc, and get the server booting again.

    Read the article

  • Acrobat Reader, and indeed all Adobe products, are freezing and crashing on print

    - by 5tratus
    Everything was working fine, right up till I had to do some driver work to get my scanner to work - now I can't seem to print from any Adobe product. I click print and the program freezes, it stops responding, and in the case of Acrobat Reader, it crashes. In the case of InDesign CS4, I have to stop the process in Task Manager; in the case of Fireworks CS3, I think it just crashes. Printing a PDF hangs and crashes inside Firefox and IE browsers too. My printer works, and I can print from MS Word, Excel, and directly by right-clicking on a non-Adobe file and choosing print. But when I try it from an Adobe product, it hangs or crashes. I'm running Windows 7 64-bit, my version of Adobe Reader is 10.1.11, Windows is updated, and I don't have any unusual extensions.

    Read the article

  • How to generate new CSRs for TLS use in sendmail?

    - by Mikey B
    Sendmail 8.13.8 | CentOS 5.x
    Hi Guys, I'm using CA-signed TLS certificates on my sendmail server and they are up for renewal soon. Our new CA doesn't like our old CSR, so I need to generate a new CSR. Can someone point me to the procedure for doing this (without affecting the production certs that are already in use)? I'm paranoid about overwriting the old TLS certs in the process of generating a CSR. Most of the instructions I've found are for implementing self-signed TLS certs -- which isn't an option for me at this time. I'm thinking it would be something like:
        openssl req -new -nodes -out new-tls.csr -keyout new-tls-private.key
    But I wasn't sure if I was missing some options there, such as the -x509 option... -M
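
    For comparison, the fuller variant I've been sketching is below (the file names are just examples; writing the key and CSR to new files should leave the certs currently in use untouched, and as far as I understand the -x509 flag would produce a self-signed certificate rather than a CSR, so it shouldn't be needed here):

        openssl req -new -newkey rsa:2048 -nodes \
            -keyout new-tls-private.key \
            -out new-tls.csr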

    Read the article

  • Ubuntu 12.04 menu bar, nautilus, terminal, and gtk themes not working after installation of Gimp 2.8

    - by Chris
    I installed Gimp 2.8 from this PPA: ppa:otto-kesselgulasch/gimp. After that, my system began having problems. This is my thought process in trying to fix what's happened, and the order in which it happened: I noticed the menu bar at the top changed from an opaque black to perfectly clear, and the titles of applications and the hidden buttons reacted slowly. No big deal, I restarted to see if it fixed it. It didn't; in fact, when the logon screen came up, the password field was grey and boxy like a default Windows 98 theme (that's the best I can describe it), as were all the option buttons for GTK programs. I open a terminal to try and reinstall GTK, but the terminal is just a black screen with no ability to input commands. I go to a tty and reinstall gtk3 and gtk2 (I have both on my system; I don't think they're in conflict, they hadn't been beforehand). I restart. Nothing doing. I log in, and Nautilus isn't placing icons on my desktop. I click the launcher. It flashes, but no window opens. I try to open it via Alt+F2, nothing. I purge ubuntu-desktop, restart, reinstall ubuntu-desktop. Nothing. I have no clue what to do at this point, so I'm asking for any help diagnosing the problem and fixing it.
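
    One thing I haven't tried yet is rolling back everything that came from that PPA; as I understand it, that would look roughly like this (ppa-purge downgrades the PPA's packages to the versions in the Ubuntu archive):

        sudo apt-get install ppa-purge
        sudo ppa-purge ppa:otto-kesselgulasch/gimp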

    Read the article

  • How to multiplex subtitles with avi or mp4 in Linux?

    - by woofeR
    I have been searching for something which can multiplex subtitles with video files in a Linux environment. The key thing is that it should softly embed the subtitle in the video, not encode it again (like avidemux). After this multiplexing process, the user should be able to turn the subtitles on or off, using VLC for example. While searching, I found a piece of software which can do exactly what I need, named AVI-Mux GUI, in the Windows environment. However, I need a Linux alternative to this software. Thanks.
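
    To illustrate the kind of invocation I'm after, here is one example I've come across (it uses mkvmerge and produces an MKV container rather than AVI/MP4, so it may not be exactly what I need, but it muxes the subtitle track in as a separate stream without re-encoding the video):

        mkvmerge -o output.mkv input.avi subtitles.srt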

    Read the article

  • Frame rate on one of two machines running same code seems to be capped at 60 for no reason

    - by dennmat
    ISSUE
    I recently moved a project from my laptop to my desktop (machine info below). On my laptop the exact same code displays the fps (and ms/f) correctly. On my desktop it does not. What I mean by this is that on the laptop it will display 300 fps (for example), whereas on my desktop it will show only up to 60. If I add 100 objects to the game on the laptop I'll see my frame rate drop accordingly; the same test on the desktop results in no change and the frames stay at 60. It takes a lot (~300) of entities before I'll see a frame drop on the desktop, and then it will descend. It seems as though its "theoretical" frame rate would be 400 or 500, but it will never actually get to that and only does 60 until there's too much to handle at 60. This 60-frame cap is coming from nowhere. I'm not doing any frame limiting myself. It seems like something external is limiting my loop iterations on the desktop, but for the last couple of days I've been scratching my head trying to figure out how to debug this.

    SETUPS
    Desktop: Visual Studio Express 2012, Windows 7 Ultimate 64-bit
    Laptop: Visual Studio Express 2010, Windows 7 Ultimate 64-bit
    The libraries (allegro, box2d) are the same versions on both setups.

    CODE
    Main loop:
        while(!abort)
        {
            frameTime = al_get_time();
            if (frameTime - lastTime >= 1.0)
            {
                lastFps = fps / (frameTime - lastTime);
                lastTime = frameTime;
                avgMspf = cumMspf / fps;
                cumMspf = 0.0;
                fps = 0;
            }
            /** DRAWING/UPDATE CODE **/
            fps++;
            cumMspf += al_get_time() - frameTime;
        }
    Note: There is no blocking code in the loop at any point.

    Where I'm at
    My understanding of al_get_time() is that it can return different resolutions depending on the system. However, the resolution is never worse than seconds, and the double is represented as [seconds].[finer-resolution]; seeing as I'm only checking for a whole second, al_get_time() shouldn't be responsible. My project settings and compiler options are the same, and I promise it's the same code on both machines. My googling really didn't help me much, and although technically it's not that big of a deal, I'd really like to figure this out or perhaps have it explained, whichever comes first. Even just an idea of how to go about figuring out possible causes would help, because I'm out of ideas. Any help at all is greatly appreciated.

    Read the article
