Search Results

Search found 4971 results on 199 pages for 'mu mind'.

Page 133/199 | < Previous Page | 129 130 131 132 133 134 135 136 137 138 139 140  | Next Page >

  • Unable to force Debian to do unattended install... libc6 wants interactive confirm

    - by JD Long
    I'm trying to create a script that forces a Debian Lenny install to install the latest version of CRAN R. During the install it appears libc6 is upgraded and the install wants interactive confirmation that it's OK to restart three services (mysql, exim4, cron). This process HAS to be unattended as it runs on Amazon's Elastic Map Reduce (EMR) machines, but I'm running out of options. Here are a few things I've tried. This previous question appears to be exactly what I'm looking for, so I set up my install script as follows:

      # set my CRAN repos... yes, I know there's a new convention where to put these.
      echo "deb http://cran.r-project.org/bin/linux/debian lenny-cran/" | sudo tee -a /etc/apt/sources.list
      echo "deb-src http://cran.r-project.org/bin/linux/debian lenny-cran/" | sudo tee -a /etc/apt/sources.list
      # set the dpkg.cfg options per the previous SuperUser question
      echo "force-confold" | sudo tee -a /etc/dpkg/dpkg.cfg
      echo "force-confdef" | sudo tee -a /etc/dpkg/dpkg.cfg
      export DEBIAN_FRONTEND=noninteractive
      # add key to keyring so it doesn't complain
      gpg --keyserver pgp.mit.edu --recv-key 381BA480
      gpg -a --export 381BA480 > jranke_cran.asc
      sudo apt-key add jranke_cran.asc
      sudo apt-get update
      # install the latest R
      sudo apt-get install --yes --force-yes r-base

    But this script hangs with the following request for input:

    OK, so I tried stopping the services using the following script:

      sudo /etc/init.d/mysql stop
      sudo /etc/init.d/exim4 stop
      sudo /etc/init.d/cron stop
      sudo apt-get install --yes --force-yes libc6

    This does not work and the interactive screen comes back, but this time with only cron listed as the service that must be restarted. So is there a way to make libc6 just restart these services with no user input? Or is there a way to stop cron so it does not cause an interactive prompt? Maybe a creative option I've never thought of? Keep in mind that this system is brought up, some Hadoop code is run, and then it's torn down. So I can put up with side effects and bad behavior that we might not want on a production desktop machine or web server.
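
    One route that comes up for this kind of unattended run (a sketch only, not verified against Lenny's libc6 packaging) is to stop maintainer scripts from restarting services at all, and to pass the conffile options on the apt-get command line so they survive sudo's environment reset instead of relying on an exported variable. The /usr/sbin/policy-rc.d path and the 101 exit code are the documented invoke-rc.d convention:

      # 1) Tell invoke-rc.d not to start/stop/restart anything during package installs.
      echo -e '#!/bin/sh\nexit 101' | sudo tee /usr/sbin/policy-rc.d
      sudo chmod +x /usr/sbin/policy-rc.d

      # 2) Pass the frontend and conffile options directly on the apt-get call, so
      #    they take effect even though sudo resets the environment.
      sudo DEBIAN_FRONTEND=noninteractive apt-get install --yes --force-yes \
          -o Dpkg::Options::="--force-confdef" \
          -o Dpkg::Options::="--force-confold" \
          r-base

      # 3) Remove the policy file and restart the affected services yourself.
      sudo rm /usr/sbin/policy-rc.d
      for svc in mysql exim4 cron; do sudo /etc/init.d/$svc restart; done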

    Read the article

  • Win7 System folder contains infinitely looping SYSTEM(!) directory

    - by Matt
    My Windows 7 Enterprise computer has been crashing fairly frequently recently, so I decided to boot up in safe mode and run the TrendMicro client I have installed. It froze about 10 minutes into the full system scan, so in the spirit of http://whathaveyoutried.com, I started scanning each folder individually. When I got to ProgramData, the AV failed with an uncaught exception. I then went down a level and tried scanning Application Data, which failed as well. Imagine my surprise when I open the folder just to see the same folder again! As far as I can tell, this folder loop continues indefinitely. (If you are trying to recreate this, keep in mind that ProgramData is a hidden folder.) I'm actually a bit concerned that these are system folders, as this is a brand-new computer with a clean installation. I guess I have three questions:

    1. Has anyone else seen/experienced this before? I'm running Win7 SP1.
    2. How do I fix this? I've run CHKDSK /F with no success (although it was incredibly slow).
    3. What are the ramifications of an infinitely recursive directory? Theoretically speaking, each link takes up memory, so shouldn't I have no space available on my hard drive? (I've got about 180GB left.) I noticed that the tree view on the left only shows the "linked folder" icon on the deeper folders -- does this mean anything special? (I've circled the icons or lack thereof in red.) How can the OS even resolve this aberration? And above all, what would happen if I were to select "Expand all folders"??? :P

    Matt

    Read the article

  • Barriers to IPv6 deployment: addressing

    - by sysadmin1138
    There are several things that are keeping IPv6 deployment from being a topic of active discussion here at my work. There are the usual technical issues, but one non-technical one appears to be a major stumbling block on the path to actually getting a deployment project going. Addresses, memorizing of. Specifically, IPv4 addresses are comprehensible, and IPv6 addresses just look like a big long string of hex. The human mind has real trouble memorizing lists of more than 7-8 items, and an IPv4 address (192.168.231.148) has four items in it which makes it easy for us to memorize. A fully populated IPv6 address has not only 8 sections, but each section has 4 hex digits in it. IPv6 addresses were not designed for memorization. To the technician who knows that the DNS server is at 192.168.42.42 (or more likely "42.42", since the company prefix is likely memorized), the idea of memorizing an IPv6 address fills them with dread. Which in turn makes them much less enthusiastic about participating in an IPv6 deployment project. Because of how our network works we're not fully dynamic in terms of v4 addressing. We have several to many subnets that are entirely statically assigned for a variety of reasons, chief among them being that the overhead of static DHCP assignments is perceived as being too great. Also, some devices still aren't smart enough to pull DNS addresses out of DHCP while also having a static assignment, and therefore require manually configured DNS settings. Therefore, some v6 address memorization will have to be done. We're not under any mandate to get v6 out the door, so we don't have pressure from the top. However, it is time to start prepping our infrastructure to handle IPv6 even if we don't convert wholesale. For those of you who have been in IPv6-land for a while, what short-cut methods do you use to discuss or keep track of subnets and specific/critical IP addresses? If I can help reduce some of the dread surrounding IPv6 we might get the project going.
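
    For what it's worth, one low-tech habit that takes some of the dread out of this is planning the static interface IDs to be memorable and leaning on zero-compression, so the typed form stays short. A small illustration, using the 2001:db8::/32 documentation prefix (substitute your real allocation; the interface name is an assumption):

      # Memorable interface IDs, e.g. 53 for DNS, 25 for the mail relay:
      #   DNS server:   2001:db8:42::53
      #   Mail relay:   2001:db8:42::25
      ip -6 addr add 2001:db8:42::53/64 dev eth0
      # and day to day, resolve names instead of memorizing:
      dig AAAA dns1.example.com +short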

    Read the article

  • Filesharing mac 10.6 with windows vista

    - by adam
    I've followed all the tutorials on the net to no avail. I can see my Vista PC in Finder on the Mac, but when I click on it, it tries to connect and fails. The same is true from Vista: it can see the Mac but I can't connect. It won't even offer a login box. So I tried to troubleshoot by using a third computer with XP. It joined the network and can access both the Mac and Vista on the same workgroup. The Mac sees it and it shows in Finder, but I can't access it by clicking on the icon; I have to access it by using the IP of the XP machine... strange. I've turned off all my firewalls. The fact that XP can connect to the others and vice versa, while Vista and the Mac can't connect directly, boggles my mind. Oh, and pings indicate the same: XP has no problem, but Mac to Vista and vice versa fails. Can anyone help?

    Read the article

  • Affordable combined Ruby/Rails/Redmine + Subversion hosting?

    - by Pekka
    I'm a self-employed web developer and after nine years of hard work, I'm looking to become a bit more "vagrant" starting next year, do some much-needed traveling, and work off and on, making use of one of the greatest advantages of a programming job: the ability to work virtually from everywhere. For that, I am looking for a reliable hosting company I can entrust my code to in the form of a number of Subversion repositories, and an installation of the Redmine project management tool. As my financial situation may vary during traveling, I am looking for something I can pay up front for a year or two, and that is obviously not too pricey. I don't care where the company is located, as long as it's trustworthy and solid, meaning it's not likely to go out of business next month. Does anybody have good recommendations? Preferably from your own, personal, good experience. I have looked at CVSDude / Codesion and while they are certainly great, they don't offer Redmine of course, and seem to be aiming toward bigger organizations mainly. What I would need:

    - 2-5 Gigs of space minimum, freely distributable between SVN and Redmine attachments
    - Unlimited number of Subversion projects
    - Access control (team members / checkout-only accounts / etc.); I don't mind configuring the svn settings on a file basis myself
    - The possibility to map a custom domain to the package that is hosted elsewhere
    - Frequent backups and access to those backups through FTP or other means

    I have been running my own virtual server for this until now, but I don't want the hassle, especially on the security side, while I may not always have the internet connection to fix problems that may come up.

    Read the article

  • Where is the bare cygwin package list located and how do I manipulate it?

    - by matnagel
    Where is the bare cygwin package list located and how do I manipulate it programmatically, from a shell, or with some method other than the GUI? I know the GUI (setup.exe), and I'd love to go one or more levels deeper. I can retrieve a list of selected/installed packages ( http://serverfault.com/questions/83456/cygwin-package-management ), but how do I write it back, or write it to a different machine? What I have in mind is that when I install a new Windows I would like to start with my package list in text form, and apply or inject it somehow into the new system. Where is it? In the registry? In a binary file? In a local database? Or has anybody done this; is there a tool, a tutorial? The essence of what I want is to manipulate the selected package list with something other than the GUI. It is OK for me to use the GUI for the setup process, so I could imagine manipulating the package list and then running setup.exe and just clicking through it. Note: I do not want to manipulate the list of already installed packages but of packages that "should be installed". But if this is not possible, maybe there is some workaround, e.g. add an outdated version as installed and the installer will then install the new version.
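
    As a sketch of the usual script-friendly route: on current Cygwin versions the selection data the GUI works from lives under /etc/setup/ (installed.db) rather than in the registry, but rather than editing that file it is generally easier to regenerate a package list and hand it to setup.exe's command line. Option names are the documented ones but have changed over the years, so re-check them against "setup.exe --help" on your version:

      # Old machine, from a Cygwin shell: dump installed package names as one comma-separated line
      # (the awk skips cygcheck's two header lines)
      cygcheck -c -d | awk 'NR>2 {print $1}' | paste -sd, - > packages.txt

      # New machine, from cmd or a batch file (shown here as a comment, since bash
      # does not exist yet on that machine), pointing setup at that list:
      #   setup.exe -q -P <contents of packages.txt> -s http://mirrors.kernel.org/sourceware/cygwin/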

    Read the article

  • Update a bootable OS X drive clone with rsync?

    - by Joe
    The question: is it possible to keep a bootable backup drive clone of OS X updated with rsync? If rsync is not a viable option, are there alternatives?

    The setup: my situation is as shown above. One internal Samsung 840 SSD [120g] is in use as my OS X 10.8 boot disk on a recent model Mac Mini. I have successfully cloned that drive with Disk Utility to a 125g partition of another HDD in an external USB 3 enclosure, and at that point I am able to boot to it.

    The goal: as my last system went out in a fiery blaze taking much valuable data with it, I have a new respect for a proper backup solution and really want to do this right. My goal is to achieve an automated differential backup/update from Disk A to Disk B while, most importantly, maintaining bootability on the external drive. And I would prefer to do this differentially to minimize stress on the drives; hence rsync was the first thing to come to mind.

    What I have tried: following along with Jamie Zawinski's differential Mac bootable backup solution, running this manually initially worked. I tested it with only a very minuscule file change and everything was fine; the external booted and all. Now, after subsequent passes, rsync fails, throwing errors particularly relating to updating 'boot.efi' (I'm not at the machine currently; I will update with the precise log message once I return home). Is this a drive partition size issue? Does rsync require more space? If it can't be done, are there any alternatives? I've heard whispers of dd.
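
    For reference, a minimal sketch of the sort of rsync invocation usually used for this on a stock 10.8 install (Apple's bundled rsync 2.6.9, where -E copies resource forks and extended attributes). The destination volume name and exclusion list are assumptions, and the boot.efi hunch about a locked-file flag is a guess to check, not a diagnosis:

      SRC=/
      DST="/Volumes/CloneBackup"

      sudo rsync -avxE --delete \
          --exclude /Volumes --exclude /dev --exclude /private/var/vm \
          --exclude /.Spotlight-V100 --exclude /.Trashes \
          "$SRC" "$DST"

      # If rsync keeps failing on boot.efi, check whether the copy on the clone carries
      # the locked (uchg) flag and clear it before the next pass:
      ls -lO "$DST/System/Library/CoreServices/boot.efi"
      sudo chflags nouchg "$DST/System/Library/CoreServices/boot.efi"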

    Read the article

  • What kinds of protections against viruses does Linux provide out of the box for the average user?

    - by ChocoDeveloper
    I know others have asked this, but I have other questions related to it. In particular, I'm concerned about the damage that the virus can do to the user himself (his files), not to the OS in general nor to other users of the same machine. This question came to my mind because of that ransomware virus that is encrypting machines all over the world, and then asking the user to send a payment in Bitcoin if he wants to recover his files. I have already received and opened the email that is supposed to contain the virus, so I guess I didn't do that badly because nothing happened. But would I have survived if I had opened the attachment and it was aimed at Linux users? I guess not. One of the advantages is that files are not executable by default right after downloading them. Is that just a bad default in Windows that could be fixed with a proper configuration? As a Linux user, I thought my machine was pretty secure by default, and I was even told that I shouldn't bother installing an antivirus. But I have read some people saying that the most important (or only?) difference is that Linux is just less popular, so almost no one writes viruses for it. Is that right? What else can I do to be safe from this kind of ransomware virus? Not automatically executing random files from unknown sources seems to be more than enough, but is it? I can't think of many other things a user can do to protect his own files (not the OS, not other users), because he has full permissions on them.
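
    To make the execute-bit point concrete, a small illustration in plain bash (the fstab line at the end is an optional hardening idea for a separate /home partition, not something distros ship by default):

      # simulate a freshly saved attachment
      printf 'echo "this would be the payload"\n' > attachment.bin
      ls -l attachment.bin      # -rw-r--r-- : newly created files carry no execute bit
      ./attachment.bin          # fails with "Permission denied"
      chmod +x attachment.bin   # it only runs after a deliberate chmod
      ./attachment.bin

      # Optional extra hurdle: mount the place downloads land with noexec, e.g. in /etc/fstab
      #   UUID=...   /home   ext4   defaults,nodev,nosuid,noexec   0 2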

    Read the article

  • Legal IT documents

    - by TylerShads
    I have been wondering about this for the past week, because my big boss told me to start keeping track of all the things I have fixed, how to fix them, etc., which is reasonable and which I have been doing anyway. But then a related question came to mind: what kind of documentation should I have on hand as far as users go? More specifically, I am talking in terms of EULA, ToC, etc. (correct me please if I'm using the wrong terms), or more specifically a policy, so to speak, for the users and such. I can't say I'm a legal expert, otherwise I'd be a lawyer. The environment the users are in is pretty laid back, so I don't foresee a problem. But assuming a problem should ever arise, what should I have written up/have on hand?

    EDIT: I really should have noted that we are a medical transport facility and have patient records, so I know that something must be done there to comply with HIPAA policies, I believe. I do like what anthonysomerset said about the "if I get hit by a bus" scenario and want to apply it not only to the documentation I am currently writing but also to cases like an employee stealing info from the server, edge cases, theft, etc. As far as our staff goes, it's relatively small: a single HR person, no legal department aside from the two owners' lawyers, and me as the only IT person on staff, plus a guy who is no more than a Mac superuser.

    Read the article

  • How do I find my missing songs from last.fm?

    - by duality_
    My disk failed, taking all my music with it, lots of it. But luckily, I scrobbled every song to last.fm. I am looking for a way to scan my disk for my songs, check last.fm, and tell me which songs present on last.fm are missing from my disk. So to recap: I would need to log into my last.fm account, compile a list of all the songs I have scrobbled, and then scan my computer for missing songs. Is there a program or script that does this? I don't mind it being a shell script, even. Edit: I know this is possible because I came close to it a little while ago. I created a PHP script (web page) that got all my songs through the Last.fm API and then went through my files on disk and read their id3 tags. I got very close: the program showed missing songs, but there were many small issues (id3 reading was buggy, tags had different data, etc.) that required more programming time than I had.
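
    A rough shell sketch of the same idea (assumes jq is installed and a valid key in LASTFM_API_KEY; user.getTopTracks on ws.audioscrobbler.com is the public Last.fm API, but pagination and the tag-mismatch problems hit before are glossed over, and the file-name comparison is deliberately naive):

      USER=your_lastfm_username
      curl -s "http://ws.audioscrobbler.com/2.0/?method=user.gettoptracks&user=$USER&api_key=$LASTFM_API_KEY&format=json&limit=200" \
        | jq -r '.toptracks.track[] | "\(.artist.name) - \(.name)"' | sort -u > scrobbled.txt

      # Build a comparable list from the files on disk (file names only; reading id3 tags
      # with a tool such as id3v2 or eyeD3 would be more robust):
      find /path/to/music -type f \( -iname '*.mp3' -o -iname '*.flac' \) -printf '%f\n' \
        | sed 's/\.[^.]*$//' | sort -u > ondisk.txt

      # Scrobbled tracks with no matching file name:
      comm -23 scrobbled.txt ondisk.txt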

    Read the article

  • Excel workbook intermittently takes 30 seconds to load

    - by Julio Nobre
    I am trying to figure out why a simple .XLS Excel workbook is taking, randomly, 30 seconds to open. Before answering, please bear in mind the following.

    Problem symptoms:
    - Hanging is intermittent and it takes exactly 30 seconds;
    - During hanging there is no CPU or disk activity;
    - It only happens during workbook load; everything runs smoothly after that;
    - Windows Explorer.exe hangs on the folder, but all other folders, the system and applications are still responsive;
    - There are no consecutive hangs; I have to wait a while to reproduce this behaviour;
    - All workbooks were located on a local drive (C:\BPI);
    - The workbook has no macros and no addins;
    - Office 2003 has been in use for several years;
    - The computer is running Windows XP;
    - The computer has several network mapped drives, all addressed to the main file server;
    - Recently, the main file server was replaced by Windows SBS 2011 Standard Edition.

    What I have done so far:
    - I have traced the machine's Explorer.exe using Process Monitor, added the Duration column, and filtered by Duration > 1. That is how I found that the hang was taking exactly 30 seconds. For further information, please refer to Oliver Salzburg's tutorial;
    - Using Process Monitor, I have also figured out that five operations were taking most of the sample's collection duration. Looking at the sample image below, in the Operation column you will notice that one single operation was taking 29 seconds;
    - I have tried different workbooks (all of them smaller than 30 KB);
    - I have, temporarily, removed all shortcuts in the user's Documents folder that were pointing to network drives or shares;
    - I have run CCleaner to fix registry issues;
    - I made sure that there were no external links in the tested workbooks;
    - I have reproduced this behaviour for hours;
    - I have extensively researched for hours on the web.

    Process Monitor's collected and filtered data

    Read the article

  • About to go live: virtual dedicated server or cloud?

    - by morpheous
    I am about to launch my startup company, and we will be going live in a few weeks' time. We have really tight budgetary constraints, since we are bootstrapping, and would prefer not to raise external capital. I can't use shared hosting because I need more control of the server machine (for technical reasons, e.g. using proprietary extensions to PHP, Apache and in the database layer as well), but I want to control costs and don't want to go the fully private server route until we have determined the market size etc. So the only real alternative AFAIK is between a virtual server and the cloud. At the moment, cloud services seem a bit "vague" to me. My understanding is that they allow an entity to outsource its IT infrastructure, which in my mind (at least) is indistinguishable from what a hosting provider provides (at least from a functional point of view). I would like to seek some clarification on exactly what the difference between the two is. Back to my original question, my requirements are:

    - IT infrastructure that can scale with growth
    - The ability to have control of the machine (e.g. to install our internally developed libraries etc.)
    - Backup software that is flexible and comprehensive enough (yet simple to use) that it allows a (secured) backup strategy to be implemented

    On this issue, I have always wondered where the actual backed-up data is stored (since the physical machines are remote, and one can't get access to any actual tapes etc. that it is backed onto). I would also like some advice and recommendations in this area. Regarding data size, I am expecting the dataset to be increasing by a few megabytes of data every day (originally, say, 10Mb; in about a year's time, possibly 50Mb). As an aside, I have decided to deploy on a Debian server (most of my additional libraries etc. were compiled and built on a Debian machine). Mindful of all of the above, I would like some advice (and reasons) as to which route to take. I would also like some advice on which backup software to use, from people who have walked a similar path.

    Read the article

  • Rate limiting bandwidth per IP

    - by Yohan
    First, I am not that good with computers. I even had problems with a Windows PC. Right now I own a restaurant which happens to offer free internet. My ISP has my connection set up using an Ubuntu 11.1 box. The IP address is 192.168.1.16 with netmask 255.255.0.0, DNS is 192.168.1.1 and the gateway is 192.168.1.1. My problem is that my customers complain all day about the slow network. When I receive that kind of complaint, the first thing that comes to my mind is to scout my area, find out who the culprit is, and ask him not to waste our bandwidth. Now, it is getting boring scouting around for people, and I need to set up my Linux box to limit bandwidth. I don't care if their connection can't be faster, but I want to limit each person to 70kbit. Most annoying are the people who use FlashGet and torrents; usually they consume the biggest share of the bandwidth. My question: how can I limit that? Please guide me in an easy way. I've spent a few days reading the tc documentation but don't understand a thing. I am using Ubuntu 11.10. Basically, I want all my customers to get 70kbps each, no matter what.
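
    For the record, the tc pattern usually suggested for a per-client cap is one htb class per IP plus a u32 filter. A minimal sketch, assuming the Ubuntu box routes the customers' traffic and that eth0 is the LAN-facing interface (interface name, parent rate and the sample IPs are placeholders; real setups generate the per-IP classes in a loop from an IP list or DHCP leases). Run as root:

      DEV=eth0

      tc qdisc del dev $DEV root 2>/dev/null                          # start clean
      tc qdisc add dev $DEV root handle 1: htb default 30
      tc class add dev $DEV parent 1:  classid 1:1  htb rate 1000mbit
      tc class add dev $DEV parent 1:1 classid 1:30 htb rate 1000mbit # catch-all for unmatched traffic

      # one 70 kbit/s class per client, plus a filter matching that client's IP
      tc class  add dev $DEV parent 1:1 classid 1:10 htb rate 70kbit ceil 70kbit
      tc filter add dev $DEV parent 1: protocol ip prio 1 u32 match ip dst 192.168.1.50/32 flowid 1:10

      tc class  add dev $DEV parent 1:1 classid 1:11 htb rate 70kbit ceil 70kbit
      tc filter add dev $DEV parent 1: protocol ip prio 1 u32 match ip dst 192.168.1.51/32 flowid 1:11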

    Read the article

  • Remote paging with Nagios when network is down and email won't work -- cellular modems and alternatives

    - by Quinten
    What is the best option for remote paging when network services are down? I'm looking for a solution that can let me know when network services are down during off-hours only, and especially when email/smtp services are out. Therefore, it needs to be redundant to our network and power supply. I'm imagining a cellular modem is one option. What's the price range for these? Is anybody using them and feel that they are worth the cost? I'm imagining that it's something we would end up sending an emergency page ~ 1x/month at most, so I'd like the pricing to reflect that--I don't mind a high per-page cost as long as it has a low recurring cost. Another option would be to expose at least one server to remote ping, and run a check script on a remote server. Are there paid options for this? Currently, we run Nagios on a Linux VM on a Windows 2008 Hyper-V host. It would be great if the solution would work in that environment, but I know it's tricky with external devices, and we could move Nagios to a standalone workstation if needed.
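
    On the cellular-modem side, one common Linux building block is gammu with a USB GSM modem and a plain prepaid SIM, so the recurring cost is essentially whatever the SIM plan costs; a sketch (the package and commands are the stock Debian/Ubuntu gammu tooling, the phone number is a placeholder, and the message text here is read from standard input):

      sudo apt-get install gammu
      gammu-detect                              # prints a [gammu] config section; save it as ~/.gammurc
      echo "nagios: core switch DOWN" | gammu sendsms TEXT +15555550123

    From there it is one Nagios notification command wrapping that last line, attached to a contact whose notification timeperiod covers only the off-hours window.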

    Read the article

  • split virtualization design based on environment or server role?

    - by Dan
    I'm setting up the server environment for a new software development group, which will include 4 test environments. These are web applications, so each environment will have an application server and a database server. I'm planning on buying two physical servers (e.g. 6-core CPU each with 12GB or so of RAM), and I'm thinking virtualization is appropriate here. With that in mind, I've thought of a couple of ways that I could organize the virtualization strategy:

    - Separated by server role: Server 1 has all the application servers, each in their own guest VM. Server 2 has all the databases.
    - Separated by environment: Server 1 has a VM for two of the environments, with the VM containing both the app server and the database server. Server 2 would also contain two test environments, in the same style (app server and database in the same VM).

    The advantage I see with all the app servers on one server and all the databases on another server is that I could probably be more efficient with the database server (one instance running multiple databases). But the other option seems easier to manage (archives/restorations would be contained in a single VM). Any recommendations? TIA.

    Read the article

  • To what extent is size a factor in SSD performance?

    - by artif
    To what extent is the size of an SSD a factor in its performance? In my mind, correct me if I'm wrong, a bigger SSD should be, everything else being equal, faster than a smaller one. A bigger SSD would have more erase blocks and thus more leeway for the FTL (flash translation layer) to do garbage collection optimization. Also there would be more time before TRIM became necessary. I see on Wikipedia that it remarks that "The performance of the SSD can scale with the number of parallel NAND flash chips used in the device" so it seems throughput also increases significantly. Also many SSDs contain internal caches of some sort and presumably those caches are larger for correspondingly large SSDs. But supposing this effect exists, I would like a quantitative analysis. Does throughput increase linearly? How much is garbage collection impacted, if at all? Does latency stay the same? And so on. Would the performance of a 8 GB SSD be significantly different from, for example, an 80 GB SSD assuming both used high quality chips, controllers, etc? Are there any resources (webpages, research papers, presentations, books, etc) that discuss correlations between SSD performance (4 KB random write speed, latency, maximum sequential throughput, etc) and size? I realize this does not really sound like a programming question but it is relevant for what I'm working on (using flash for caching hard drive data) which does involve programming. If there is a better place to ask this question, eg a more hardware oriented site, what would that be? Something like the equivalent of stack overflow (or perhaps a forum) for in-depth questions on hardware interfaces, internals, etc would be appreciated.
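
    Short of published studies, one practical angle is to measure the specific drives rather than reason from capacity alone; a sketch of the 4 KB random-write case with fio (a standard benchmark tool; the file path is a placeholder, the libaio engine assumes Linux, and targeting a file on a scratch filesystem avoids destroying data the way a raw-device run would):

      fio --name=randwrite4k --filename=/mnt/scratch/fio.test --size=4g \
          --rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio --direct=1 \
          --runtime=120 --time_based --group_reporting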

    Read the article

  • Insufficient channel capacity of 1GBit

    - by Roman S
    There is a Caching Server (Varnish): it receives data from Amazon S3 on request, saves it for some time and serves it to the client. We have encountered the problem of insufficient channel capacity of 1GBit. Peak load within 4 hours completely chokes the channel. Server performance is sufficient for now. Approximately 4.5TB of data are transmitted per day; more than 100TB are accumulated per month. The first thought that comes to mind is simply to add one more 1GBit port and sleep peacefully until 2GBit is no longer enough (it may happen quite quickly) or one server is not able to handle it. And then we just need to add new Caching Servers. But then we need a Load Balancer, which will always send requests for one and the same URL to one and the same server (to avoid multiple copies of the same cached objects). Here are the questions:

    - Does a Balancer need bandwidth equal to the sum of the bandwidth of all the Caching Servers?
    - What shall we do in case there are no ports left on the Balancer? Should we add more Balancers or solve the problem by means of round-robin DNS?
    - What are the standard approaches to such problems?
    - Can anyone recommend hosting companies which can solve this problem? We are interested in the American and European markets.

    Read the article

  • Nginx rewrite for link shortener + Wordpress pretty URLs

    - by detusueno
    Okay, so I installed Nginx/PHP/MySQL/Wordpress via an online walkthrough, and it had me enter these rewrites to enable Wordpress pretty URLs:

      if (-f $request_filename) {
          break;
      }
      if (-d $request_filename) {
          break;
      }
      rewrite ^(.+)$ /index.php?q=$1 last;
      error_page 404 = //index.php?q=$uri;

    This is then included in the vhost for my domain. What I'm trying to do now is add some redirection/link shortener rewrites that will play nicely with the setup I have in mind. I'd like to redirect "x.com/y" to "x.com/script.php?id=y" for all external links that I post. The Wordpress link setup right now has almost all internal links beginning with "news" (x.com/news/post-blah, x.com/news/category/1, etc.), BUT I also have a few root links that point to some internal content (x.com/news, x.com/start). I'm guessing that's going to cause some conflicts. What's the best approach to do this? I've never worked with Nginx (or any rewrite rules), but maybe I can distinguish between "x.com/news" and "x.com/news/" to allow it to play nicely? I had a friend set up a working version of this in Apache, and it'd be nice if I could get this up on Nginx again.

    Read the article

  • Installing Linux on a Windows 8.1 laptop

    - by nicoX
    I would like to clean install a Linux distribution such as Ubuntu on my laptop, which runs Windows 8.1. I have two options in mind: clean install or dual boot. My technical question is: my laptop has an 8GB SSD, which it uses to boot Windows, and a 500GB drive for storage. I wonder what that 8GB SSD stores? It can't store the whole Windows install, as that would be much more than 8GB. Also, if I did a clean install of Ubuntu, could I use the 8GB SSD to have Ubuntu boot up quicker, and how would I install it? Option two: if I would like to dual boot, how would I proceed to have the SSD boot both systems? I also wish to ask about the Legacy and UEFI differences. Windows runs with UEFI, so when I'm installing Linux, should I run Legacy, and if I dual boot, which option do I choose?

    Read the article

  • Flow of packet in network

    - by user58859
    I can't visualize the flow of network traffic in my mind. E.g., if there are 15 PCs in a LAN and a packet goes from the router into the local LAN, does it pass through all the computers? That is, does it go to the Ethernet card of every computer, with each computer accepting the packet based on its physical address? Which PC does the packet go to first, the one nearest to the router? What happens if that first PC captures the packet even though it is not meant for it? What happens when a PC broadcasts a message? Does it have to generate 14 packets for all the other PCs, or does only one packet reach all of them? If it is one packet and it is captured by the first PC, how do the other PCs get it? I can't imagine how this traffic actually flows; maybe my analogy is completely wrong. Can anybody explain this to me? Thanks in advance.
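
    One way to stop imagining and watch it happen: on a switched LAN a unicast frame is forwarded only out the port the switch has learned for that destination MAC, while a broadcast frame (destination ff:ff:ff:ff:ff:ff, e.g. an ARP "who-has") is a single frame copied out every port, so every card receives it. A small capture sketch (the interface name is an assumption; -e prints the Ethernet header so the destination MAC is visible):

      sudo tcpdump -i eth0 -e -n arp or icmp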

    Read the article

  • USB transfer speed for Windows 7 is incredibly slow to my external drive

    - by Wolfram
    I'm running Windows 7 Pro and am trying to back up 116 GB of data to my external 1 TB hard drive. My laptop has only USB 2.0 ports and my hard drive is USB 3.0 compatible, as is the cable I'm using. I understand that the transfer speed should still be in accordance with USB 2.0 speeds. However, right now I'm getting 135 KB/s and it's been gradually dropping. For an earlier transfer, I would get between 4 MB/s and 8 MB/s. So, I'm really just wondering what's going on with my transfer rate and what I can do to improve it. I'm currently about 35 GB into the 116 GB transfer. Another strange thing is that the window which shows the transfer status decided to max out at 835 MB, and therefore shows items remaining as 0. However, it is still performing the rest of the transfer, and I can see it still cycling through files. Now that I think about it, it seems plausible that the speed being shown by the window is calculated merely as total data transferred / time elapsed. Since the "counter" of data, as far as what is being displayed in the window, maxed out at 835 MB, as time increases the speed shown is going to keep decreasing, because the 'total data transferred' value isn't being incremented. So with that in mind, I suppose I don't actually know at what rate the data is currently being transferred. Nonetheless, my best speed earlier was only around 8 MB/s. Shouldn't USB 2.0 deliver closer to 35 MB/s? Also, if someone can tell me why the transfer status window is displaying incorrect data information and how to fix this, that would also be appreciated.

    Read the article

  • Keytool and SSL Apache config

    - by Safari
    I have a question about an SSL certificate... I have generated a certificate using this keytool command:

      keytool -genkey -alias myalias -keyalg RSA -keysize 2048

    and I used this command to export the certificate:

      keytool -export -alias myalias -file certificate.crt

    So, I have a .crt file. Now I would like to configure my Apache SSL module. I need to use keytool... at the moment I can't use OpenSSL. How can I configure the module if I have only this certificate.crt file? I see these sections in my ssl.conf:

      # Server Certificate:
      # Point SSLCertificateFile at a PEM encoded certificate. If
      # the certificate is encrypted, then you will be prompted for a
      # pass phrase. Note that a kill -HUP will prompt again. A new
      # certificate can be generated using the genkey(1) command.
      #SSLCertificateFile /etc/pki/tls/certs/localhost.crt

      # Server Private Key:
      # If the key is not combined with the certificate, use this
      # directive to point at the key file. Keep in mind that if
      # you've both a RSA and a DSA private key you can configure
      # both in parallel (to also allow the use of DSA ciphers, etc.)
      #SSLCertificateKeyFile /etc/pki/tls/private/localhost.key

      # Server Certificate Chain:
      # Point SSLCertificateChainFile at a file containing the
      # concatenation of PEM encoded CA certificates which form the
      # certificate chain for the server certificate. Alternatively
      # the referenced file can be the same as SSLCertificateFile
      # when the CA certificates are directly appended to the server
      # certificate for convinience.
      #SSLCertificateChainFile /etc/pki/tls/certs/server-chain.crt

    How can I configure the correct section?
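
    For what it's worth, mod_ssl's SSLCertificateFile/SSLCertificateKeyFile want PEM files, and keytool on its own will not print the private key in PEM form; the closest keytool-only step (Java 6 and later) is converting the keystore to PKCS#12, after which some PKCS#12-aware tool is still needed to emit the PEM pair. A sketch using the alias from the question and keytool's default ~/.keystore location (both assumptions to adjust):

      keytool -importkeystore \
              -srckeystore ~/.keystore -srcalias myalias \
              -destkeystore myalias.p12 -deststoretype PKCS12

      # The usual next step is "openssl pkcs12 -in myalias.p12 -nodes" to emit the PEM
      # key and certificate; without OpenSSL (or an equivalent) on some machine, there
      # is no keytool-only way to produce the files Apache expects.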

    Read the article

  • Removing extended partition without deleting logical in it

    - by HisDudeness
    I'm running a Linux-based laptop, and in order to multi-boot several distros on it, I created an extended partition containing a bunch of logical ones with GParted. Now, after quite a long time with this setup, I've changed my mind because of the consequent lack of storage space for my data partition. Now I want to keep just one distro, like a normal setup, and eventually have some other operating systems stored on external media to plug in and use if I want. Obviously, the partition I want to keep (and to enlarge a little too) is just a logical inside the extended one I want to get rid of. As far as the number of partitions is concerned, I'm OK: I currently have this big distro-dedicated extended partition, plus the swap and data partitions, so there's room for another primary before I delete the extended. But I don't know how to delete the extended partition without touching the logical inside it. I don't want to reinstall the system and lose all my changes and settings, and I don't want to keep an extended partition for a single logical. How can I do it? Do I have to create a new primary, copy the logical's content into it and then delete everything? Will the system boot and keep exactly all the features it has now? Or is there a way to convert an extended into a primary once it contains just one logical? Or can I directly move a logical out of an extended, turning it into a primary? Or, again, am I screwed?
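
    A sketch of the copy-then-delete route, run from a live USB (device names, sizes and the filesystem are assumptions; check yours with lsblk, and back up first, since this is easy to get wrong):

      lsblk -f                                            # identify the logical to keep (say /dev/sda5) and the free space
      parted /dev/sda mkpart primary ext4 300GiB 600GiB   # new primary in the free space (becomes e.g. /dev/sda3)
      mkfs.ext4 /dev/sda3
      mkdir -p /mnt/old /mnt/new
      mount /dev/sda5 /mnt/old
      mount /dev/sda3 /mnt/new
      rsync -aAXH /mnt/old/ /mnt/new/                     # copy the installed system, preserving permissions/attrs
      blkid /dev/sda3                                     # new UUID: put it in /mnt/new/etc/fstab, then chroot and
                                                          # reinstall/update grub; delete the old logical and the
                                                          # extended partition only after the copy boots on its own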

    Read the article

  • mkvmerge: How to merge two videos, one without audio?

    - by ProGNOMmers
    I have two videos, one without audio (the second). Trying to merge them I get this error:

      mkvmerge concat1.webm +concat2.webm -o output.webm
      mkvmerge v5.8.0 ('No Sleep / Pillow') built on Oct 19 2012 13:07:37
      Automatically enabling WebM compliance mode due to output file name extension.
      'concat1.webm': Using the demultiplexer for the format 'Matroska'.
      'concat2.webm': Using the demultiplexer for the format 'Matroska'.
      'concat1.webm' track 0: Using the output module for the format 'VP8'.
      'concat2.webm' track 0: Using the output module for the format 'VP8'.
      'concat2.webm' track 1: Using the output module for the format 'Vorbis'.
      No append mapping was given for the file no. 1 ('concat2.webm'). A default mapping of 1:0:0:0,1:1:0:1 will be used instead. Please keep that in mind if mkvmerge aborts with an error message regarding invalid '--append-to' options.
      Error: The file no. 0 ('concat1.webm') does not contain a track with the ID 1, or that track is not to be copied. Therefore no track can be appended to it. The argument for '--append-to' was invalid.

    Is there a way to tell mkvmerge to make the audio track longer? Thank you!
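
    Judging from the log, the default append mapping is trying to attach concat2.webm's Vorbis track to an audio track that concat1.webm does not have. Two hedged ways around it (mkvmerge's -A/--no-audio is a standard option and track options apply to the input file that follows them; the ffmpeg parameters in the second variant are an assumption):

      # Option 1: drop the audio from the second file so both inputs have a single VP8 track
      mkvmerge -o output.webm concat1.webm --no-audio +concat2.webm

      # Option 2: keep the audio by first giving concat1.webm a silent Vorbis track of the
      # same length (ffmpeg's anullsrc source is one way to generate the silence)
      ffmpeg -i concat1.webm -f lavfi -i anullsrc=r=44100:cl=stereo -shortest \
             -c:v copy -c:a libvorbis concat1-silent.webm
      mkvmerge -o output.webm concat1-silent.webm +concat2.webm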

    Read the article

  • Why is dwm.exe using so much memory?

    - by Leonard Challis
    I've scoured the web, but I'm sick of reading "scan your computer for viruses" and "upgrade your RAM" in answers to similar questions to this. I understand that dwm.exe is for (simply put) caching bitmaps for things like Aero Peek and similar, but as far as I have read it shouldn't be using vast amounts of memory. My colleague and I both have 4GB of RAM, Core 2 Duo, blah, blah -- essentially they're pretty capable machines. His dwm.exe is running at around 30mb; mine is currently running at about half a gig, though it does fluctuate quite a lot. This is while running the exact same applications (currently Zend Studio, Firefox (with Firemin - low memory usage), and Outlook). Every so often I will get a notification asking me if I want to switch to Aero Basic because it's using too much memory, and sometimes it will just switch itself to Basic and let me know why. I know it's possible to stop it switching, but I want to know why it is using so much memory, otherwise it's just papering over the cracks. One thing to add is that this seems to have started after a robbery on Monday, in which two of my monitors were stolen, and I had to temporarily use a couple of alternative monitors. I am now using brand new monitors but the problem is the same. All drivers are installed and working, seemingly fine. Any ideas why the usage is so high? We are using Windows 7 64-bit Professional.

    Read the article
