Search Results

Search found 49518 results on 1981 pages for 'configuration files'.

Page 930/1981

  • Why does BitLocker need a minimum volume size of 64 MB?

    - by Iszi
    Since the future of TrueCrypt appears to be unclear, I figured I'd try to get my data migrated into BitLocker, at least for the time being. I almost never have to access my encrypted data from anything that isn't BitLocker-capable, so cross-platform compatibility isn't a big deal to me at this time. However, I am having a bit of an issue understanding the minimum requirement of a 64 MB volume. With TrueCrypt, I was able to protect small files (and most of my protected files are fairly small) in containers down to 300 KB or even less. When I finally created a VHD of an appropriate size last night (100 MB), it seemed the file system itself only took up about 3 MB, and encrypting it with BitLocker didn't appear to take up any more. While 3 MB is still an order of magnitude larger than the smallest volume I could make with TrueCrypt, it's still relatively reasonable in comparison to 64 MB. This is an especially large amount of overhead (and largely wasted at that, since it's mostly empty space for now) when I consider that some of these volumes will be stored and synced in the cloud. What possible reasons could BitLocker have for requiring volumes to be at least 64 MB, when it doesn't even appear to use that space? (See the BitLocker FAQ on TechNet.)

  • How should I synchronize configurations and data across computers?

    - by lfaraone
    Imagine I have three Ubuntu computers: home, laptop, and beach-house. They all have the same version of Ubuntu (10.04) installed and are kept up to date from the repositories. I use f-spot, thunderbird, and google-chrome on all of the computers. Is there a way to keep the data and configuration in sync across them, without requiring constant connectivity for normal (non-synchronous) usage? For example, they should be usable without network connectivity, so something like NFS won't work. An ideal solution would not require manual action to start the syncing process.
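
    One possible approach, sketched below: a two-way sync tool such as Unison can mirror the relevant configuration directories between machines whenever connectivity is available, while each machine keeps a full local copy for offline use. The hostname, paths, and per-application directories in this profile are assumptions and would need adjusting.

        # ~/.unison/home-sync.prf  -- illustrative Unison profile (hostname and paths assumed)
        root = /home/me
        root = ssh://home.example.org//home/me
        path = .thunderbird
        path = .config/google-chrome
        path = .config/f-spot        # f-spot's data location may differ by version
        batch = true
        prefer = newer

        # run manually, or from cron/anacron when the machine is online:
        #   unison home-sync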

  • Macports install of ack doesn't create correct executable

    - by user1664196
    I am trying to install the p5-app-ack port from MacPorts, but it seems it doesn't create an /opt/local/bin/ack binary at the end:

        $ sudo port search *app-ack
        Password:
        p5-app-ack @1.960.0 (perl)
            A grep replacement that ignores .svn/CVS/blib directories
        p5.8-app-ack @1.960.0 (perl)
            A grep replacement that ignores .svn/CVS/blib directories
        p5.10-app-ack @1.960.0 (perl)
            A grep replacement that ignores .svn/CVS/blib directories
        p5.12-app-ack @1.960.0 (perl)
            A grep replacement that ignores .svn/CVS/blib directories
        p5.14-app-ack @1.960.0 (perl)
            A grep replacement that ignores .svn/CVS/blib directories
        p5.16-app-ack @1.960.0 (perl)
            A grep replacement that ignores .svn/CVS/blib directories

        Found 6 ports.

        $ perl --version
        This is perl 5, version 12, subversion 4 (v5.12.4) built for darwin-thread-multi-2level
        Copyright 1987-2010, Larry Wall
        Perl may be copied only under the terms of either the Artistic License or the
        GNU General Public License, which may be found in the Perl 5 source kit.
        Complete documentation for Perl, including FAQ lists, should be found on this
        system using "man perl" or "perldoc perl". If you have access to the Internet,
        point your browser at http://www.perl.org/, the Perl Home Page.

        $ sudo port install p5-app-ack
        ---> Computing dependencies for p5-app-ack
        ---> Cleaning p5-app-ack
        ---> Updating database of binaries: 100.0%
        ---> Scanning binaries for linking errors: 35.0%
        ---> No broken files found.
        $
        $ ls /opt/local/bin/ac*
        /opt/local/bin/ack-5.12          /opt/local/bin/aclocal           /opt/local/bin/aclocal-1.12
        /opt/local/bin/activation-client /opt/local/bin/acyclic
        $ which ack
        $ ack
        -bash: ack: command not found

    Update: if I then try to install p5.12-app-ack afterwards, I get:

        $ sudo port install p5.12-app-ack
        Password:
        ---> Computing dependencies for p5.12-app-ack
        ---> Cleaning p5.12-app-ack
        ---> Scanning binaries for linking errors: 100.0%
        ---> No broken files found.
        $
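
    A possible next step (a sketch, not a verified fix): the listing above shows that MacPorts installed a versioned binary, ack-5.12, rather than a plain ack. Checking which files the perl-5.12 variant actually installed, and symlinking the versioned binary onto the expected name, may be enough; the port name and link target below are assumptions based on the output above.

        # see exactly what the perl-5.12 variant installed
        port contents p5.12-app-ack | grep /bin/

        # if only ack-5.12 is present, expose it under the usual name
        sudo ln -s /opt/local/bin/ack-5.12 /opt/local/bin/ack
        hash -r          # refresh bash's command lookup cache
        ack --version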

  • Computer freezing while watching Flash videos from net

    - by t3st
    I have Windows 7 Home Basic. While watching videos from the net, the computer starts to freeze within about 5 minutes and shows 100 percent CPU usage. I first thought it was a browser issue, but watching videos in different browsers has the same problem. My system runs the latest Firefox and all my plugins (including Flash) are up to date. After this happens, if I shut down or restart the computer it gets to the login window without any problem, but when I try to log in to any account the system starts to freeze again, and I have to restart and run Windows in safe mode (which doesn't show any problem). I read in an article to run these steps from a command prompt:

        sfc /scannow
        chkdsk

    Only after that does my system work normally, and even then I can't watch any videos on the net without it starting to freeze again (I can watch downloaded videos on the computer without a problem), so I have to do the whole process once more, which takes a lot of time. While running sfc /scannow, the results showed that some files are corrupted and could not be repaired. Can this be the cause of my computer freezing while playing Flash videos, or is it a hardware-related problem? What different steps do I have to take to repair those corrupt files? System Restore works only some of the time.

  • User permissions on Linux (proftpd / nginx)

    - by user55745
    I've been having a complete nightmare trying to configure proftpd. I've got the ProFTPD server working with an SQL database. However, I want any uploaded files to be viewable by the web server running on the same box. The folders get created in /var/tmp/ as:

        drwx------ 2 ftpuser ftpgroup 4096 Oct  8 20:35 50730c4346512
        drwx------ 2 ftpuser ftpgroup 4096 Oct  8 20:38 50730f3a811ca

    I've tried adding www-data to the group with the following:

        usermod -g www-data ftpuser

    But this doesn't allow the web server access. In proftpd.conf I have the following umask set:

        Umask 0022

    It doesn't seem to make a difference what I set that value to.

    /etc/group (I'm sure I've messed up one of these two, but I'm getting desperate):

        ftpgroup:x:2001:www-data
        www-data:x:33:ftpgroup

    /etc/passwd:

        www-data:x:33:33:www-data:/var/www:/bin/sh
        proftpd:x:108:65534::/var/run/proftpd:/bin/false
        ftp:x:109:65534::/srv/ftp:/bin/false
        ftpuser:x:2001:33:proftpd user www-data:/bin/null:/bin/false

    The ftpuser table in the database has uid/gid set to 2001 for both. I'm going absolutely crazy trying to solve this; any help would be greatly appreciated. P.S. If I manually connect to the FTP server I can upload files via FileZilla, but the upload isn't working for the web camera, even though there is traffic going back and forth between the server and the camera.
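
    A minimal sketch of one way to approach this, assuming the goal is group-readable uploads (the paths and group layout follow the question; treat it as a starting point rather than a verified fix): give www-data supplementary membership in ftpgroup instead of changing primary groups, loosen ProFTPD's directory umask, and fix the directories that were already created.

        # add www-data to ftpgroup as a supplementary group (note -a -G, not -g)
        sudo usermod -a -G ftpgroup www-data

        # in proftpd.conf the Umask directive takes a second value for directories:
        #     Umask 022 022
        # (0700 directories are typically the result of a restrictive directory umask such as 077)

        # fix directories that were already created as 0700
        sudo chmod -R g+rX /var/tmp/5073*

        # restart both daemons so the group membership and config changes take effect
        sudo service proftpd restart
        sudo service nginx restart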

  • Activate swap by default

    - by San
    I installed Ubuntu 11.04 (Natty) and set up a swap partition of about 900 MB. After that, I installed Kubuntu 12.10 (Quantal) and repartitioned my hard disk so I had 2048 MB of swap (replacing the 900 MB swap partition). When I run Kubuntu, it's OK, but when I run Ubuntu 11.04 it doesn't use that swap. I can, however, activate it manually with GParted. Some additional information: when I installed Kubuntu Quantal, I made a 256 MB partition (ext4, mount point /boot) which replaced the previous 256 MB partition (ext4, mount point /boot) that I created when I installed Natty. Is something wrong with my configuration?
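
    A likely explanation (an assumption, not a confirmed diagnosis): recreating the swap partition gave it a new UUID, so the older install's /etc/fstab still points at a swap device that no longer exists. A sketch of how that could be checked and fixed from the Natty install:

        # find the UUID of the new swap partition
        sudo blkid | grep swap

        # compare it with the swap line in /etc/fstab
        grep swap /etc/fstab

        # if they differ, edit /etc/fstab so the swap line uses the new UUID, e.g.
        #   UUID=xxxx-xxxx  none  swap  sw  0  0      (the UUID here is a placeholder)

        # then activate everything listed in fstab without rebooting, and verify
        sudo swapon -a
        swapon -s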

  • nginx logrotate config

    - by TomOP
    What's the best way to rotate nginx logfiles? In my opinion, I should create a file "nginx" in /etc/logrotate.d/, fill it with the following code, and do a /etc/init.d/syslog restart after that. This would be my config (I haven't tested it yet):

        /usr/local/nginx/logs/*.log {
            # rotate the logfile(s) daily
            daily
            # adds an extension like YYYYMMDD instead of simply adding a number
            dateext
            # if a log file is missing, go on to the next one without issuing an error msg
            missingok
            # save logfiles for the last 49 days
            rotate 49
            # old versions of log files are compressed with gzip
            compress
            # postpone compression of the previous log file to the next rotation cycle
            delaycompress
            # do not rotate the log if it is empty
            notifempty
            # create mode owner group
            create 644 nginx nginx
            # after the logfile is rotated and nginx.pid exists, send the USR1 signal
            postrotate
                [ ! -f /usr/local/nginx/logs/nginx.pid ] || kill -USR1 `cat /usr/local/nginx/logs/nginx.pid`
            endscript
        }

    I have both the access.log and error.log files in /usr/local/nginx/logs/ and want to rotate both daily. Can anyone please tell me if "dateext" is correct? I want the log filename to be something like "access.log-2010-12-04". One more thing: can I do the log rotation every day at a specific time (e.g. 11 pm)? If so, how? Thanks.
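
    Two hedged notes that may answer the follow-up questions (verify against the man page of the installed logrotate version): by default dateext appends -YYYYMMDD, and the dateformat directive controls that pattern; the time of rotation is decided by whatever invokes logrotate (normally /etc/cron.daily), so a dedicated cron entry is one way to pin it to 11 pm. The paths below are assumptions.

        # in /etc/logrotate.d/nginx, alongside dateext, to get access.log-2010-12-04 style names:
        dateformat -%Y-%m-%d

        # optional: run the rotation at 23:00 via a root crontab entry instead of the daily cron batch
        # (logrotate path and state-file location are assumptions; move the file out of
        #  /etc/logrotate.d if you do this, so the daily run doesn't also process it)
        0 23 * * * /usr/sbin/logrotate -s /var/lib/logrotate/nginx.state /etc/logrotate.nginx.conf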

  • New Dash features and Online accounts missing after 12.04 to 12.10 upgrade

    - by motobói
    I performed the upgrade from 12.04 to 12.10 using update-manager. Unfortunately, something went wrong: when I came back from coffee, the screen was black. I opened a terminal (Ctrl+Alt+T) and ran killall dpkg, as dpkg seemed to be waiting for user input about a configuration-file update (the xdg package, if I remember well). After that I ran do-release-upgrade, which seemed to work well, because I ended up in a graphical session after reboot. The problem is that some 12.10 features are missing, such as Online Accounts and the new online results in the Dash. This made me suspect missing packages or something like that. Please take a look at the upgrade logs and my new dpkg --get-selections output: https://gist.github.com/3919006. dpkg --reconfigure -a didn't solve the problem, nor did apt-get -f install show any problem. do-release-upgrade says my system needs no new packages (even if I change /etc/lsb-release to 12.04). If someone could give me the dpkg --get-selections output of a vanilla 12.10 installation, maybe I could force a system reconfiguration.
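
    A sketch of the workflow the question proposes, assuming a reference dpkg --get-selections listing from a clean 12.10 machine is available (the file names are placeholders):

        # on a clean 12.10 machine (or from a trusted listing):
        dpkg --get-selections > vanilla-12.10-selections.txt

        # on the broken machine: mark everything in the reference list for installation
        sudo dpkg --set-selections < vanilla-12.10-selections.txt

        # let apt install whatever is missing to match those selections
        sudo apt-get dselect-upgrade

        # reinstalling the desktop metapackages is another common, lighter-weight attempt
        sudo apt-get install --reinstall ubuntu-desktop unity gnome-control-center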

  • Help me get my Atheros AR9285 working on Ubuntu 12.10

    - by user100449
    Could you please help me with my Atheros AR9285 wireless card on Ubuntu 12.10? I've already been through all the advice I could find and still cannot get the wireless card to start. I have a Toshiba Portege Z830 laptop where Wi-Fi worked under Windows 7, but after migrating to Ubuntu 12.10 I'm not able to get it to work. My current situation is shown in the image below. This is what I see from lshw:

        *-network UNCLAIMED
             description: Network controller
             product: AR9285 Wireless Network Adapter (PCI-Express)
             vendor: Atheros Communications Inc.
             physical id: 0
             bus info: pci@0000:02:00.0
             version: 01
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress bus_master cap_list
             configuration: latency=0
             resources: memory:c0500000-c050ffff

    And this is what I see from rfkill list:

        0: Toshiba Bluetooth: Bluetooth
                Soft blocked: yes
                Hard blocked: no
        1: hci0: Bluetooth
                Soft blocked: yes
                Hard blocked: no

    Any idea? Thank you, Michal
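
    A hedged set of checks (a sketch based on the output above, not a guaranteed fix): the card shows up as UNCLAIMED, which usually means the ath9k driver never bound to it, and rfkill shows soft blocks that are worth clearing in any case.

        # clear all software rfkill blocks (many Toshibas also have an Fn wireless toggle)
        sudo rfkill unblock all

        # try to load the driver for the AR9285 and see whether it claims the device
        sudo modprobe ath9k
        dmesg | grep -i -E 'ath9k|ar9285' | tail

        # confirm a wireless interface now exists
        iwconfig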

  • How to locate phpmyadmin on ubuntu

    - by Chris
    Okay, I'm usually a Windows user and I write quite happily there. Unfortunately (or fortunately) I have installed Linux on a dual boot, and having installed some software I have a question... where is it? I installed Apache, PHP and MySQL, and separately phpmyadmin. Apache is up and running, I've seen my phpinfo page, and MySQL is there. MySQL is telling me that there's a database for phpmyadmin, but... erm... I can't seem to locate it. On a Windows machine the directory would be in the www directory and I'd just navigate there: localhost/phpmyadmin/. But on Ubuntu I can't find it in the equivalent place. I've been to /var/www/ and there's my index.html (from Apache) and my phptest.php file, but no phpmyadmin. There is a phpmyadmin in /lib, but that only has 2 files in it. So, having rambled lots, my question is: what do I have to do to be able to navigate to the phpmyadmin index page? I realise this could fall under the description of a server-related question and should be posted elsewhere, but as it's software on a home system some help would be appreciated. Do I need to move some files from somewhere? Help! I really don't want to have to go back to developing on Windows, as I'll be deploying to a LAMP system and my learning curve will be steep.
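
    A hedged pointer (assuming the Ubuntu phpmyadmin package was installed): the application itself lives under /usr/share/phpmyadmin rather than in /var/www, and Apache is meant to pick up the packaged alias from /etc/phpmyadmin/apache.conf. A sketch of how that can be checked and wired up (Apache 2.2 layout assumed):

        # see where the package actually put its files
        dpkg -L phpmyadmin | head

        # make sure Apache includes the packaged configuration, then reload
        sudo ln -s /etc/phpmyadmin/apache.conf /etc/apache2/conf.d/phpmyadmin.conf
        sudo service apache2 reload

        # alternatively, re-run the package's own setup and pick apache2 when prompted
        sudo dpkg-reconfigure phpmyadmin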

  • Independent sound device for headphones

    - by amfcosta
    I have an Asus K52Jc, and in the sound configuration there is no independent sound device for the headphones, so there's no way to have independent volume for speakers and headphones. Is there a way to have independent devices, or is this hardware-specific? lshw reports that I have an "Intel 5 Series/3400 Series Chipset High Definition Audio". aplay -l reports:

        card 0: Intel [HDA Intel], device 0: CONEXANT Analog [CONEXANT Analog]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 0: Intel [HDA Intel], device 3: HDMI 0 [HDMI 0]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
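
    One hedged observation: with a single HDA codec like this there is normally only one analog playback device, but ALSA often still exposes separate "Speaker" and "Headphone" mixer controls on it, which gives per-output volume even without a second device. Whether those controls exist depends on the codec, so the control names below are assumptions to check first:

        # list the mixer controls the codec exposes
        amixer -c 0 scontrols

        # if separate controls are present, they can be set independently, e.g.
        amixer -c 0 sset Speaker 40% unmute
        amixer -c 0 sset Headphone 80% unmute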

  • Make Ubuntu fonts sharper

    - by Tibo
    For the last two months I've been trying to do a complete transition from Windows 7 to Ubuntu. There is something that I really miss: I really like the sharp and thin fonts in Windows (I'm not talking about the font type - 'Arial', 'Consolas', etc.). I think the Ubuntu fonts look better, but after several hours I feel like my eyes are really tired. BTW, I have the same problem with the Apple computer at work too (a MacBook Pro). Is it a theme issue? Can I change it by configuration? Can you recommend a solution?
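
    A hedged suggestion for experimenting (these are standard GNOME/Unity font-rendering settings in 12.04-era releases, but the key names can vary between versions): switching hinting to "full" and antialiasing to "rgba" gives rendering that is closer to Windows-style sharpness.

        # current values
        gsettings get org.gnome.settings-daemon.plugins.xsettings hinting
        gsettings get org.gnome.settings-daemon.plugins.xsettings antialiasing

        # try sharper settings (log out and back in to see the effect everywhere)
        gsettings set org.gnome.settings-daemon.plugins.xsettings hinting 'full'
        gsettings set org.gnome.settings-daemon.plugins.xsettings antialiasing 'rgba'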

  • Can SSL Wildcards have multiple/nested levels of wildcard?

    - by Don Faulkner
    I know that an SSL wildcard certificate (*.example.org) can be used to support many names under the domain (a.example.org, b.example.org, c.example.org). I also know that the * is only good for matching a single level of name; that is, *.example.org will not work on a.b.example.org. What if I used a certificate with the name *.*.example.org? I'd like to build a certificate with the following name configuration:

        CN=example.org
        subjectAltName=DNS:example.org, DNS:*.example.org, DNS:*.*.example.org, DNS:*.*.*.example.org

    I've tried building a few like this as self-signed certificates, but I've not had good results. For example, Chrome tells me "Server's certificate does not match the URL." Is it possible to have nested wildcards in a certificate, or do the popular browsers not support this?
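
    For reproducing the experiment, a minimal self-signed certificate with exactly those names can be generated as below (a sketch; file names and key size are arbitrary). Whether a client then accepts *.*.example.org is a separate matching question: mainstream browsers only match a wildcard against a single left-most label (RFC 6125-style rules), which is consistent with the error Chrome reports, so nested wildcards are unlikely to be honoured even if the certificate itself builds fine.

        # san-test.cnf (illustrative)
        [req]
        prompt = no
        distinguished_name = dn
        x509_extensions = san
        [dn]
        CN = example.org
        [san]
        subjectAltName = DNS:example.org, DNS:*.example.org, DNS:*.*.example.org, DNS:*.*.*.example.org

        # generate a 30-day self-signed test certificate and inspect the names it contains
        openssl req -new -x509 -nodes -newkey rsa:2048 -days 30 \
            -keyout test.key -out test.crt -config san-test.cnf
        openssl x509 -in test.crt -noout -text | grep -A1 'Subject Alternative Name'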

  • What does private cloud DaaS or DBaaS really mean?

    - by llaszews
    Just had a meeting with a Fortune 1000 company regarding their private DBaaS (or DaaS) offering. Interesting to see what DBaaS really means to them:

    1. Automated database provisioning - being able to 'one button' provision databases and database objects. This includes creating the database instance, creating database objects, network configuration, and security provisioning. It is estimated that just being able to provision a new DB table in an automated fashion will reduce the time required to create a new DB table from 60 hours down to 8 hours.
    2. Virtualization and blades - the DBaaS infrastructure is all based upon VMs and blades.
    3. Consolidation of database vendors - moving from over ten database vendors down to three.

  • JSR updates - October 2013

    - by Heather VanCura
    A handful of JSRs have been making progress in the JCP program - Java SE, Java ME and Java EE JSRs. More to come in the next few weeks! Highlights and links to JSR material below.

    - JSR 337, Java SE 8 Release Contents, published an Early Draft Review.
    - JSR 351, Java Identity API, published an Early Draft Review.
    - JSR 360, Connected Limited Device Configuration (CLDC) 8, passed the EC Public Review Ballot with 21 yes votes.
    - JSR 361, Java ME Embedded Profile, passed the EC Public Review Ballot with 20 yes votes.
    - JSR 107, JCACHE-Java Temporary Caching API, published an update to their JSR Community Update Page. You can find schedule information (plans to submit the Proposed Final Draft very soon), Adopt-a-JSR suggestions, and presentation material from JavaOne.

  • W7-pro indexing mydoc on disk partition does not work

    - by Yvan Thery
    I am working on an HP-7100 mini tower running Windows 7 Pro 64-bit. My local hard drive includes C: plus two other disk partitions: all my documents are located on partition L: and all my media files are on partition M:. The indexing process works well on C: and M: but no longer indexes L:, even though all of them are allowed to be indexed and SYSTEM is present on all the drives' security tabs. I have tested rebuilding the index with a new setting that includes a few directories present on drives C, M and L, but L: still does not work. One more thing I can tell you: even after rebuilding the index, I can find some residual directories or files that are outside the test selection, as if unerased entries remained in the indexing database. As I do not know precisely how the indexing process works, it is hard to know what to do. Recently I had a bad time after using a restore procedure... maybe it corrupted the indexing file? If I start indexing the whole L: partition, the system stops at 39 indexed items although many more exist. Could anyone advise on the process to create a new indexing database? Any idea how to get out of this mess? Many thanks for your assistance. Yvan
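
    A hedged sketch of the usual way to force Windows Search to rebuild its database completely (beyond the Indexing Options > Advanced > Rebuild button): stop the search service, delete the index file, and let the service recreate it. The path below is the default location and is an assumption for this particular machine.

        rem run from an elevated command prompt
        net stop wsearch
        del "%ProgramData%\Microsoft\Search\Data\Applications\Windows\Windows.edb"
        net start wsearch
        rem the index is then rebuilt in the background; L: can be re-added in Indexing Options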

  • Where do I set my SPF record?

    - by Misha
    Many years ago we purchased a domain from Yahoo. Now our website is hosted on Amazon EC2. The output of an SPF checking tool (http://www.kitterman.com/getspf2.py) says:

        SPF records are primarily published in DNS as TXT records. The TXT records found for your domain are:
            i=182&m=bizmail-mx2-p9
        SPF records should also be published in DNS as type SPF records. No type SPF records found.
        Checking to see if there is a valid SPF record.
        No valid SPF record found of either type TXT or type SPF.

    Where do I get access to these values? Can somebody speculate where I can find an interface, or a configuration file, to fill in the missing fields?
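
    A hedged sketch of what the fix usually looks like: SPF lives in a TXT record managed wherever the domain's DNS is hosted, which in this case is most likely the Yahoo domain control panel, or Route 53 if the DNS was later moved to AWS (both are assumptions). The record itself is a single TXT value; the IP below is a documentation placeholder standing in for the EC2 instance's elastic IP.

        ; zone-file style TXT record (the mechanism list is an example, not a recommendation)
        example.com.    3600    IN    TXT    "v=spf1 a mx ip4:203.0.113.25 ~all"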

  • Directory access control with Apache: do I need to use a specific .htaccess?

    - by Mirror51
    I have an Apache webserver, and in the Apache configuration I have:

        Alias /backups "/backups"
        <Directory "/backups">
            AllowOverride None
            Options Indexes
            Order allow,deny
            Allow from all
        </Directory>

    I can access files via http://127.0.0.1/backups. The problem is that everyone can access that. I have a web interface, e.g. http://localhost/adminm, that is protected with .htaccess and a password. Now I don't want a separate .htaccess and .htpasswd for /backups, and I don't want a second password prompt when a user clicks on /backups in the web interface. Is there any way to use the same .htaccess and .htpasswd for the backups directory?
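
    A hedged sketch of one way to do it: since the directory is already defined in the main configuration, the same Auth directives (and the same .htpasswd file) used for the admin interface can be placed directly in that <Directory> block, with no extra .htaccess needed. The AuthUserFile path and realm name below are assumptions; browsers generally reuse cached credentials when the realm (AuthName) matches, which is what avoids a second prompt.

        Alias /backups "/backups"
        <Directory "/backups">
            Options Indexes
            AllowOverride None
            Order allow,deny
            Allow from all

            AuthType Basic
            AuthName "Admin area"                 # same realm as the admin interface
            AuthUserFile /etc/apache2/.htpasswd   # reuse the existing password file (path assumed)
            Require valid-user
        </Directory>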

  • SSH Lost Terminal Colors

    - by memecs
    I have two computers with exactly the same configuration (same PS1, etc.). When I ssh from A to B, the terminal correctly displays my PS1 and file-type colors (i.e. blue directories, green executables, etc.), but when I ssh from B to A, PS1 is set to the default and the colors disappear. Furthermore, I created public keys to ssh without a password from A to B and vice versa. It works correctly from A to B but it doesn't work from B to A, even though I repeated the exact same procedure on both PCs:

        # on host A
        ssh-keygen
        ssh-copy-id -i ~/.ssh/id_rsa.pub address.to.host.B

        # on host B
        ssh-keygen
        ssh-copy-id -i ~/.ssh/id_rsa.pub address.to.host.A

    What could be the problem?
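
    Two hedged checks that commonly explain this asymmetry (a sketch only; the file names are the usual bash defaults and may differ on these machines): the PS1/colors customisation may live in a file that only interactive non-login shells read on host A, and key-based login fails silently when the ~/.ssh permissions on A are too open for sshd's liking.

        # on host A: make sure login shells (which ssh starts) also read ~/.bashrc
        echo '[ -f ~/.bashrc ] && . ~/.bashrc' >> ~/.bash_profile

        # on host A: tighten permissions so sshd will honour authorized_keys
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys

        # then watch the handshake from B for clues
        ssh -v address.to.host.A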

  • Rsyslog problem after Ubuntu upgrade from 10.04 to 12.04

    - by Oxymoron
    I was using Ubuntu 10.04 until last week to store the log information from an external device with rsyslog. After upgrading to Ubuntu 12.04, logging over TCP doesn't work anymore (there are just no packets visible, not even with tcpdump, while an old Ubuntu machine still sees the packets). UDP works with the identical configuration on the Ubuntu machine and "use UDP" set on the external device. Are there any changes in rsyslog that could explain this? My rsyslog.conf file looks like this (with more comments):

        $ModLoad imuxsock # provides support for local system logging
        $ModLoad imklog   # provides kernel logging support (previously done by rklogd)
        #$ModLoad immark  # provides --MARK-- message capability
        $KLogPath /proc/kmsg

        # provides UDP syslog reception
        $ModLoad imudp
        $UDPServerRun 514

        # provides TCP syslog reception
        $ModLoad imtcp
        $InputTCPServerRun 514

        ###########################
        #### GLOBAL DIRECTIVES ####
        ###########################
        $ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat

        #
        # Set the default permissions for all log files.
        #
        $FileOwner syslog
        $FileGroup adm
        $FileCreateMode 0640
        $DirCreateMode 0755
        $Umask 0022
        $PrivDropToUser syslog
        $PrivDropToGroup syslog

        if $fromhost-ip startswith '192.168.0.10' then /var/log/caliDevice.log
        & ~

        # local/regular rules, like '*.*' /var/log/syslog.log
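
    A few hedged checks: if tcpdump really sees nothing on the capture interface, the frames are probably not arriving at all (wrong destination IP configured on the device, or capturing on the wrong interface), since tcpdump sees traffic before the firewall drops it; the sketch below covers both that and the listener side. The server IP shown is a placeholder.

        # is rsyslog actually listening on TCP 514 after the upgrade?
        sudo netstat -plnt | grep :514

        # rule out local filtering anyway
        sudo iptables -L -n -v | grep 514

        # send a hand-crafted syslog message over TCP from another machine
        echo '<14>test: hello from the network' | nc -q1 192.168.0.20 514

        # and confirm the config parses cleanly with the new rsyslog version
        rsyslogd -N1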

  • upgrade from 11.10 to 12.04 killed my network connectivity

    - by Daniel
    I have a wired network connection that worked fine in version 11.10. I upgraded to 12.04 and immediately after the upgrade was completed, the OS reported my "cable unplugged". It is not unplugged and it is not defective. I have a D-link DFE-530TXS 10/100 ethernet NIC and I see what seems to be the generic 10050 driver loaded. Is there any way to just flush anything and everything to do with the network configuration and have Ubuntu reset/find everything again? If not...is there any way I can get it to realize that my network cable is not unplugged? (considering it worked mere minutes before). Thanks.
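
    A hedged reset procedure that often helps when an upgrade leaves a working NIC reported as "unplugged" (a sketch; it assumes the card is still detected on the PCI bus and that udev's cached interface naming is the culprit):

        # remove the cached NIC-to-name mapping; it is regenerated at boot
        sudo rm /etc/udev/rules.d/70-persistent-net.rules
        sudo reboot

        # after the reboot, check which driver claimed the DFE-530TXS and what the link state is
        lspci -nnk | grep -iA3 ethernet
        cat /sys/class/net/eth0/carrier      # interface name is an assumption
        sudo ethtool eth0 | grep -i link     # requires the ethtool package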

  • .htaccess redirect to subfolder in different domain, maintaining old domain in the URL

    - by Naoise Golden
    Redirects have been widely discussed and most problems solved, so I am sorry for opening yet another post about this, but none of the snippets I am trying work. I have a WordPress site hosted at http://mydomain.com/clientsdomain.com/wordpress. I would like to temporarily redirect http://clientsdomain.com/ to the above URL while maintaining the clientsdomain.com domain in the URL. So, for example, http://clientsdomain.com/some/page would be pointing to http://mydomain.com/clientsdomain.com/wordpress/some/page. Is this even possible with .htaccess? Maybe some configuration or plugin option within WordPress?
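
    A hedged sketch, assuming clientsdomain.com's DNS already points (or can be pointed) at the same server that hosts mydomain.com; without that, no .htaccess rule can keep the client's domain in the URL. An internal rewrite in the document root of the host answering for clientsdomain.com could then look like this:

        # .htaccess in the document root (paths follow the question; adjust as needed)
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?clientsdomain\.com$ [NC]
        RewriteCond %{REQUEST_URI} !^/clientsdomain\.com/wordpress/
        RewriteRule ^(.*)$ /clientsdomain.com/wordpress/$1 [L]

    WordPress itself would also need WP_HOME and WP_SITEURL set to http://clientsdomain.com in wp-config.php so that the links it generates stay on the client's domain.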

  • choppy streaming audio

    - by user88503
    I could use some help troubleshooting choppy streaming audio. The problem is jerky playback, regardless of whether it is audio only or video with audio. Both Chromium and Firefox have the problem; however, files played directly on the machine with Rhythmbox sound just fine. I'm running 12.04 LTS on a Core 2 Duo T9300. Most of the audio problems others ask about seem to be hardware related, so the following information might be relevant.

        sudo lshw -c multimedia
        *-multimedia
             description: Audio device
             product: 82801H (ICH8 Family) HD Audio Controller
             vendor: Intel Corporation
             physical id: 1b
             bus info: pci@0000:00:1b.0
             version: 03
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress bus_master cap_list
             configuration: driver=snd_hda_intel latency=0
             resources: irq:48 memory:f8400000-f8403fff
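
    Since local playback is fine, the streaming path (browser, Flash, PulseAudio) is the likely suspect. One frequently suggested and easily reversible experiment is disabling PulseAudio's timer-based scheduling, which is known to cause glitchy output on some snd_hda_intel systems (this is an assumption about the cause, not a diagnosis):

        # edit /etc/pulse/default.pa and change
        #     load-module module-udev-detect
        # to
        #     load-module module-udev-detect tsched=0

        # then restart PulseAudio for the current session
        pulseaudio -k && pulseaudio --start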

  • DHCP won't start / subnetting

    - by user114371
    I recently changed the IP address on an Ubuntu 12.04 server I have in my lab, which is running isc-dhcp-server. After doing so and modifying the dhcpd.conf file, the DHCP service would not start. I basically used the same configuration, except that I modified everything to use /25 scopes rather than /24. When I try to start/restart the service, I see the following:

        MY@ubuntuserver:~$ sudo service isc-dhcp-server restart
        stop: Unknown instance:
        isc-dhcp-server start/running, process 20918

    It looks like it starts, but it isn't actually running, and Webmin states that the DHCP service is not running. So my question is: does isc-dhcp-server support subnetted (CIDR-style) scopes, or must they be class A/B/C scopes (doesn't seem likely)? I've double-checked the interface reference (this is a VM with only one defined eth0 interface) and everything else I can think of.
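
    isc-dhcp-server has no problem with non-classful netmasks, so the failure is more likely a configuration error that only shows up in the log. A hedged sketch of how this is usually narrowed down, plus what a /25 scope can look like (all addresses below are placeholders):

        # syntax-check the config and see the real error instead of upstart's silence
        sudo dhcpd -t -cf /etc/dhcp/dhcpd.conf
        grep dhcpd /var/log/syslog | tail

        # example of a /25 scope; the declared subnet must match a subnet the host's interface is on
        subnet 192.168.10.0 netmask 255.255.255.128 {
            range 192.168.10.10 192.168.10.100;
            option routers 192.168.10.1;
            option subnet-mask 255.255.255.128;
        }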

  • GlassFish 4.0 Virtualization Progress - VirtualBox

    - by alexismp
    Wouldn't it be nice if you could spawn GlassFish instances as VirtualBox virtual machines? Well now with early versions of GlassFish 4.0 you can! This page on the GlassFish Wiki documents the steps to get this to work. It walks you through the various VirtualBox (network and services) and GlassFish configuration steps including the creation of VDI templates (typically JeOS images) to finally create a virtual machine on the fly, as part of the typical GlassFish deployment process. The more general virtualization support in GlassFish is discussed in this other Wiki page. Earlier demonstrations of GlassFish.next prototypes or early milestone builds showed support for KVM, "laptop mode" and OVM as well as community involvement from Serli, speaking of which this slide-deck is a good summary of what we're trying to achieve in the GlassFish 4.0 IMS (IaaS Management Service).
