Search Results

Search found 49518 results on 1981 pages for 'configuration files'.


  • Getting error while running apt-get update

    - by pradeepchhetri
    I am getting the following error while running apt-get update on all of my servers:

        W: A error occurred during the signature verification. The repository is not updated and the previous index files will be used. Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 8B48AA6247928553
        W: Some index files failed to download, they have been ignored, or old ones used instead.

    The available solution is to log in to each of the servers and run the following commands to import the GPG key for that repo:

        sudo gpg --keyserver hkp://subkeys.pgp.net --recv-keys 8B48AA6247928553
        sudo gpg --export --armor 8B48AA6247928553 | sudo apt-key add -

    But this requires logging in to every individual server and running the two commands. I am looking for a way to fix this by working on the apt repo server instead. Is there any way to do that?
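
    A minimal sketch of one way to avoid logging in to every box by hand: export the key once, then push the import over SSH in a loop. The hostnames below are placeholders; this assumes SSH access and sudo rights on each server.

        # On one machine, fetch and export the missing key (key ID taken from the warning):
        sudo gpg --keyserver hkp://subkeys.pgp.net --recv-keys 8B48AA6247928553
        sudo gpg --export --armor 8B48AA6247928553 > repo.key

        # Push and import it on every server:
        for host in server1 server2 server3; do
            scp repo.key "$host":/tmp/repo.key
            ssh "$host" 'sudo apt-key add /tmp/repo.key && sudo apt-get update'
        done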

    Read the article

  • How can I remove OLD history from Google Chrome?

    - by Norman Ramsey
    I'm working on a laptop with a modest hard drive, and 500MB is taken up with Google Chrome "History Index" and "Thumbnails" files. Some of these files are a year old. Chrome offers me the option to remove recent history, but I want the opposite: I want to remove old history. (Ideally I would remove the least recently used history information, but I don't expect to be able to do that.) Anyone have any ideas? I'm running the standard Debian google-chrome-beta package.
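
    One possible approach, assuming the standard Debian profile location and that Chrome is closed first: delete the History Index and Thumbnails files older than a chosen cutoff. The path and the age are assumptions to adjust.

        # Chrome must not be running while these files are removed.
        find ~/.config/google-chrome/Default \
             \( -name 'History Index*' -o -name 'Thumbnails*' \) \
             -mtime +180 -print -delete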

    Read the article

  • IPv6 seems to be enabled - How do I configure it without interfering with IPv4?

    - by Mister IT Guru
    I noticed that some of my CentOS boxes have IPv6 enabled, and they seem to have addresses. I have no problem with this, but I would like to get a handle on it, and even connect to them over IPv6. This would really help if DHCP has a hiccup for any reason. But I'm a bit lost as to where the configuration lives on my CentOS box. (I am also on Google researching this, but I like Server Fault! :) ) I am hoping to be able to log into these systems via the VPN, because every now and then that DHCP device has a bad morning and needs to be restarted. (I'm also looking into that issue, but someone else handles it; management separation gone mad!) It's a remote client, so it would be a lot easier for me to connect to these systems, which seem to self-configure, and use them as a pivot via SSH tunnels to reach and manage other remote devices while our main route is being fixed. I guess my questions are: How can I configure IPv6 without interfering with IPv4, and on CentOS, can I influence this auto-configuration I seem to be seeing?
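
    A minimal static-IPv6 sketch for CentOS 5/6, assuming eth0 and example addresses; the interface file and the autoconf switch are the two places that matter, and the IPv4 settings are left untouched.

        # /etc/sysconfig/network
        NETWORKING_IPV6=yes

        # /etc/sysconfig/network-scripts/ifcfg-eth0  (IPv4 lines stay as they are)
        IPV6INIT=yes
        IPV6_AUTOCONF=no            # set to yes to keep router-advertised addresses
        IPV6ADDR=2001:db8::10/64    # example address
        IPV6_DEFAULTGW=2001:db8::1  # example gateway

        # then apply the change:
        service network restart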

    Read the article

  • Ad Agency storage/file server +backup needed (NAS or something else?)

    - by Rob
    Looking for a "this is all you need" recommendation. We're a small ad agency with both mac & pcs that access and share files from a 3 yr old Windows 2000 box (no server software). We currently have 1TB on the "server" and back it up to 2 different Seagate Free Agent Pro 1TB external drives. But we're low on space and are looking for something that's bigger, that we can still access from Mac & PC, EASY backup system, secure from viruses, firewall enabled. Not sure if a NAS will work or if we should have a real server. We don't really get on that box except to restore files, or run Norton on it. I hope I've provided enough for a general recommendation. Thanks. Rob Phx

    Read the article

  • Cobbler 2.2.2 problems

    - by Peter
    I have set up a dedicated LAN for Cobbler tests. My setup is: Cobbler server: openSUSE 12.3, cobbler 2.2.2 (from the openSUSE repos). Imported distros: CentOS 6.5, Red Hat 6.5, Red Hat 7.0, openSUSE 13.1. Target machine: VMs in a Windows 7 VirtualBox. System provisioning works OK, but I have some problems. The first one is that Cobbler does not honor the "pxe_just_once: 1" setting: when the setup of the target OS is finished, after the reboot the target system continues to PXE boot! The second problem is that the target server is not correctly configured. See my setup:

        cobbler system report --name=test
        Name                           : test
        TFTP Boot Files                : {}
        Comment                        :
        Fetchable Files                : {}
        Gateway                        : 192.168.0.1
        Hostname                       : testcob1.example.com
        Image                          :
        IPv6 Autoconfiguration         : False
        IPv6 Default Device            :
        Kernel Options                 : {}
        Kernel Options (Post Install)  : {}
        Kickstart                      : <<inherit>>
        Kickstart Metadata             : {}
        LDAP Enabled                   : False
        LDAP Management Type           : authconfig
        Management Classes             : []
        Management Parameters          : <<inherit>>
        Monit Enabled                  : False
        Name Servers                   : ['192.168.0.1', '8.8.8.8']
        Name Servers Search Path       : []
        Netboot Enabled                : False
        Owners                         : ['admin']
        Power Management Address       :
        Power ID                       :
        Power Password                 :
        Power Management Type          : ipmitool
        Power Username                 :
        Profile                        : RHEL-6.5-x86_64
        Proxy                          : <<inherit>>
        Red Hat Management Key         : <<inherit>>
        Red Hat Management Server      : <<inherit>>
        Repos Enabled                  : False
        Server Override                : <<inherit>>
        Status                         : testing
        Template Files                 : {}
        Virt Auto Boot                 : <<inherit>>
        Virt CPUs                      : <<inherit>>
        Virt Disk Driver Type          : <<inherit>>
        Virt File Size(GB)             : <<inherit>>
        Virt Path                      : <<inherit>>
        Virt RAM (MB)                  : <<inherit>>
        Virt Type                      : <<inherit>>
        Interface =====                : eth0
        Bonding Opts                   :
        Bridge Opts                    :
        DHCP Tag                       :
        DNS Name                       :
        Master Interface               :
        Interface Type                 :
        IP Address                     : 192.168.0.200
        IPv6 Address                   :
        IPv6 Default Gateway           :
        IPv6 MTU                       :
        IPv6 Secondaries               : []
        IPv6 Static Routes             : []
        MAC Address                    :
        Management Interface           : True
        MTU                            :
        Subnet Mask                    : 255.255.255.0
        Static                         : True
        Static Routes                  : []
        Virt Bridge                    :

    So, although I have set up the hostname and the network interface of the target system, after the installation the hostname is set to localhost.localdomain and eth0 is configured for DHCP, not static! How can I find the problem and fix it? Note that I have synced and restarted Cobbler a couple of times, but the problems persist.
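
    Two things worth checking, sketched below with assumed paths: that pxe_just_once is really active in the settings file Cobbler is reading, and that the kickstart template the profile uses actually consumes the per-system network data (for example via the network_config snippet); without that, Anaconda falls back to DHCP and localhost.localdomain regardless of what the system record says.

        # Confirm the running settings and re-sync after any change:
        grep pxe_just_once /etc/cobbler/settings
        cobbler sync          # restart cobblerd afterwards if settings were edited

        # Check which kickstart the profile uses and that it applies the network data:
        cobbler profile report --name=RHEL-6.5-x86_64 | grep -i kickstart
        grep -n "network_config" /var/lib/cobbler/kickstarts/*.ks
        # the kickstart should contain something like:
        #   $SNIPPET('network_config')   or an explicit   network --bootproto=static ...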

    Read the article

  • PHP Sessions suddenly not working

    - by styrken
    Out of nowhere my PHP sessions stopped working. The server has been running fine for several months. I'm running Ubuntu 11.10 (GNU/Linux 3.0.0-14-server x86_64) with nginx/1.0.11 and PHP 5.3.19-1~dotdeb.0. Session info copied from phpinfo():

        Session Support: enabled
        Registered save handlers: files user memcached
        Registered serializer handlers: php php_binary wddx

        Directive                          Local Value   Master Value
        session.auto_start                 Off           Off
        session.bug_compat_42              Off           Off
        session.bug_compat_warn            Off           Off
        session.cache_expire               180           180
        session.cache_limiter              nocache       nocache
        session.cookie_domain              no value      no value
        session.cookie_httponly            Off           Off
        session.cookie_lifetime            0             0
        session.cookie_path                /             /
        session.cookie_secure              Off           Off
        session.entropy_file               no value      no value
        session.entropy_length             0             0
        session.gc_divisor                 1000          1000
        session.gc_maxlifetime             1440          1440
        session.gc_probability             0             0
        session.hash_bits_per_character    5             5
        session.hash_function              0             0
        session.name                       PHPSESSID     PHPSESSID
        session.referer_check              no value      no value
        session.save_handler               files         files
        session.save_path                  /tmp          /tmp
        session.serialize_handler          php           php
        session.use_cookies                On            On
        session.use_only_cookies           On            On
        session.use_trans_sid              0             0

    I have set up the following PHP script to test with:

        error_reporting(E_ALL);
        ini_set('display_errors', true);
        error_log($_SERVER['REMOTE_ADDR'] . ' visited test page');

        if (session_start())
            echo "Session started <br />";
        else
            echo "Session failed <br />";

        echo '<a href="?', time(), '">refresh</a>', "\n";
        echo '<pre>';
        echo 'session id: ', session_id(), "\n";

        $sessionfile = ini_get('session.save_path') . '/' . 'sess_' . session_id();
        echo 'session file: ', $sessionfile, ' ';
        if (file_exists($sessionfile)) {
            echo 'size: ', filesize($sessionfile), "\n";
            echo '# ', file_get_contents($sessionfile), ' #';
        } else {
            echo ' does not exist';
        }
        echo PHP_EOL;

        $_SESSION['number'] = (int) @$_SESSION['number'] + 1;
        var_dump($_SESSION);
        echo "</pre>\n";
        session_write_close();
        echo 'done.';

    It tells me that the session file exists, but my session id changes on each refresh. What is going wrong? There is no output to any error logs at all. Please help!
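
    A session id that changes on every refresh usually means the PHPSESSID cookie never comes back from the client, or the save path is not writable. One quick check, assuming the test script is reachable at the placeholder URL below: replay two requests with curl's cookie jar and see whether the id stays stable.

        # First request: capture whatever cookies the server sets.
        curl -s -c /tmp/jar -o /dev/null -D - http://example.com/sessiontest.php | grep -i set-cookie

        # Second request: send them back; the reported session id should now match.
        curl -s -b /tmp/jar http://example.com/sessiontest.php | grep 'session id'

        # Also confirm the PHP worker user can write to the save path:
        sudo -u www-data touch /tmp/sess_writetest && ls -l /tmp/sess_writetest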

    Read the article

  • Is Samba Server what I'm looking for, and if so, what do I need? (currently on DD-WRT Micro)

    - by Anthony
    I am really confused as to what Samba actually does and how it works. Here's what I'm hoping it does: I set up a Samba server on my LAN, and everyone will be able to see each other's shared files and swap them. But some of the documentation makes it sound like it will just allow Mac/Linux computers to see Windows computers. Other bits of the documentation make it sound more like a local server, where a Linux machine would install Samba and would then see everyone and be visible to everyone, but that wouldn't change whether anybody else can see each other. Still other things I've read make it seem more like a file server, where everyone sees each other but file transfers are not peer-to-peer and instead need a host disk to act as a go-between for the files. So, assuming I'm even in the right ballpark of what Samba does in terms of my goal of total cross-visibility on the network, I am left needing to know what I'd need to set up the server, and whether it can be done and is worth it. DD-WRT's article on Samba is a bit ambiguous: one second it sounds as if I can run the server on Micro as long as it's set up on a USB drive, but then it also sounds like Micro can't run it at all, etc. If I can run it from a USB-connected drive, I still need to know if the files are actually stored on that drive. The DD-WRT article mentions: "You can run a Samba server on your main computer and run a client on your router (thus gaining writable storage for the router) or you can use Samba to share a drive connected (typically by USB) to the router among all the computers connected to your network." That one part "to share a drive...among all the computers" makes it sound like the only benefit I get from Samba is a shared drive that any OS on the network can see, but they still won't see each other. But I'm very hopeful I'm misreading this. If the computers can see each other but still need the disk, how much space is generally a good idea? I'm basing this on the idea that the drive is a temporary store point. Obviously I'd have to get a drive big enough to store everything people wanted to share if the drive is a full-on file server. If I do have this all wrong, is there any software that achieves what I have in mind? Something that connects to the main router to bridge all clients?

    Read the article

  • Backing up data in an encrypted way

    - by Eli Bendersky
    I have the following use case:

    1. There's some data from my PC I want to periodically back up online.
    2. I own some hosting, so I want to use that for the backups; I don't want to pay for another backup service.
    3. I want to encrypt my data locally prior to moving it to the server.

    I have no problem writing scripts to automate the process (say, periodically generate the backup and upload it by FTP to my server), but my main question is about step 3, the encryption: what is the recommended way to encrypt my files (say, collected into a .ZIP) prior to uploading them to the server? P.S. TrueCrypt seems popular, but it's not quite what I'm looking for, since I don't want the files to be constantly encrypted here on my PC.
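
    One common pattern that fits step 3, sketched with placeholder paths: archive the data, pipe it through GnuPG symmetric encryption, and upload only the encrypted file; nothing stays encrypted on the PC itself.

        # Archive and encrypt in one pass (prompts for a passphrase, or use --passphrase-file):
        tar czf - ~/data | gpg --symmetric --cipher-algo AES256 -o backup-$(date +%F).tar.gz.gpg

        # Restore later:
        gpg -d backup-2014-01-01.tar.gz.gpg | tar xzf -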

    Read the article

  • Run a command from Windows 7 Start Menu

    - by Abhijeet Patel
    I'm trying to run commands such as "cmd.exe", "appwiz.cpl", etc. by typing them in the Search box of the Start Menu in Windows 7 (x86). I'm able to do this just fine in Vista. After typing in "cmd" I notice a link to "Programs" in the Start Menu, so it seems that "cmd" is being recognized, but when I click on the "Programs" link that is shown, I get the following message: "These files can't be opened. Your internet security settings prevented one or more files from being opened" P.S. I'm not looking to enable the "Run" command in the Start Menu. Any help would be much appreciated.

    Read the article

  • How to enable CDR on AsteriskNow 1.5

    - by Michal Niklas
    I have upgraded the PBX to Asterisk 1.6.2.7 and now CDR files are not created. It looks like such logging is disabled:

        Connected to Asterisk 1.6.2.7 currently running on pbx2 (pid = 5824)
        Verbosity is at least 3
        pbx2*CLI> cdr show status
        pbx2*CLI>
        Call Detail Record (CDR) settings
        ----------------------------------
        Logging:  Disabled
        Mode:     Simple

    Asterisk shows that the CDR modules are loaded:

        pbx2*CLI> module show like cd
        Module             Description                                Use Count
        cdr_manager.so     Asterisk Manager Interface CDR Backend     0
        cdr_csv.so         Comma Separated Values CDR Backend         0
        app_cdr.so         Tell Asterisk to not maintain a CDR for    0
        app_forkcdr.so     Fork The CDR into 2 separate entities      0
        func_cdr.so        Call Detail Record (CDR) dialplan functi   0
        cdr_custom.so      Customizable Comma Separated Values CDR    0
        6 modules loaded

    How do I enable creating CDR CSV files?
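
    CDR logging is switched on in cdr.conf rather than in the modules; a minimal sketch, assuming the stock /etc/asterisk layout and the default Master.csv backend:

        # /etc/asterisk/cdr.conf should contain:
        #   [general]
        #   enable=yes

        # Reload and verify from the CLI:
        asterisk -rx "module reload cdr_csv.so"
        asterisk -rx "cdr show status"
        # CSV records should then appear under /var/log/asterisk/cdr-csv/Master.csv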

    Read the article

  • IIS permissions issue pointing docroot to Samba share

    - by lalalalalalalambda
    I have an IIS project which is stored on a Samba share, network-mounted with the following line: X: \\my-samba-server\dev /user:freddie. Connectivity is fine; I can read/write files from X:. In IIS, I'm trying to set \\my-samba-server\dev\folder\to\my\files as the Physical path, which results in the following 500.19 error: "Config Error | Cannot read configuration file due to insufficient permissions". By default it tries to use pass-through authentication. If I try to set this to connect as the specific user freddie, I receive: "The specified user does not exist". What is the correct way to connect to a path which has been set up as described above? (The Samba man pages indicate version 3.6 is on the Debian host.)

    Read the article

  • How to set the IP address in a customized OpenWRT compilation

    - by Berdus
    I have been struggling today with customizing OpenWRT. I check out the stable branch using SVN, run "make menuconfig" to customize the image, "make" it, and run it on a router. Almost all my modifications work, except for the (seemingly trivial) task of changing the default 192.168.1.1 address. I tried numerous files (scripts as well as config files) but I can't seem to change it (I can change it for a brief moment after boot using the "preinit" file, but after a few seconds it reverts to the default). I suspect I should be setting it in the /etc/network file, but modifications there seem to be overwritten during boot. Maybe it has something to do with the br-lan interface? Does anybody have some thoughts on the subject? Thanks!
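
    For an image built from source, the usual way to bake in a different LAN address is a files/ overlay in the buildroot: anything under files/ is copied into the image verbatim, so a prepared /etc/config/network survives first boot. A sketch, assuming the buildroot lives in ~/openwrt, the LAN device is eth0, and 192.168.2.1 is the wanted address:

        mkdir -p ~/openwrt/files/etc/config
        cat > ~/openwrt/files/etc/config/network <<'EOF'
        config interface 'lan'
                option ifname  'eth0'
                option type    'bridge'
                option proto   'static'
                option ipaddr  '192.168.2.1'
                option netmask '255.255.255.0'
        EOF
        # rebuild the image; the file ends up in the new firmware as-is
        make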

    Read the article

  • machine crash when connecting to external harddisk

    - by Gnot
    I recently had a problem with my laptop. When I booted up the machine, I would get a SMART failure error message, and when I pressed F1 to continue, it would take a very long time to boot and then come back to the same error message again. Thinking that my hard disk was dying, I bought a new hard disk and installed it in my laptop, so now my laptop is alright. However, I need to recover data from that old hard disk, so I bought an external hard disk case, placed the old hard disk in the case, and connected it to my laptop with USB. The first few times I connected it, I could see the files from the old hard disk and managed to copy some files over, although the transfer took extremely long. But now, whenever I connect the old hard disk, after a few minutes my laptop crashes and reboots. Do you think my old hard disk is dead beyond repair, or can you offer some help here? Any assistance would be appreciated!
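
    Before copying anything else by hand, it may be safer to image the failing disk once with GNU ddrescue and recover files from the image; a sketch, assuming the old disk shows up as /dev/sdb and there is enough free space for the image file:

        sudo apt-get install gddrescue                            # package name on Debian/Ubuntu
        sudo ddrescue -n /dev/sdb old-disk.img old-disk.log       # first pass, skip bad areas
        sudo ddrescue -d -r3 /dev/sdb old-disk.img old-disk.log   # retry only the bad areas
        # then mount the image (or its partitions) read-only and copy the files out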

    Read the article

  • lamp server permissions on development server

    - by user101289
    I run a LAMP server on an Ubuntu laptop I use only for development. I am not greatly concerned with security, since the server is never accessible outside the local network and it's turned off when I'm not using it. My question is: what is the simplest and 'best' way to set permissions/users/groups so that when my own user creates, edits or writes files in the webroot, I won't need to go through and chmod/chown everything back to the www-data user? Should I add myself to the www-data group? Or chown the webroot to www-data:myself? Or is there a best practice for this situation so I don't have to keep re-setting the ownership of these files? Thanks
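
    A common arrangement for a dev box, sketched with /var/www as an assumed webroot: put yourself in the www-data group, hand the tree to that group, and set the setgid bit on directories so new files inherit it; then neither side needs chown again.

        sudo usermod -aG www-data "$USER"                  # log out and back in for this to apply
        sudo chown -R www-data:www-data /var/www
        sudo chmod -R g+rw /var/www
        sudo find /var/www -type d -exec chmod g+s {} +    # new files keep the www-data group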

    Read the article

  • How do I log file system read/writes by filename in Linux?

    - by Casey
    I'm looking for a simple method that will log file system operations. It should display the name of the file being accessed or modified. I'm familiar with powertop, and it appears to work to an extent, in so much as it shows the user files that were written to. Are there any other utilities that support this feature? Some of my findings:

        powertop: best for write-access logging, but more focused on CPU activity
        iotop:    shows real-time disk access by process, but not the file name
        lsof:     shows the open files per process, but not real-time file access
        iostat:   shows the real-time I/O performance of disks/arrays, but does not indicate file or process
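
    Two candidates not in the list above, sketched with placeholder paths: inotifywait (from inotify-tools) prints each access or modification with the file name in real time, and auditd can log opens and writes for a whole subtree.

        # Live view of file events under a directory tree:
        inotifywait -m -r --timefmt '%H:%M:%S' --format '%T %w%f %e' /path/to/watch

        # Or with the audit framework: log every write/attribute change below /srv/data
        sudo auditctl -w /srv/data -p wa -k datawatch
        sudo ausearch -k datawatch | less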

    Read the article

  • How to exclude a sub-folder from HTaccess RewriteRule

    - by amb9800
    I have WordPress installed in my root directory, for which a RewriteRule is in place. I need to password-protect a subfolder ("blue"), so I set up the .htaccess in that folder accordingly. The problem is that the root RewriteRule is applied to "blue", and thus I get a 404 from the main WordPress site (instead of the password dialog for the subfolder). Here's the root .htaccess:

        RewriteEngine on
        <Files 403.shtml>
        order allow,deny
        allow from all
        </Files>
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>

    I tried inserting this as the second line, to no avail:

        RewriteRule ^(blue)($|/) - [L]

    I also tried inserting this before the index.php RewriteRule:

        RewriteCond %{REQUEST_URI} !^/blue/

    That didn't work either. I also inserted this into the subfolder's .htaccess, which didn't work either:

        <IfModule mod_rewrite.c>
        RewriteEngine off
        </IfModule>

    Any ideas?
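
    One frequently reported cause with a password-protected folder under WordPress rewrite rules is not the rewrite itself but the 401 response: Apache has no ErrorDocument for it, the error request gets routed through index.php, and WordPress answers 404. A hedged sketch, assuming the subfolder's .htaccess lives at blue/.htaccess under the docroot; adding explicit error documents there is often enough, alongside the exclusion rule placed before the WordPress block.

        # Append to the subfolder's .htaccess (path is an assumption):
        cat >> blue/.htaccess <<'EOF'
        ErrorDocument 401 default
        ErrorDocument 403 default
        EOF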

    Read the article

  • How to forbid programs from accessing a folder

    - by prashanth
    I use Firefox as my default browser. IE and Chrome are used as secondary browsers. Apart from these, I need an alternate browser. I tried Opera, but when installing it, Opera automatically imported all my bookmarks and other profile data from the Firefox folder. I really had a hard time safely deleting the copies Opera made without harming the original files that Firefox uses. I don't want this to happen again. Here's the situation: now I want to install another browser or reinstall Opera. How do I make the Firefox profile folder accessible only to firefox.exe and me (the administrator)? Can this be done via the Security tab under folder properties?

    Read the article

  • Deploying Office 2013 via GPO

    - by NickC
    Looking at potential ways to deploy Office 2013 via GPO. The first and most obvious way is to run a startup script which calls the Office 2013 setup.exe. The problem here is what happens after it is installed: will that startup script keep re-installing the product every time the machine boots? Another potential way is to install each Office component separately using the multitude of .msi files which are present; would that work and provide the same thing as a full install of Office? There are actually twenty-three separate .msi files. What about officemui.msi? Is that a wrapper which contains calls to all of the other Office components?

    Read the article

  • How do I configure VMware View location-based printing to use Active Directory Groups?

    - by Jason Pearce
    I am attempting to configure VMware View 4.5's Location-Based Printing, which leverages an included OEM version of ThinPrint, to assign printers to Active Directory groups. The location-based printing feature maps printers that are physically near client systems to VMware View desktops. I am using the Active Directory group policy setting AutoConnect Location-based Printing for VMware View, which is located in the Microsoft Group Policy Object Editor in the Software Settings folder under Computer Configuration. The AutoConnect Location-based Printing for VMware View setting appears to be just a name translation table. It permits me to assign a specific printer or printers to an IP Range, Client Name, MAC Address, User, or User Group. I'm attempting to assign printers to Active Directory user groups. I have created a new Active Directory group for each printer that I intend to use in VMware View desktop pools. I will then assign Active Directory users to the Active Directory groups that represent each network printer.

    Example: doej is a member of the PTR-FLOOR2-NORTH-ROOM255 Active Directory group. Using AutoConnect, I assigned the group to receive a network printer by adding PTR-FLOOR2-NORTH-ROOM255 in the User/Group column.

    Problem: When doej logs in to his VDI session, the printer is not present. However, if I use a wildcard "*" in the User/Group column instead of the specific PTR-FLOOR2-NORTH-ROOM255 Active Directory group, the printer is present and functions as designed.

    Alternatives: I have tried assigning printers to Active Directory groups within AutoConnect in the following ways, all unsuccessful: PTR-FLOOR2-NORTH-ROOM255; domainexample\PTR-FLOOR2-NORTH-ROOM255; domainexample.local\PTR-FLOOR2-NORTH-ROOM255.

    Confirmation: The information used to map the printer to the VMware View desktop is stored in a registry entry on the View desktop in HKEY_LOCAL_MACHINE\SOFTWARE\Policies\thinprint\tpautoconnect. For each of these examples, I have reviewed the registry entry and can confirm that the desktop is receiving the information from the AutoConnect translation table.

    Summary: Can anyone provide an example of how to configure VMware View 4.5's Location-Based Printing so that I may assign network printers to Active Directory groups via the included AutoConnect tool? I would welcome a clear example of a working configuration. Thank you.

    Read the article

  • REST-based file server

    - by Chris Wenham
    I need to be able to PUT files and GET them later using nothing but HTTP, so I went searching for something that might match the terms "REST file server" or "HTTP file server" or "REST drop-box", etc. Unfortunately, these terms bring up the wrong kind of results on Google. What I want is the equivalent of an SMB fileshare over HTTP. Some ideal features: Can PUT a file of any type at http://servername/service/any/path/I/want/document.pdf Anyone with access can GET that file at the URL I PUT it at Supports AV scanning on any new file that has been PUT Supports DELETE of existing resources (files) Our shop runs Windows, but I'd be interested to know about Unix software that can do this kind of thing, too. It's to be used in an IT department for private users only. It won't be on a public-facing IP address. Does anything like this exist?
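
    This is essentially what WebDAV provides: PUT, GET and DELETE against arbitrary paths over plain HTTP, served by IIS (the WebDAV Publishing role) or Apache (mod_dav) on either platform. A quick sketch of what client access would look like, with the server name taken from the question as a placeholder; AV scanning would have to be layered on separately (for example an on-access scanner watching the DAV root).

        # Upload, fetch and delete a file against a WebDAV-enabled share:
        curl -u alice -T document.pdf http://servername/service/any/path/I/want/document.pdf
        curl -u alice -O              http://servername/service/any/path/I/want/document.pdf
        curl -u alice -X DELETE       http://servername/service/any/path/I/want/document.pdf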

    Read the article

  • How to uninstall AMD Display Driver from other install of Windows

    - by nar
    My Windows 7 had a BSOD caused by an ATI driver file, and restarting only gives me some weird coloured lines at the top of the screen. Starting in safe mode causes the boot-up to freeze at AtiPcie.sys and overlays the same coloured lines on the safe mode start-up display. It seems to be a problem with the display driver, but I can't get into the OS at all to uninstall it. Luckily I have an old install of Windows 7 from when I moved to an SSD still on my HDD, and that runs just fine, so I'm pretty sure it's a software problem (maybe the SSD corrupted a driver file or something). So does anyone know how I can uninstall the driver files from another Windows 7 install on the same PC? EDIT: What I mean is, I'm running on the broken PC right now but using a different HDD's Windows install. I have full access to the OS drive of the Windows install that won't boot, so I can delete any files, load the registry, etc.

    Read the article

  • Server configuration for our website [duplicate]

    - by Varun Varunesh
    This question already has an answer here: Can you help me with my capacity planning? 2 answers
    We are a start-up, and 6 months back we launched the beta version of our website. Now we are in the phase of building our website and web services for the final product. The website will be based on PHP, Python, and a MySQL database, with a WAMP server. Right now, in the beta version, we are using an Azure VM for hosting, with a configuration of 786MB RAM and a shared CPU. We have an average of 200 users coming to our website daily. Now we are trying to increase the number of daily users from 200 to 1,500, and I am thinking our server should be able to handle at least 100 concurrent users. We have also developed web services for our mobile apps, which can also increase the load on the server. So here are the questions that bring me here: I am pretty confused about whether to go with shared hosting or VM-based hosting. If a VM, then what configuration will be best for our requirements (as discussed above)? Currently our VM is a Windows-based server and it's very simple to manage, so other than the cost factor, why should I go for a Linux-based server? What other factors should I keep in mind while choosing the server for our requirements?

    Read the article

  • rkhunter warns of inode changes but no file modification date changes

    - by Nicholas Tolley Cottrell
    I have several systems running CentOS 6 with rkhunter installed. I have a daily cron job running rkhunter and reporting back via email. I very often get reports like:

        ---------------------- Start Rootkit Hunter Scan ----------------------
        Warning: The file properties have changed:
                 File: /sbin/fsck
                 Current inode: 6029384    Stored inode: 6029326
        Warning: The file properties have changed:
                 File: /sbin/ip
                 Current inode: 6029506    Stored inode: 6029343
        Warning: The file properties have changed:
                 File: /sbin/nologin
                 Current inode: 6029443    Stored inode: 6029531
        Warning: The file properties have changed:
                 File: /bin/dmesg
                 Current inode: 13369362   Stored inode: 13369366

    From what I understand, rkhunter will usually report a changed hash and/or modification date on the scanned files too, so this leads me to think that there is no real change. My question: is there some other activity on the machine that could make the inode change (running ext4), or is this really yum making regular (about once a week) changes to these files as part of normal security updates?
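
    A way to tell the two cases apart, sketched below: ask RPM whether the files still match the packages that own them; if they verify cleanly and yum history shows a recent update of those packages, the inode change is just the package manager replacing the files, and the rkhunter baseline can be refreshed.

        rpm -qf /sbin/fsck /sbin/ip /bin/dmesg     # which packages own the files
        rpm -V util-linux-ng iproute               # verify file contents against the RPM database
        sudo yum history list                      # recent transactions that may have replaced them
        sudo rkhunter --propupd                    # re-baseline once the changes are accounted for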

    Read the article

  • TLS_REQCERT and PHP with LDAPS

    - by John
    Problem: Secure LDAP queries via the command line and PHP to an AD domain controller with a self-signed certificate.
    Background: I am working on a project where I need to enable LDAP look-ups from a PHP web application to an MS AD domain controller that is using a self-signed certificate. This self-signed certificate also uses a domain name that is not a FQDN; think of something like people.campus as the domain name. The web application would take the user's credentials and pass them on to the AD domain controller to verify whether the credentials are a match or not. This seems simple, but I am having problems trying to get PHP and the self-signed certificate to work. Some people have suggested that I change the TLS_REQCERT variable from "request" to "never" within the OpenLDAP configuration. I am concerned that this might have larger implications, such as a man-in-the-middle attack, and I am not comfortable changing this setting to never. I have also read in some places online that one can add a certificate as a trusted source within the OpenLDAP configuration file. I am curious whether that is something I could do in my situation. Can I, from the command line, obtain the self-signed certificate that the AD domain controller is using, save it to a file, and then have OpenLDAP use that file for the trust that it needs, so that I do not need to adjust the variable from request to never? I do not have access to the AD domain controller and as a result cannot export the certificate. If there is a way to obtain the certificate from the command line, what commands do I need to use? Is there an alternate method of handling this issue that would be better in the long run? I have some CentOS servers and some Ubuntu servers that I am working with to try and get this going. Thanks in advance for your help and ideas.
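
    The server's certificate can be captured straight off the wire with OpenSSL and then trusted explicitly, which avoids dropping TLS_REQCERT to never. A sketch, assuming the domain controller answers LDAPS on port 636; dc1.people.campus is a placeholder hostname (substitute the real name or IP, and note the certificate's CN still has to match whatever name is used to connect):

        # Grab the certificate presented by the DC and save it in PEM form:
        echo | openssl s_client -connect dc1.people.campus:636 -showcerts 2>/dev/null \
            | openssl x509 > /etc/openldap/cacerts/dc1.pem

        # Point the OpenLDAP client at it, in /etc/openldap/ldap.conf:
        #   TLS_CACERT /etc/openldap/cacerts/dc1.pem
        # (TLS_REQCERT can then stay at its default of "demand")

        # Quick test:
        ldapsearch -H ldaps://dc1.people.campus -D "user@people.campus" -W \
            -b "dc=people,dc=campus" "(sAMAccountName=user)"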

    Read the article

  • How to change Content-Transfer-Encoding for email attachments?

    - by user1837811
    I have signed up for the http://eudyptula-challenge.org/ challenge, and it accepts email attachments only in plain-text format. Files without an extension (and also .zip files) are transferred with base64 encoding. I want to send email attachments as plain text. This is how my Makefile is transferred:

        Content-Type: text/plain; charset=UTF-8; name="Makefile"
        Content-Transfer-Encoding: base64
        Content-Disposition: attachment; filename="Makefile"

        b2JqLW0gKz0gaGVsbG8yLm8KCmFsbDoKCW1ha2UgLUMgL2xpYi9tb2R1bGVzLyQoc2hlbGwg
        dW5hbWUgLXIpL2J1aWxkIE09JChQV0QpIG1vZHVsZXMKCmNsZWFuOgoJbWFrZSAtQyAvbGli
        L21vZHVsZXMvJChzaGVsbCB1bmFtZSAtcikvYnVpbGQgTT0kKFBXRCkgY2xlYW4KCg==

    How do I configure Thunderbird to send attachment files as plain text? Or should I use another email client or tool to send email attachments as plain text?

    Read the article
