Search Results

Search found 23404 results on 937 pages for 'script compression'.


  • zip being too nice (osX)

    - by stib
    I use zip to do a regular backup of a local directory onto a remote machine. They don't believe in things like rsync here, so it's the best I can do (?). Here's the script I use:

        echo $(date)>>~/backuplog.txt;
        if [[ -e /Volumes/backup/ ]]; then
            cd /Volumes/Non-RAID_Storage/;
            for file in projects/*; do
                nice -n 10 zip -vru9 /Volumes/backup/nonRaidStorage.backup.zip "$file" 2>&1 | grep -v "zip info: local extra (21 bytes)">>~/backuplog.txt;
            done;
        else
            echo "backup volume not mounted">>~/backuplog.txt;
        fi

    This all works fine, except that zip never uses much CPU, so it seems to be taking longer than it should. It never seems to get above 5%. I tried making it nice -20 but that didn't make any difference. Is it just the network or disc speeds bottlenecking the process or am I doing something wrong?
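
    A quick way to confirm whether the low CPU is an I/O limit rather than a zip problem is to time the same job against the local disk and then against the network volume. This is only a sketch; the test folder name is illustrative:

        # run from /Volumes/Non-RAID_Storage, as in the script
        # compress a representative folder to the local disk, then to the share
        time zip -ru9 /tmp/local-test.zip "projects/some-big-project"
        time zip -ru9 /Volumes/backup/net-test.zip "projects/some-big-project"
        # if the second run is far slower at similarly low CPU, the network share
        # (or the remote disk) is the bottleneck and nice/renice won't change much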

    Read the article

  • What build tools do not depend on java (or Ruby)?

    - by Mohamed Meligy
    I'm wondering which generic build tools ship with their own binary runtimes and do not depend on another environment that isn't bundled with them. For example, Ant requires Java and Rake requires Ruby. It would also be great to hear about target-platform-agnostic tools, where I'd just supply whatever command builds the project, whatever command runs the tests, and so on, and could then define my artifacts in CI. I'd find something like that useful for building .NET projects (say, on both Windows .NET and Mono) and especially Node.js projects. I don't want to install Java and/or Ruby when all I want is a .NET or Node.js build. This is a general-awareness question rather than an exact problem I'm facing, which is why it's here and not on Stack Overflow. Update: To explain a bit more, what I'm after is a build script that would run MSBuild for compiling (in .NET; in Node it might be several Node/npm commands) and then handle the remaining build/test steps, instead of setting all of that up inside MSBuild (again, for the .NET case; I'm also wondering whether there is an equivalent story for Node).
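
    As a minimal illustration of the "no extra runtime" idea, the build tool can simply be a shell script that calls the toolchains the project already needs and nothing else. This is just a sketch, not a recommendation from the question: the solution file name, the npm scripts and the use of Mono's xbuild are assumptions.

        #!/bin/sh
        # build.sh: a dependency-free "build tool" -- only the shell plus the
        # toolchains the project needs anyway (Mono/.NET and Node.js)
        set -e
        xbuild MySolution.sln /p:Configuration=Release   # or msbuild.exe on Windows
        npm install                                      # fetch Node dependencies
        npm test                                         # run the test suite

    A CI server then only has to invoke ./build.sh, which is about as platform-agnostic as the underlying toolchains allow.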

    Read the article

  • SSMS Tools Pack 1.9.3 is out!

    - by Mladen Prajdic
    This release adds a great new feature and fixes a few bugs. The new feature, called Window Content History, saves the whole text of all opened SQL windows every N minutes, with the default being 30 minutes. This fixes the shortcoming of the Query Execution History, which is saved only when a query is run. If you're working on a large script and never execute it, the existing Query Execution History wouldn't save it. By contrast, the Window Content History saves everything to a .sql file, so you can even open it in SSMS. The Query Execution History and Window Content History files are correlated by the same directory and file name, so when you search through the Query Execution History you also get to see the whole saved Window Content History for that query. Because Window Content History saves data in simple, searchable .sql files, there isn't a special search editor built in. It is turned ON by default, but despite the built-in optimizations for minimizing space, be careful not to let it fill your disk. You can see how it looks in the pictures in the feature list. The fixed bugs are:

    - SSMS 2008 R2 slowness reported by a few people.
    - An Object Explorer context menu bug where multiple SSMS Tools entries were shown, and the wrong entries were shown for a node.
    - A datagrid bug in SQL snippets.
    - Ability to read illegal XML characters from log files.
    - Fixed the upper limit of saved history text to 5 MB.
    - A bug where searching through result sets was prevented.
    - A bug where text formatting errored out for certain scripts.
    - A bug in finding servers where null was returned even though servers existed.
    - Run Custom Scripts objects had a bug where |SchemaName| didn't display the correct table schema for columns. This is fixed. |NodeName| and |ObjectName| values now also show the same thing.

    You can download the new version 1.9.3 here. Enjoy it!

    Read the article

  • MSSoap 3.0 Error while creating Virtual Directory with SOAPVDIR.CMD

    - by BenjaminPaul
    I am trying to install a web service (written in FoxPro) onto a newly configured server. Part of the installation process was to install MSSoap 3.0, which seems to have been successful. The server OS is Microsoft Server 2008 R2 (x64). I am now trying to create a virtual directory at the command prompt using the SOAPVDIR.CMD script and I am getting the following error:

        CMD> SOAPVDIR.CMD CREATE CSLRosterService "C:\ROSTERWS"
        CMD> ERROR (0x80070002): Soap Toolkit 3 Isapi is not correctly registered.

    Does anyone know how I can correct this or what I am doing wrong?

    Read the article

  • Send an email whenever file is deleted from shared folder in windows 7

    - by azmuhak
    I am running a piece of software on several computers at my workplace, and the software can play various audio and video files stored in a shared folder on a central computer. The software runs on Windows 7, and every person in my company can add or remove files from the shared folder, but this privilege puts the data at risk. I was thinking of creating an email alert to myself whenever a file is deleted. I have written a Windows PowerShell script for sending myself emails through an SMTP server, but how can I hook it up to the event of a file or folder being deleted in a specific shared folder?

    Read the article

  • Symfony2 on Windows with Apache, PHP and MySQL - app_dev.php will not load

    - by Lewis Bassett
    I am trying to get a Symfony2 standard distribution to work on my Windows 7 laptop. I have installed Apache2 (version 2.2.22), PHP 5.3.10 and MySQL 5.5.22. I have a demo PHP script (php_info() and a database call), and it works fine. I can get the start page (http://localhost/Symfony/web/config.php) to display, but I cannot get http://localhost/Symfony/web/app_dev.php/ to execute. The error returned is Error 101 (net::ERR_CONNECTION_RESET): The connection was reset. I can get it to work if I install XAMPP instead, but I don't want to use XAMPP. I want to be able to install and configure the components separately. Why isn't this working? Are there some Apache settings that I am missing?

    Read the article

  • Robocopy Mirror Backup gone awry

    - by Aznfin
    I have created a simple batch file script for running Robocopy. It is set to make a backup of my user account folder to my external hard drive. Here are the parameters for Robocopy:

        ROBOCOPY "C:\Users\Finnly" "F:\Backups\Finnly (Backup)" /ZB /COPY:DAT /DCOPY:T /MIR /256 /MT:32 /XF *.log *.log* *.dat *.tmp *.temp *.old "ntuser*" "SyncToy*" "UpgKit.txt" ".recently-used.xbel" /XD ".gimp-2.6" ".thumbnails" ".VirtualBox" "AppData" "Application Data" "Adobe" "Camtasia Studio" "Cookies" "CyberLink" "DivX Movies" "DVD Architect Pro 5.0 Projects" "dwhelper" "GTA San Andreas User Files" "Lightroom" "Local Settings" "NetHood" "PrintHood" "Scripts" "temp" "Templates" "The KMPlayer" "Tracing" /R:3 /W:10 /V /TS /FP /ETA /LOG+:F:\Backups\Sync.log /TEE

    For some reason, when I run it, it backs up the files and then seems to back them up again. The size of my user account directory is 18.3 GB, but the backup of it occupies over 30 GB. After reading the log it generates, it is obvious that it's copying files more than once. Why is this happening? I'm running Windows 7 Home Premium 64-bit.

    Read the article

  • ubuntu: sending mail with postfix?

    - by ajsie
    I've got some questions about how this works. Ubuntu Server comes with Postfix installed. If I want my PHP script to send a mail to, let's say, [email protected], how does it work? Do I have to specify the IP of another MTA (my ISP's MTA?) in Postfix's configuration file? And if someone sends a reply, will it get to my IP? Is it Postfix that receives it, or does that have to do with Fetchmail?
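
    For the outbound case, the usual knob is Postfix's relayhost parameter: leave it empty and Postfix delivers straight to the recipient domain's MX hosts, or point it at the ISP's smarthost if direct port 25 delivery is blocked. A hedged sketch follows; the hostname is a placeholder, not something from the question:

        # route all outgoing mail through the ISP's relay (hypothetical host)
        sudo postconf -e 'relayhost = [smtp.isp.example]:587'
        sudo /etc/init.d/postfix restart
        # with relayhost left unset, Postfix looks up the recipient domain's MX
        # records and delivers directly

    Inbound mail is a separate question: the domain's MX record has to point at your server and Postfix has to accept mail for that domain; Fetchmail only comes into play when the mail lands in a mailbox somewhere else and you want to pull it down.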

    Read the article

  • How to symlink folders and exclude certain files

    - by Jarrod White
    Hey guys, I'm not a server guru (unfortunately) but I have a decent knowledge of Linux and BSD. I'm trying to symlink multiple instances of HLDS (a game server) but need to exclude certain folders and config files to do this properly. I need to do it this way because HLDS loads many mods automatically, and adding an exception to disable the mods doesn't work for all of them. So basically I want:

        /home/user/hlds-install (the base install)
        /home/user/server1
        /home/user/server2
        etc...

    and then be able to manually put any configs/mods I've excluded into the server directories, so that each server can be configured individually. Can anyone tell me how to do this, perhaps with some sort of bash script (see the sketch below), so that I can just change the targets and run it each time I want to create a new one? I have quite a number to make, so doing the whole thing manually for each one definitely isn't an option, and I'm all for working smarter, not harder! Thanks :)
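
    A minimal, untested sketch of that kind of script: it mirrors the directory tree of the base install into a new server directory and symlinks every file back to the base, skipping anything that matches an exclude pattern. The exclude pattern below is a placeholder, not a known-good HLDS list.

        #!/bin/bash
        BASE=/home/user/hlds-install
        DEST=/home/user/server1
        EXCLUDE_RE='(^\./cfg|addons/somemod)'   # hypothetical exclusions

        mkdir -p "$DEST"
        cd "$BASE" || exit 1
        # recreate the directory tree as real, per-server directories
        find . -mindepth 1 -type d | grep -Ev "$EXCLUDE_RE" | while read -r d; do
            mkdir -p "$DEST/$d"
        done
        # symlink the files so every server shares the base install's data
        find . -type f | grep -Ev "$EXCLUDE_RE" | while read -r f; do
            ln -s "$BASE/$f" "$DEST/$f"
        done
        # excluded configs/mods can then be copied into $DEST by hand per server

    Running it again with DEST=/home/user/server2 (and so on) creates the next instance.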

    Read the article

  • How do I get my Lexmark x4650 printer working?

    - by Fallen Dohingy
    I think that my printer stopped working with the switch to GNOME 3 or Unity. Yes, I have tried both 32-bit and 64-bit OSes. Here is the driver. In order to actually install the driver, you need to extract it, open up a terminal, type sudo followed by a space, and then drag the script into the terminal window. Here is what it said in the driver install window:

        Extracting file: printdriver.te
        Extracting file: lexmark-08z-series-driver-1.0-1.i386.deb
        Extracting file: launcher.c
        Extracting file: launcher

    and in the terminal:

        fallendohingy@Ubuntu-Inspiron-15R:~$ sudo '/home/fallendohingy/Downloads/lexmark-08z-series-driver-1.0-1.i386.deb.sh'
        [sudo] password for fallendohingy:
        Verifying archive integrity... All good.
        Uncompressing nixstaller..............................................................
        Collecting info for this system...
        Operating system: linux
        CPU Arch: x86_64
        Warning: No installer for "x86_64" found, defaulting to x86...
        TRACKING IDENT = 170209
        cpu speed = 2394 MHz
        ram size = 3762.69921875 MB
        hd avail = 74348 MB
        (gtk:17645): GdkPixbuf-WARNING **: Cannot open pixbuf loader module file '/usr/lib/i386-linux-gnu/gdk-pixbuf-2.0/2.10.0/loaders.cache': No such file or directory
        (gtk:17645): GdkPixbuf-WARNING **: Cannot open pixbuf loader module file '/usr/lib/i386-linux-gnu/gdk-pixbuf-2.0/2.10.0/loaders.cache': No such file or directory
        (gtk:17645): GdkPixbuf-WARNING **: Cannot open pixbuf loader module file '/usr/lib/i386-linux-gnu/gdk-pixbuf-2.0/2.10.0/loaders.cache': No such file or directory
        (gtk:17645): GdkPixbuf-WARNING **: Cannot open pixbuf loader module file '/usr/lib/i386-linux-gnu/gdk-pixbuf-2.0/2.10.0/loaders.cache': No such file or directory
        /usr/lib/gio/modules/libgvfsdbus.so: wrong ELF class: ELFCLASS64
        Failed to load module: /usr/lib/gio/modules/libgvfsdbus.so
        Extracting file: lsbrowser
        Extracting file: lsusbdevice
        Using dpkg installation
        =============================
        Execute: dpkg -i --force-architecture lexmark-08z-series-driver-1.0-1.i386.deb > /tmp/selfgz17540/pkg/files/dpkg_msgs
        =============================
        =============================
        Execute: rm lexmark-08z-series-driver-1.0-1.i386.deb
        =============================
        =============================
        Execute: /sbin/udevadm control --reload-rules
        =============================
        Successfully installed the .deb Lexmark drivers.

    Read the article

  • What is the structure of NetworkManager's system-connections files?

    - by Oyks Livede
    Could anyone list the complete structure of the configuration files which NetworkManager stores for known networks in /etc/NetworkManager/system-connections? Sample (filename askUbuntu):

        [connection]
        id=askUbuntu
        uuid=81255b2e-bdf1-4bdb-b6f5-b94ef16550cd
        type=802-11-wireless

        [802-11-wireless]
        ssid=askUbuntu
        mode=infrastructure
        mac-address=00:08:CA:E6:76:D8

        [ipv6]
        method=auto

        [ipv4]
        method=auto

    I would like to create some of these files myself using a script. However, before doing so I would like to know every possible option. Furthermore, this structure seems to somewhat resemble the information you can get over D-Bus for active connections:

        dbus-send --system --print-reply \
            --dest=org.freedesktop.NetworkManager \
            "$active_setting_path" \
            org.freedesktop.NetworkManager.Settings.Connection.GetSettings
        # $active_setting_path is e.g. /org/freedesktop/NetworkManager/Settings/2

    will tell you:

        array [
            dict entry(
                string "802-11-wireless"
                array [
                    dict entry( string "ssid" variant array of bytes "askUbuntu" )
                    dict entry( string "mode" variant string "infrastructure" )
                    dict entry( string "mac-address" variant array of bytes [ 00 08 ca e6 76 d8 ] )
                    dict entry( string "seen-bssids" variant array [ string "02:1A:11:F8:C5:64" string "02:1A:11:FD:1F:EA" ] )
                ]
            )
            dict entry(
                string "connection"
                array [
                    dict entry( string "id" variant string "askUbuntu" )
                    dict entry( string "uuid" variant string "81255b2e-bdf1-4bdb-b6f5-b94ef16550cd" )
                    dict entry( string "timestamp" variant uint64 1383146668 )
                    dict entry( string "type" variant string "802-11-wireless" )
                ]
            )
            dict entry(
                string "ipv4"
                array [
                    dict entry( string "addresses" variant array [ ] )
                    dict entry( string "dns" variant array [ ] )
                    dict entry( string "method" variant string "auto" )
                    dict entry( string "routes" variant array [ ] )
                ]
            )
            dict entry(
                string "ipv6"
                array [
                    dict entry( string "addresses" variant array [ ] )
                    dict entry( string "dns" variant array [ ] )
                    dict entry( string "method" variant string "auto" )
                    dict entry( string "routes" variant array [ ] )
                ]
            )
        ]

    I can create new setting files over D-Bus (AddSettings() in /org/freedesktop/NetworkManager/Settings), passing this type of input, so explaining this structure and telling me all possible options would also help. AFAIK, this is a Dictionary{String, Dictionary{String, Variant}}. Will there be any difference between creating the config files directly and going through D-Bus?

    Read the article

  • Creating Multiple Users on Single PHP-FPM Pool

    - by Vince Kronlein
    I have PHP-FPM/FastCGI up and running on my cPanel/WHM server, but I'd like to have it allow multiple users off of a single pool. Getting all vhosts to run off a single pool is simple by adding this to the Apache include editor under Global Post Vhost:

        <IfModule mod_fastcgi.c>
            FastCGIExternalServer /usr/local/sbin/php-fpm -host 127.0.0.1:9000
            AddHandler php-fastcgi .php
            Action php-fastcgi /usr/local/sbin/php-fpm.fcgi
            ScriptAlias /usr/local/spin/php-fpm.fcgi /usr/local/sbin/php-fpm
            <Directory /usr/local/sbin>
                Options ExecCGI FollowSymLinks
                SetHandler fastcgi-script
                Order allow,deny
                Allow from all
            </Directory>
        </IfModule>

    But I'd like to find a way to have PHP run under the user while sharing the pool. I manage and control all the domains that run under the pool, so I'm not concerned about security of files per account; I just need to make sure all scripting can be executed by the user who owns the files, instead of needing to change file permissions for each account or having to create tons of vhost include files.

    Read the article

  • Duplicity Errno 2 - no such file or directory

    - by Luma
    Hello, I am trying to set up a script for backing up a Linux box to a CIFS share. I manually mounted the CIFS share and created a few test folders - OK. I then ran duplicity manually, with a rather simple command to begin with to make sure things work, and well, not OK on this one :)

        duplicity /root file:///cifsmountfolder/existingfolder/

    results:

        No signatures found, switching to full backup.
        Traceback (most recent call last):
          File "/usr/bin/duplicity", line 463, in <module>
            with_tempdir(main)
          File "/usr/bin/duplicity", line 458, in with_tempdir
            fn()
          File "/usr/bin/duplicity", line 449, in main
            full_backup(col_stats)
          File "/usr/bin/duplicity", line 155, in full_backup
            bytes_written = write_multivol("full", tarblock_iter, globals.backend)
          File "/usr/bin/duplicity", line 99, in write_multivol
            backend.put(tdp, dest_filename)
          File "/usr/lib/python2.5/site-packages/duplicity/backends.py", line 279, in put
            target_path.writefileobj(source_path.open("rb"))
          File "/usr/lib/python2.5/site-packages/duplicity/path.py", line 500, in writefileobj
            fout = self.open("wb")
          File "/usr/lib/python2.5/site-packages/duplicity/path.py", line 448, in open
            else: result = open(self.name, mode)
        IOError: [Errno 2] No such file or directory: '/cifsmountfolder/existingfolder/duplicity-full.2010-09-18T18:41:43-07:00.vol1.difftar.gpg'

    Any ideas? Thank you. Luc
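
    The Errno 2 on the open("wb") call means the target directory wasn't there at the moment duplicity tried to write its first volume, which can happen if the CIFS mount drops out from under the script. A small, hedged pre-flight check using only the paths from the question:

        # pre-flight check before running duplicity (sketch)
        mountpoint -q /cifsmountfolder || { echo "CIFS share is not mounted" >&2; exit 1; }
        ls -ld /cifsmountfolder/existingfolder || exit 1
        duplicity /root file:///cifsmountfolder/existingfolder/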

    Read the article

  • SPF problems with Google Apps

    - by mahle
    I currently have an SPF record, with a hostname of @, that is:

        v=spf1 mx ip4:x.x.x.243/32 include:_spf.google.com include:amazonses.com ~all

    I also have another record of:

        spf2.0/pra mx ip4:x.x.x.243/32 include:_spf.google.com include:amazonses.com ~all

    We have had a lot of email being bounced back as spam, and now when I go to http://www.kitterman.com/spf/validate.html? and check "Does my domain already have an SPF record? What is it? Is it valid?", it says no SPF record exists. However, when I send an email using our Amazon SES script and check the headers, it says it passes the SPF test. Is there something I am missing? Do I need to place that text in quotes ""? Any help would be greatly appreciated.
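
    One quick way to see what external validators actually see is to query the DNS records directly. This is only a sketch, and example.com is a placeholder since the real domain isn't given in the question:

        # what the outside world sees for the bare domain
        dig +short TXT example.com
        # the v=spf1 policy must come back as a single quoted TXT string, e.g.
        # "v=spf1 mx ip4:x.x.x.243/32 include:_spf.google.com include:amazonses.com ~all"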

    Read the article

  • Granting rights to the sa account using osql

    - by Jan Jongboom
    I'm installing SQL instances through a script, and after creating a certain instance, I cannot get the sa account to be enabled through osql. What I've tried:

    1. Via osql:

            osql -S .\INSTANCENAME -E
            use master
            ALTER LOGIN sa ENABLE
            GO

    2. Using SSMS to enable the account (by logging in using Windows Auth., 'New query', and exactly the same query as in 1.)
    3. Suggestions in this issue

    No. 2 is actually working, and the account is enabled instantly. No. 1 is not working, not even with the suggestions provided in 3; I have restarted the SQL services after executing the commands in osql.

    Additional info: Windows 2003 Server, Microsoft SQL Server 2005 Enterprise, no password policies apply to the account.

    Read the article

  • Adding a forum to an existing site

    - by Andrew Heath
    I've got a site with ~500 registered members, 300 of which are what you'd call "active". Site data is kept in a MySQL database. I'd like to add a myBB forum to the site, but this question applies to any forum really. What I very much want to avoid is requiring my users to register both on the site and on the forum, because my userbase is not technically literate and this would confuse a lot of them. However, the forum software has its own registration, login, cookie, and password management system, which naturally is different from the site's mechanics. I envision the following possibilities:

    1. Install myBB into the existing database and customize the login code to unify the two systems. This would probably mean changing the site's code to use the myBB system, as that would likely be less painful to refactor and wouldn't hurt future myBB upgradability.
    2. Install myBB into a separate database and write a bridging script of some sort that auto-registers existing site users with the forum if they elect to participate. Also check new forum registrations against the site's username list to prevent newcomers from taking existing names.
    3. Run them fully separately and force users to re-register (easiest for ME, but least desirable for them).

    I would like a suggested course of action from those who have trod this path before... Thank you.

    Read the article

  • (simple) linux HA with vmware vsphere?

    - by derhelge
    I hope my upcoming question is specific enough, and that you are able and willing to help :-) We have several openSUSE VMs in an ESX cluster (three ESX servers) with an attached iSCSI SAN. All of those Linux VMs are configured as single points of failure, which means that in the case of a web server, everything (LAMP, storage, etc.) lives on that one machine. This was very simple, and in the case of a failure (in the last years: kernel panics or Apache crashes) a simple reboot triggered by a script did it. But the problem is: how do I upgrade/maintain the (web) application or the underlying OS without downtime? This wasn't really manageable, and I did it in the early morning ;) How can I achieve a "simple" high-availability cluster now? I thought of: DRBD with Heartbeat across 2 VMs, and for the storage an RDM (raw device mapped) LUN with the read-write permissions changed for both VMs. Is this a good idea? Does anyone have a better solution?

    Read the article

  • Tool to monitor file size, file existence, parse xml, etc

    - by Artur Carvalho
    I'm trying to find a tool that helps me monitor several things. Some requirements:

    - Shows results on a web page.
    - Checks existence of files/folders.
    - Checks sizes of files/folders.
    - Can parse XML files.
    - Can have several statuses depending on, for instance, whether it's after 9pm.
    - Pings workstations/servers to ensure they are on or off.
    - Creates daily/weekly/monthly reports (PDF, HTML, CSV).
    - Shows daily/weekly/monthly scheduled tasks.
    - Checks if specific users are logged in on a machine.
    - Checks which users are logged in on a machine.

    I've looked into some solutions but could not find what I wanted. Tools like Nagios are usually more focused on servers, and Spiceworks is not so specific. At this point I'm using a little PowerShell script that does several of these items, but before losing more time, probably reinventing the wheel, what tools are out there? Thank you in advance.

    Read the article

  • Security question pertaining web application deployment

    - by orokusaki
    I am about to deploy a web application (in a couple of months) with the following setup (perhaps, anyway): Ubuntu Lucid Lynx with:

    - iptables firewall (whitelist style, with only 3 ports open)
    - Custom SSH port (like 31847 or something)
    - No "root" SSH access
    - Long, random username (not just "admin" or something) with a long password (65 chars)
    - PostgreSQL which only listens on localhost
    - 256-bit SSL cert
    - Reverse proxy from NGINX to my application server (UWSGI)
    - Assume that my colo is secure (physical access isn't my concern for the time being)
    - Application-level security (SQL injection, XSS, directory traversal, CSRF, etc.)
    - Perhaps IP masquerading (but I don't really understand this yet)

    Does this sound like a secure setup? I hear about people's web apps getting hacked all the time, and part of me thinks, "maybe they're just neglecting something", but the other part thinks, "maybe there's nothing you can do to protect your server, and those things are just measures to make it a little harder for script kiddies to get in". If I told you all of this, gave you my IP address, and told you what ports were available, would it be possible for you to get in (assuming you have a penetration testing tool), or is this really well protected?

    Read the article

  • Fix a tomcat6 error message "/bin/bash already running" when starting tomcat?

    - by Andrew Austin
    I have an Ubuntu 10.04 machine that has tomcat6 on it. When I start tomcat6 with /etc/init.d/tomcat6 start I get

        * Starting Tomcat servlet engine tomcat6
        /bin/bash already running.

    and the server fails to start. Unfortunately, there is nothing in /var/log/tomcat/catalina.out to help debug the issue. With some cleverly placed echo statements, it seems to be this line from /etc/init.d/tomcat6:

        start-stop-daemon --start -u "$TOMCAT6_USER" -g "$TOMCAT6_GROUP" \
            -c "$TOMCAT6_USER" -d "$CATALINA_TMPDIR" \
            -x /bin/bash -- -c "$AUTHBIND_COMMAND $TOMCAT_SH"

    The only thing I've changed in this script is TOMCAT6_USER=root. In server.xml, the only thing I've changed is

        <Connector port="80" protocol="HTTP/1.1"

    from port 8080. I have tried reinstalling the package by first removing everything with sudo apt-get --purge remove tomcat6 and then sudo apt-get install tomcat6, but this has not solved the issue. I have also restarted the server multiple times in hopes of some magic. Everything was working until I restarted my server. Any ideas?
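
    The "already running" text is start-stop-daemon's own pre-start check rather than anything from Tomcat: it is asked to start an executable literally named /bin/bash, matched against the user from TOMCAT6_USER, so with TOMCAT6_USER=root any root-owned bash that already exists (your own sudo shell, for example) can satisfy the check. A hedged way to reproduce just that check in isolation; the echo payload is a placeholder:

        # reproduce the pre-start check the init script performs (sketch)
        sudo start-stop-daemon --start --test --user root --exec /bin/bash -- -c "echo would-start"
        # if this reports that /bin/bash is already running, it is this matching
        # rule (root + /bin/bash) that refuses to start Tomcat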

    Read the article

  • Permission denied message when starting gfs2

    - by sashang
    Can anyone please explain why I get this permission denied error? I try starting the script and it fails with a permission denied message. So I create a copy of it and run that instead, and that works.

        [root@node2 ~]# /etc/init.d/gfs2 stop
        Unmounting GFS2 filesystem (/drbd):                        [  OK  ]
        [root@node2 ~]# /etc/init.d/gfs2 start
        Mounting GFS2 filesystem (/drbd): gfs_controld join connect error: Permission denied
        error mounting lockproto lock_dlm                          [FAILED]
        [root@node2 ~]# cp /etc/init.d/gfs2 /etc/init.d/gfs2_test
        [root@node2 ~]# /etc/init.d/gfs2_test start
        Mounting GFS2 filesystem (/drbd):                          [  OK  ]
        [root@node2 ~]#
        [root@node2 ~]# ls -l /etc/init.d/gfs2*
        -rwxr-xr-x. 1 root root 3365 Jan 15 12:11 /etc/init.d/gfs2
        -rwxr-xr-x. 1 root root 3365 Jan 15 12:19 /etc/init.d/gfs2_test
        [root@node2 ~]#
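
    The trailing dot in -rwxr-xr-x. means SELinux labels are in play, and a fresh copy gets the directory's default label while the original keeps whatever label it acquired earlier, which would explain why the copy behaves differently. A hedged sketch for comparing the two and looking for denials (ausearch assumes auditd is running):

        # compare SELinux labels on the two scripts and check for recent AVC denials
        ls -Z /etc/init.d/gfs2 /etc/init.d/gfs2_test
        ausearch -m avc -ts recent
        # if the labels differ, this resets the original to the policy default:
        restorecon -v /etc/init.d/gfs2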

    Read the article

  • Missing /dev/xconsole causes rsyslog to stop as well as all other services

    - by George Van Tuyl
    We are running Ubuntu 10.04.4 LTS in Hyper-V environments. We found that services (ssh, http and everything else) stopped because the rsyslog daemon had died with a message that it was unable to find the /dev/xconsole file. I fixed it temporarily with the following:

        FILE=/dev/xconsole
        if [ -e $FILE ]; then
            echo "$FILE exists Carry on!"
        else
            mknod -m 640 /dev/xconsole c 1 3
            chown syslog:adm /dev/xconsole
            echo "Created $FILE."
        fi

    The problem is that I cannot get the rsyslog daemon to process these 8 lines when I restart it. Also, restarting the daemon removes the /dev/xconsole file and we are back to all services stopped. To address this problem I have inserted the if/fi block after the start and restart conditions in the rsyslog init script. The problem is I do not get an echo to stdio. Does someone have an idea how to make rsyslog report to stdio when it creates the /dev/xconsole device? Thanks, George Van Tuyl
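
    Init scripts usually have no visible stdout at the point where this runs, so one hedged alternative is to log the event via syslog or the console instead of echo. This sketch reuses the exact commands from the question and only swaps the reporting:

        # inside the start/restart cases of the rsyslog init script (sketch)
        if [ ! -e /dev/xconsole ]; then
            mknod -m 640 /dev/xconsole c 1 3
            chown syslog:adm /dev/xconsole
            logger -t rsyslog-init "created /dev/xconsole" || true        # via syslog
            echo "created /dev/xconsole" > /dev/console 2>/dev/null || true  # or the console
        fi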

    Read the article

  • Backup and restore Subversion user permissions

    - by Earth Engine
    We use svnsync to create fully functional backup servers, and we have a script to do so. However, if we want to create a new backup server, we have to copy the htpasswd and groups.conf files across (that is not hard) and, after running svnsync, manually assign the users/groups to repositories. Also, if we change the assignments on the main server, there is no easy way to apply that change to all backup servers. Since we have 50+ projects and 30+ users, this is a boring and error-prone exercise. Are there any tools that can help us back up and restore those assignments automatically? We are using VisualSVN under Windows, so solutions in Windows scripts rather than shell scripts are preferred.

    Read the article

  • Sending SPAM free mail through my website

    - by Sara
    Hi, I've been battling with this issue for a couple of months. I need to send bulk mail (not spam) through my social network to users in situations like newsletters and site invitations (when a user imports their address book contacts). I'm using shared hosting and it limits me to 500 mails per hour. Even when I manage to send mails, most of them end up in users' spam boxes. After researching, these are the solutions that I finally came up with:

    1. Use Google Apps SMTP (http://www.google.com/apps/intl/en/business/features.html)
    2. Move to a VPS
    3. Use shared hosting with throttling enabled

    Please advise me on what to choose. Will using Google Apps prevent mail from being sent as spam? I can't use other third-party SMTP services like iContact or Aweber, because the "invitation sending script" will send emails to thousands of contacts, depending on the user's address book. Thanks in advance

    Read the article

  • How to find location of installed library

    - by Raven
    Background: I'm trying to build my program, but first I need to set up libraries in NetBeans. My project is using GLU, and therefore I installed libglu-dev. I didn't note the location where the libraries were installed, and now I can't find them. I switched to Linux just a few days ago and so far I'm very content with it, but I couldn't google this one and I'm becoming frustrated. Is there a way to find out where the files of a package were installed, without running the installation again? I mean, if I have library xxx and installed it some time ago, is there some command xxx that will print this info? I've already tried the locate, find and whereis commands, but either I'm missing something or I just can't do it correctly. For libglu, locate returns:

        /usr/share/bug/libglu1-mesa
        /usr/share/bug/libglu1-mesa/control
        /usr/share/bug/libglu1-mesa/script
        /usr/share/doc/libglu1-mesa
        /usr/share/doc/libglu1-mesa/changelog.Debian.gz
        /usr/share/doc/libglu1-mesa/copyright
        /usr/share/lintian/overrides/libglu1-mesa
        /var/lib/dpkg/info/libglu1-mesa:i386.list
        /var/lib/dpkg/info/libglu1-mesa:i386.md5sums
        /var/lib/dpkg/info/libglu1-mesa:i386.postinst
        /var/lib/dpkg/info/libglu1-mesa:i386.postrm
        /var/lib/dpkg/info/libglu1-mesa:i386.shlibs

    The other two commands fail to find anything. Now, locate did its job, but I'm sure none of those paths is where the library actually resides (at least, everything I have been linking so far was in /usr/lib or /usr/local/lib). libglu was introduced just as an example; I'm looking for a general solution to this problem.
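
    On Debian/Ubuntu systems, dpkg itself can answer this kind of question. A short sketch; the package names are illustrative (taken from the locate output above rather than from a verified install):

        # list every file an installed package put on the system
        dpkg -L libglu1-mesa
        dpkg -L libglu1-mesa-dev        # the -dev package usually carries headers and the .so symlink
        # go the other way: find which package owns a given file
        dpkg -S /usr/share/doc/libglu1-mesa/copyright
        # list installed packages matching a pattern when the exact name is unknown
        dpkg -l 'libglu*'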

    Read the article
