Search Results

Search found 14142 results on 566 pages for 'mysql workbench'.


  • Renting a Linux server just to make backups of my personal data?

    - by Matthieu
    Hi all, I would like to be able to back up ALL my computers' data on a Linux server. For now I have a home server, but soon I will be travelling, without a home (so no home server). I was thinking of renting a dedicated Linux webserver, but this is expensive, and I don't need a fast "web-oriented" machine with a MySQL server and all; I just need full SSH access (full control, so I can install my own programs). Do "backup servers" exist? Am I doing it wrong (maybe this is not a good solution)? Note: I run Mac OS, Windows and Linux, I back up through rsync, and I want full control over my backup, not an automated "magic" backup like MobileMe or anything like that. Edit: I need around 500 GB of storage.
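
    A minimal sketch of the kind of rsync-over-SSH push this setup would allow, assuming a rented box reachable as backup.example.com and a remote user named matthieu (both names are placeholders):

        # push a local folder to the rented server, keeping deletions in sync
        rsync -avz --delete -e ssh ~/Documents/ matthieu@backup.example.com:backups/documents/

    Any machine with plain SSH access can be the target of such a push, so a cheap "storage" or "backup" VPS with 500 GB of disk is enough; no web stack is required.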

    Read the article

  • Migrate reports from MS Access to OOo Base

    - by John Gardeniers
    I'm currently looking at upgrading our office machines from Office XP to Office 2010. For most users the standard edition is fine, but just a few of us use Access. There are only a couple of standalone Access databases, but the program is used fairly extensively (mostly by myself) as a front end to MySQL. As the cost difference between the standard and pro versions of Office 2010 is about $170 (AUD), I'm looking at possible alternatives to Access. I'm no huge fan of OpenOffice but could be convinced to use it if I can find a way to migrate the many reports we currently have in Access. The data is not a problem. So far I've found nothing to suggest this is even possible/practical, but perhaps someone here knows otherwise. I'm also open to suggestions for other alternatives to Access, but it must be able to produce flexible reports easily. That is the one real strength of Access in my view. Because of its subjective nature I'm making this community wiki.

    Read the article

  • Back up a Linux webserver to Windows

    - by shaiss
    I have our websites hosted on a third-party webserver. I have all the admin access needed. I have a local Win2K3 machine that uses Retrospect to back up all the networked machines and servers, and Navicat to back up the MySQL DBs locally and on the remote Linux webserver. So the only part that remains is incremental backups of the files on the webserver. Anyone have any suggestions on how to do this? rsync with DeltaCopy? Any others?
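
    DeltaCopy is essentially rsync packaged for Windows, so an incremental pull run from the Win2K3 box would look roughly like this (hostname, user and paths are placeholders; the cygdrive-style path is how the bundled rsync sees local drives):

        rsync -avz --delete -e ssh webuser@webserver.example.com:/var/www/ /cygdrive/d/backups/www/

    Scheduled through the Windows Task Scheduler, only changed files are transferred on each run.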

    Read the article

  • Installing cURL after compiling PHP 5.4 from source

    - by user140657
    I recently compiled PHP 5.4 from source on CentOS 6. I used this configuration: # ./configure --with-apxs2=/usr/local/apache2/bin/apxs --with-mysql # make # make install # cp php.ini-dist /usr/local/lib/php.ini I realize now that I do not have cURL installed, and I don't know how to install cURL after a compiled installation of PHP. Using yum install php-curl installs cURL for PHP 5.3. I tried this already with an Apache restart and it did not show up in my phpinfo file. How do I install cURL under these circumstances?
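
    Since PHP was built from source, the usual fix is to rebuild it with cURL support enabled rather than installing a distribution package. A sketch, assuming the libcurl headers come from the CentOS libcurl-devel package and reusing the original configure flags:

        # yum install libcurl-devel
        # ./configure --with-apxs2=/usr/local/apache2/bin/apxs --with-mysql --with-curl
        # make
        # make install
        # /usr/local/apache2/bin/apachectl restart

    After the restart, the curl section should appear in phpinfo().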

    Read the article

  • How to choose size for a cloud server (rackspace)

    - by Emil
    We're going to test the Rackspace cloud next week to see how it works with our web app. It's a LAMP environment with a lot of MySQL databases. How do I choose the "right" server size? On Rackspace I can choose slices with 256, 512, 1024, 2048, 4096 MB of memory, etc. Right now we don't have a lot of traffic (approx. 1000 visitors/day), but I thought the whole "cloud" idea was to not be limited and to auto-scale. Update: What I'm looking for is not a specification of what I need. I know it's too complex. I'm looking for examples, case studies, etc. It would be interesting to hear something like "Yes, we're serving 10 000 daily requests without spikes on a LAMP stack with only one slice with 2 GB RAM".

    Read the article

  • Setting up/installing/configuring an nginx LEMP stack on a fresh VPS

    - by Grant Tailor
    I need some help setting up, installing and configuring an nginx LEMP stack on a fresh new VPS I have. The specs of the CentOS 5.7 VPS are 2GB DDR3 ECC RAM (4GB burst), 1 core at 1.5GHz (3GHz burst) and 100GB RAID 10 storage, with unmetered bandwidth @ 100Mbps, all for a whopping $25/month (unbeatable, yeah I know :). Anyway, I have followed this LEMP (will also need MySQL and PHP) stack guide on Linode http://library.linode.com/lemp-guides/centos-5 but basically what I want is to be able to host multiple websites on this webserver after everything is set up. I am used to the DirectAdmin control panel on another server and want to have things set up so I can host multiple websites - mostly WordPress and Drupal themes. Let's say 10 websites on this nginx web server. So can someone please help me with what I need to do to take "full" advantage of nginx's power and performance, while being able to easily manage these multiple websites (WordPress and Drupal themes)?
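
    For hosting several sites, nginx just needs one server block per site (the equivalent of Apache virtual hosts). A rough sketch of a single block, with the domain, document root and PHP-FPM address as placeholders, suitable for a WordPress- or Drupal-style front controller:

        server {
            listen       80;
            server_name  site1.example.com;
            root         /var/www/site1;
            index        index.php index.html;

            location / {
                try_files $uri $uri/ /index.php?$args;
            }

            location ~ \.php$ {
                include       fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass  127.0.0.1:9000;
            }
        }

    Keeping one such file per site in an included directory and reloading nginx after changes gives a lightweight substitute for a control panel.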

    Read the article

  • Sphinx 2.0.8 with PostgreSQL 9.2.4

    - by Calvin
    I want to install Sphinx 2.0.8 from source on CentOS 5.6 with PostgreSQL 9.2.4. My server is: Linux localhost.localdomain 2.6.18-348.6.1.el5 #1 SMP Tue May 21 15:29:55 EDT 2013 x86_64 x86_64 x86_64 GNU/Linux. First, I compile with: ./configure --prefix=/usr/local/sphinx --with-pgsql --without-mysql --with-pgsql-libs=/var/lib/pgsql/9.2/data/ --with-pgsql-includes=/usr/pgsql-9.2/include/ That seems to work, but after I run make this error appears: /usr/bin/ld: cannot find -lpq I have installed postgresql92-devel and libs, also libpqxx, and have worked on this error all day long but have not solved it yet. Thanks for helping me.
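
    One likely cause, offered as a guess: --with-pgsql-libs points at the data directory rather than the directory containing libpq. For the PGDG 9.2 packages the client library normally lives in /usr/pgsql-9.2/lib, so a configure line along these lines may get past the linker error:

        ./configure --prefix=/usr/local/sphinx --with-pgsql --without-mysql \
            --with-pgsql-libs=/usr/pgsql-9.2/lib --with-pgsql-includes=/usr/pgsql-9.2/include
        make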

    Read the article

  • How can I setup my local Nginx server so I can edit the files?

    - by Shane Grant
    I have my local development machine running Arch Linux, Nginx, PHP-FPM and MySQL. In order for the websites I am working on to run, the files need to be owned by the http user. The files are currently located in folders like this: /srv/http/site1/ /srv/http/site2/ When I use the following chown command on the http folder the sites work fine, but I cannot edit the files with my user: chown -R http.users /srv/http When I do this the sites do not work, but I can edit the files: chown -R shane.http /srv/http How can I make it so that my user can edit the files and the web server can run them at the same time? Thank you
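
    One common arrangement, sketched here on the assumption that the login user is shane, PHP-FPM runs as http, and the sites only need read access: keep the files owned by the user with http as the group, give the group read (and traverse) rights, and set the setgid bit on directories so new files inherit the group:

        chown -R shane:http /srv/http
        chmod -R g+rX /srv/http
        find /srv/http -type d -exec chmod g+s {} \;

    Directories that the web server must write to (uploads, caches) would additionally need g+w.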

    Read the article

  • How to make an ISO copy of the Linux filesystem and user files of a Debian-based VPS?

    - by moogeek
    Hello! I have a Debian-based VPS with some hosting provider. I want to migrate away from it, and I need to make a full copy of the whole Linux filesystem (and installed packages) plus the home directory with the website files, and then pack/convert it into an ISO image so I can use it on cloud hostings like Amazon. The problem is that I have only SSH root access; hosting support can't do that for me. Another part of the question: is it possible to enlarge the Linux filesystem without re-installing it, using the free space of the home directory? Is that possible to do? I guess it is possible with rsync or something like that. Will my MySQL databases copy together with all the other data? Thanks in advance!
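
    A rough sketch of pulling the whole system over SSH with rsync while skipping the pseudo-filesystems, plus a separate MySQL dump, since MySQL data files copied from a running server are not guaranteed to be consistent (hostname and local paths are placeholders):

        ssh root@vps.example.com "mysqldump --all-databases > /root/all-databases.sql"
        rsync -aAXHz --numeric-ids --exclude="/proc/*" --exclude="/sys/*" --exclude="/dev/*" \
            --exclude="/run/*" --exclude="/tmp/*" root@vps.example.com:/ /backup/vps-image/

    The resulting tree can then be packed into an image with a tool like genisoimage, although most cloud providers are happier with a tarball or a disk image than with an ISO.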

    Read the article

  • Connect to RDS inside a VPC using Opsworks located in another VPC

    - by Consuelo Merino
    I have an RDS instance (MySQL) inside a VPC called vpc-a (10.0.0.0/16). This instance is private; it can only be accessed from vpc-a. We created a stack on OpsWorks inside another VPC called vpc-b (10.1.0.0). We want to connect OpsWorks to the RDS instance, but it doesn't work; it refuses to connect. I tried adding said subnet to the RDS security group. I have also read a lot of documentation but haven't stumbled across the answer. Any help would be greatly appreciated.
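
    Since the two CIDR blocks do not overlap, one route (offered as a suggestion, with placeholder IDs and assuming vpc-b is 10.1.0.0/16) is a VPC peering connection plus a route on each side, after which 10.1.0.0/16 can be allowed in the RDS security group:

        aws ec2 create-vpc-peering-connection --vpc-id vpc-aaaa1111 --peer-vpc-id vpc-bbbb2222
        aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-12345678
        aws ec2 create-route --route-table-id rtb-aaaa1111 --destination-cidr-block 10.1.0.0/16 \
            --vpc-peering-connection-id pcx-12345678
        aws ec2 create-route --route-table-id rtb-bbbb2222 --destination-cidr-block 10.0.0.0/16 \
            --vpc-peering-connection-id pcx-12345678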

    Read the article

  • How can I use my local mysqldump through an SSH tunnel?

    - by Matthias Kleine
    I would like to dump a MySQL database with mysqldump, but the command isn't installed on the remote server. Is it possible to use my local mysqldump command and connect via an SSH tunnel to achieve this? I found several solutions, but each one required the mysqldump command directly on the server. When I use Sequel Pro (a Mac OS X app), I can perform an export via an SSH tunnel, but this is not the fastest solution and cannot be used on a Unix server...
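
    Yes, this generally works: forward a local port to the remote MySQL port over SSH, then point the local mysqldump at that forwarded port (user, host and database names below are placeholders):

        ssh -N -L 3307:127.0.0.1:3306 user@remote.example.com &
        mysqldump -h 127.0.0.1 -P 3307 --protocol=TCP -u dbuser -p mydatabase > mydatabase.sql

    The dump then runs with the local client while the data travels through the encrypted tunnel.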

    Read the article

  • How to document Linux server configuration?

    - by Margaret Thorpe
    Hi, I have about 20 Linux servers whose configuration I need to document. I do not mean the detailed configuration of services, but rather user accounts, databases, database accounts, IP addresses, physical location, SSH port, etc. I know all this data is stored in config files, but I want to centralize it all. I am considering just creating a spreadsheet to record this data, but was wondering if there is something better (perhaps a small PHP/MySQL app) which would be more structured and complete than a hacked-together spreadsheet. What do you use?

    Read the article

  • getaddrinfo: command not found

    - by jebbie
    I've installed a new Ubuntu 12.04 on an AWS EC2 instance and everything worked fine till now. I followed the instructions in this great tutorial: http://www.exratione.com/2012/05/a-mailserver-on-ubuntu-1204-postfix-dovecot-mysql/ Now I'm at the point of installing monit, and when I restart the service I get this error message: monit: Cannot translate '(none)' to FQDN name -- Name or service not known I started googling, and someone wrote that monit uses getaddrinfo in its startup process to determine the hostname. OK, so I thought I'd try out for myself what getaddrinfo delivers, and then I got: getaddrinfo: command not found I guess something is missing on my system. Can anyone help?
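
    Two notes, offered as a guess at the cause: getaddrinfo(3) is a C library function rather than a shell command, so "command not found" is expected; and the '(none)' in the monit error suggests the instance simply has no hostname set. Giving the machine a fully qualified name (the name below is a placeholder) usually clears it:

        sudo hostname mail.example.com
        echo "mail.example.com" | sudo tee /etc/hostname
        # and add a matching line to /etc/hosts, e.g.:  127.0.1.1  mail.example.com  mail
        sudo service monit restart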

    Read the article

  • Hosted application, DNS server setup?

    - by Ward Loockx
    Currently I'm allowing users to have a hosted application. At the moment they have to point A-records to our servers (sometimes this is too hard or gets messy). I've seen other players using 2 DNS servers, so that the user only needs to change these. I'm willing to implement this, but a lot of questions come up. What should I use for this? Can I use BIND? The records need to be generated from a MySQL database. What type of servers do I need? Does a DNS server take a lot of load? We currently have around 80K daily visitors. Thanks!
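
    BIND reads zone files, so serving records straight from MySQL would mean regenerating and reloading zones. PowerDNS with its generic MySQL backend queries the database directly, which fits this setup better; a minimal sketch of the relevant pdns.conf settings (credentials are placeholders):

        launch=gmysql
        gmysql-host=127.0.0.1
        gmysql-user=pdns
        gmysql-password=secret
        gmysql-dbname=pdns

    At roughly 80K visitors a day, two small VMs running PowerDNS as ns1/ns2 are plenty; authoritative DNS load is tiny compared with web traffic.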

    Read the article

  • Courier MYSQL_QUOTA_FIELD isn't working

    - by JoeCoder
    In /etc/courier/authmysqlrc, I have MYSQL_QUOTA_FIELD CONCAT(quota, 'S') But connecting to the account via RoundCube or Thunderbird with the Display Quota plugin shows an unlimited/unknown quota. In the quota field, I have entered 1000000, and this table/row is otherwise working perfectly for authentication. I enabled mysql logging and checked the query log for the query that courier is executing for auth. When I execute it myself it works fine and correctly returns "1000000s" for the quota. I'm using Ubuntu Server 12.04. Any ideas?

    Read the article

  • Virtual firewall to protect hypervisor

    - by manutenfruits
    I am running an Ubuntu Server 12.10 box as a single host behind a NATed router, connected via PPPoE to an optical fiber modem. This server is meant to be accessed from the Internet, but also to be used from the LAN as SVN, MySQL and whatnot... The issue is that the router is not configurable enough for this, so I was thinking about creating a virtual pfSense firewall using KVM inside the server itself, removing the need for the router. Is this possible? Can the host ignore and block all traffic coming to itself, but not the traffic for the firewall? I am aware this is not the most desirable environment; I accept suggestions based on budget!

    Read the article

  • Automatic Site Creation

    - by Eddy Freeman
    I have created a platform where I want users to sign up and have a new site created for them instantly. Users will register, and afterwards they will receive a username and password for their personal site. The system will work like e-commerce platforms such as www.shopify.com, www.bigcommerce.com, etc., where users sign up and a new web shop is created for them instantly. I have been searching for a while for how to create a script to automate this task but couldn't find any tutorials. I'm using LAMP (Linux, Apache, MySQL and PHP). Can someone guide me on how to write such a script, or point me to a tutorial, a book, or similar scripts that automate this task? Sorry if this is the wrong place for this question. Thanks for your help.
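
    A very rough shell sketch of the provisioning step such a signup handler could call (every name, path and credential here is hypothetical, and an Apache virtual-host layout with a wildcard *.example.com DNS record is assumed):

        #!/bin/bash
        SITE="$1"                                        # e.g. ./create_site.sh myshop
        DBPASS=$(openssl rand -hex 12)
        mkdir -p /var/www/"$SITE"
        cp -r /var/www/_template/. /var/www/"$SITE"/     # copy the stock shop code
        mysql -u root -p"$ROOT_DB_PASS" -e "CREATE DATABASE \`$SITE\`;
            CREATE USER '$SITE'@'localhost' IDENTIFIED BY '$DBPASS';
            GRANT ALL ON \`$SITE\`.* TO '$SITE'@'localhost';"
        printf '<VirtualHost *:80>\n  ServerName %s.example.com\n  DocumentRoot /var/www/%s\n</VirtualHost>\n' \
            "$SITE" "$SITE" > /etc/apache2/sites-available/"$SITE".conf
        a2ensite "$SITE".conf && service apache2 reload

    The PHP signup page would invoke something equivalent (or queue it for a worker) and then email the generated credentials to the user.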

    Read the article

  • Puppet apache module causing 'Error 400 on SERVER: Invalid parameter identifier'

    - by Andy Shinn
    I am receiving the following error when trying to use the latest puppetlabs-apache module from github (https://github.com/puppetlabs/puppetlabs-apache): Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Invalid parameter identifier at /etc/puppet/environments/apache_update/modules/apache/manifests/mod.pp:40 on node zordon.mydomain.com Warning: Not using cache on failed catalog Error: Could not retrieve catalog; skipping run My node config looks like: node 'zordon.mydomain.com' { include template::common include template::puppetagent include template::lamp User::Create sudo::conf { 'joe': priority = 60, content = 'joe ALL=(ALL) NOPASSWD: ALL', require = User::Create['joe'], } } The template::lamp class is what uses apache module: class template::lamp { include myfirewall Firewall Firewall class { 'apache': } class { 'apache::mod::php': } class { 'apache::mod::ssl': } class { 'mysql::server': } } It looks like serverfault markup is getting garbled on Puppet realize statements. The User::Create and Firewall lines are just realizing a user and 2 firewall rules. I have verified that the /var/lib/puppet/lib/puppet/type/a2mod.rb type has the identifier parameter and it is the same MD5 as the server. I am using Puppet 3.0.1 on both agent and master. Any idea what may cause this?

    Read the article

  • Which server is best for me?

    - by mathew
    I am looking for the best hosting service for my website. My website is a PHP/MySQL-driven site which does site scraping for more than 10 websites, around 8-10 API parsing jobs, about 150 MB of dat-file reading (from the local hard drive), plus one RSS parsing job, live graphs from other sites, a geo map from Google, the Map API from Google and so on, and one widget which shows real-time results for anyone who chooses it. So my question is: which is the best option for me? Actually I am considering the SoftLayer cloud, as they are pretty cheap and offer more facilities than Rackspace. Another option is a dedicated server which has 2 single-core processors, 4GB RAM and 250GB SATA II, with a 100 Mbps uplink. So please tell me which will be the best option? I heard that a dedicated server has more limitations than the cloud, but the cloud uses SAN for storage, so I am afraid the reading process for the database may be a bit slower... and their basic plan has only 1 GB of RAM.

    Read the article

  • Can I make the Courier email server use a non-default salt for passwords?

    - by Vasiliy Stavenko
    I'm setting up an email server for the first time and am confused by something strange. I have several user accounts which were stored on the previous server. Passwords for these accounts are in plain text, but I want to create crypts for them. MySQL (where my users will be stored) has the function ENCRYPT(passwd, salt); if no salt is given, a random value is used. I discovered that Courier uses one fixed salt and crypted all passwords with it, so the task is done. But I'd like to know if there's a way to define my own salt for my POP3 server?
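
    ENCRYPT() is just a wrapper around the system crypt(3), so any salt you pass is honoured; on a glibc system a salt of the form $1$xxxxxxxx$ even gives each user an individual random MD5-crypt salt. A sketch with hypothetical table and column names:

        UPDATE users
           SET crypt_pw = ENCRYPT(plain_pw, CONCAT('$1$', SUBSTRING(MD5(RAND()), 1, 8), '$'));

    Courier then needs MYSQL_CRYPT_PWFIELD in authmysqlrc pointed at that column.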

    Read the article

  • Open source system for swipe card access?

    - by Moduspwnens
    We're looking at replacing our campus-wide magnetic swipe card system with something more robust. The "programmer" side of me says there's got to be an open-source, scalable solution that already does this, but all I've been able to find are proprietary vendor-specific solutions. Ideally, it'd have the following: Based on some open standard that allows us to select from a wide selection of card readers (like IMAP or HTTP) Support for different kinds of card access (magnetic stripe, RFID, etc.) Future-proof (to the extent possible) The lack of information I'm finding leads me to believe I'm not searching for the right things... or such a solution doesn't exist. Is there not some basic, open-source solution to this (like MySQL for databases, or Moodle for an LMS, or Apache for a web server)?

    Read the article

  • Export and import a PostgreSQL database with a different name?

    - by J. Pablo Fernández
    Is there a way to export a PostgreSQL database and later import it with another name? I'm using PostgreSQL with Rails, and I often export the data from production, where the database is called blah_production, and import it on development or staging with the names blah_development and blah_staging. On MySQL this is trivial, as the export doesn't mention the database anywhere (except in a comment maybe), but on PostgreSQL it seems to be impossible. Is it? I've seen some people out there using sed scripts to modify the dump. I'd like to avoid that solution, but if there is no alternative I'll take it. Has anybody written a script to alter the dump's database name while ensuring no data is ever altered?
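
    It is possible: a plain pg_dump of one database contains no CREATE DATABASE statement (unless -C is used), so it restores cleanly into a pre-created database of any name. A sketch, with the production host and role as placeholders:

        pg_dump -h production-host -U deploy blah_production > blah.sql
        createdb -T template0 blah_development
        psql -d blah_development -f blah.sql

    The custom format works the same way: pg_dump -Fc blah_production > blah.dump followed by pg_restore -d blah_development blah.dump.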

    Read the article

  • Dynamic procmail filters

    - by WombaT
    I need procmail to place incoming mail into a specific folder depending on a set of rules. I know how I can accomplish this, but that means writing a static set of rules in a specific file. What I really need is to configure procmail to use rules stored in a MySQL database. How can I do this? I've read a bit about this, and one solution I found is to pipe the message to a PHP/Perl script and have it return the folder name in which to place the message, but I have no idea how to use a PHP script as a rule and then use its return value.
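
    One way to wire this up, offered as a sketch: procmail pipes the message into a backquoted command's standard input, so a recipe can ask an external script (which would do the MySQL lookup) for the folder and deliver to whatever it prints. The script path and Maildir layout are assumptions:

        DEST=`/usr/local/bin/folder_for_message.php`   # reads the message on stdin, prints a folder name
        :0
        $HOME/Maildir/.$DEST/

    The trailing slash tells procmail to deliver in maildir format; a fallback recipe after this one can catch the case where the script prints nothing.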

    Read the article
