Search Results

Search found 10966 results on 439 pages for 'kevin db'.

Page 275/439

  • What type of security problems are mitigated by this .NET architecture?

    - by Jonno
    Given the following physical layout for a .NET web application:

      DB (SQL Server, Windows) - no public route (no table access, only stored procs)
      Web Service DAL (IIS, Windows) - no public route (can be accessed by the web server via ports 80 and 443)
      Web Server (IIS, Windows) - public route (only via ports 80 and 443)

    What types/examples of attack could be used to compromise the public web server but would be blocked by the Web Service DAL? That is, can you think of concrete attack types that the DAL stops? Please note, I am interested only in the security aspect, not scaling / fault tolerance / performance / etc. In my mind, if the web server has been compromised using an attack over port 80/443, then the same attack would work over port 80/443 against the Web Service DAL box.

    Read the article

  • Postgres backup

    - by Abbass
    Hello, I have a Bacula script that does an automatic backup of a Postgres database. The script makes two backups of the database using pg_dump: the schema only and the data only.

      /usr/bin/pg_dump --format=c -s $dbname --file=$DUMPDIR/$dbname.schema.dump
      /usr/bin/pg_dump --format=c -a $dbname --file=$DUMPDIR/$dbname.data.dump

    The problem is that I can't figure out how to restore it with pg_restore. Do I need to create the database and the users first, then restore the schema and finally the data? I did the following:

      pg_restore --format=c -s -C -d template1 xxx.schema.dump
      pg_restore --format=c -a -d xxx xxx.data.dump

    The first restore creates the database with empty tables, but the second gives many errors like this one:

      pg_restore: [archiver (db)] COPY failed: ERROR: insert or update on table "Table1" violates foreign key constraint "fkf6977a478dd41734" DETAIL: Key (contentid)=(1474566) is not present in table "Table23".

    Any ideas?
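
    A minimal sketch of one possible fix, using the dumps from the question: a data-only restore replays COPY in an order that can violate foreign keys, and pg_restore's --disable-triggers flag (which requires superuser rights) suspends constraint enforcement during the load. Database names are the placeholders from the question.

      # Restore schema first, then load data with FK triggers disabled
      # so table load order no longer matters:
      pg_restore --format=c -s -C -d template1 xxx.schema.dump
      pg_restore --format=c -a --disable-triggers -d xxx xxx.data.dump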

    Read the article

  • Web server replica not working in other server

    - by user761076
    I have a Drupal installation (PHP + MySQL) on a server, and I'm trying to copy this installation to another server with the same configuration: same physical and virtual path, same db configuration, etc. The thing is, on my new server I get the homepage to work, but not the inner pages, so I guess it has something to do with rewriting (mod_rewrite is installed; both .htaccess files are the same). When I access http://localhost/myweb/content/mypage I get a 404, or a "Forbidden" if I uncomment this in httpd.conf (the original httpd.conf does not have this entry):

      <Directory "path/to/docs">
          DirectoryIndex index.php index.html
          Options Indexes FollowSymLinks
          AllowOverride None
          Order allow,deny
          Allow from all
      </Directory>

    Any clue? Thank you
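
    A sketch of one likely culprit, assuming the Directory block above is in effect: AllowOverride None stops Apache from reading Drupal's .htaccess, so its clean-URL rewrite rules never run and inner paths 404. The docroot path is a placeholder.

      # In httpd.conf, for the Drupal docroot, change:
      #   AllowOverride None  ->  AllowOverride All
      # then validate the config and reload Apache:
      apachectl configtest && apachectl graceful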

    Read the article

  • Installing Oracle 11gR2 on Red Hat Linux

    - by KItis
    I have a basic question about installing applications on a Linux operating system, which I'll illustrate with an Oracle DB installation as an example. When installing Oracle Database, I created a user group called dba and a user in this group called ora112, so this user is allowed to install the database. My question is: if ora112's umask is set to 077, then no other user will be able to use the Oracle installation. Why do we need to follow this practice? Is it an accepted procedure for application installation on Linux? Say I install a Java application this way; then no application belonging to a different user account would be able to use the Java runtime on this computer, because of this access restriction. Please share your experience with me. Thanks in advance for looking into this issue.
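
    A quick illustration of the effect in question, runnable as any unprivileged user: with umask 077, every file the installing account creates gets owner-only permissions, so other accounts (including ones running different applications) have no read or execute access to it.

      umask 077
      touch demo.txt
      ls -l demo.txt    # -rw------- : owner only; group and others see nothing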

    Read the article

  • TS connection lost but not local

    - by Owl
    I have an office that is connected to a shared DB server and Terminal Services server, which other offices use as well, through a VPN tunnel. For some reason, all workstations in the office will lose connectivity to the remote address but will remain connected to their local internet. I have checked both firewalls to ensure all settings match accordingly. They seem to go down at routine times every day, and it doesn't last longer than a minute or two. Any ideas what this may be?

      OS: Windows Server 2003 R2
      Terminal Services 5.2
      Symantec Gateway 320 & Symantec Firewall/VPN 100

    Read the article

  • phpMyAdmin Hangs on MySQL Error

    - by user75228
    I'm currently running phpMyAdmin 4.0.10 (the latest version supporting PHP 4.2.X) on my Amazon EC2 instance, connecting to a MySQL database on RDS. Everything works perfectly fine except actions that return a MySQL error message. Whenever I perform any kind of action that returns a MySQL error, phpMyAdmin hangs with the yellow "Loading" box forever without displaying anything. For example, if I run the following command in the MySQL CLI:

      select * from 123;

    it instantly returns the following error:

      ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '123' at line 1

    which is completely normal, because table 123 doesn't exist. However, if I execute the exact same command in the "SQL" box in phpMyAdmin, after I click "Go" it displays "Loading" and stops there forever. Has anyone ever encountered this kind of issue with phpMyAdmin? Is this a bug, or is something wrong with my config.inc.php? Any help would be much appreciated. I also noticed these error messages in my Apache error logs:

      /opt/apache/bin/httpd: symbol lookup error: /opt/php/lib/php/extensions/no-debug-non-zts-20060613/iconv.so: undefined symbol: libiconv_open
      /opt/apache/bin/httpd: symbol lookup error: /opt/php/lib/php/extensions/no-debug-non-zts-20060613/iconv.so: undefined symbol: libiconv_open
      /opt/apache/bin/httpd: symbol lookup error: /opt/php/lib/php/extensions/no-debug-non-zts-20060613/iconv.so: undefined symbol: libiconv_open

    Below are my config.inc.php settings:

      <?php
      /* vim: set expandtab sw=4 ts=4 sts=4: */
      /**
       * phpMyAdmin sample configuration, you can use it as base for
       * manual configuration. For easier setup you can use setup/
       *
       * All directives are explained in documentation in the doc/ folder
       * or at <http://docs.phpmyadmin.net/>.
       *
       * @package PhpMyAdmin
       */

      /*
       * This is needed for cookie based authentication to encrypt password in
       * cookie
       */
      $cfg['blowfish_secret'] = 'something_random'; /* YOU MUST FILL IN THIS FOR COOKIE AUTH! */

      /*
       * Servers configuration
       */
      $i = 0;

      /*
       * First server
       */
      $i++;
      /* Authentication type */
      $cfg['Servers'][$i]['auth_type'] = 'cookie';
      /* Server parameters */
      $cfg['Servers'][$i]['host'] = '*.rds.amazonaws.com';
      $cfg['Servers'][$i]['connect_type'] = 'tcp';
      $cfg['Servers'][$i]['compress'] = true;
      /* Select mysql if your server does not have mysqli */
      $cfg['Servers'][$i]['extension'] = 'mysqli';
      $cfg['Servers'][$i]['AllowNoPassword'] = false;
      $cfg['LoginCookieValidity'] = '3600';

      /*
       * phpMyAdmin configuration storage settings.
       */
      /* User used to manipulate with storage */
      $cfg['Servers'][$i]['controlhost'] = '*.rds.amazonaws.com';
      $cfg['Servers'][$i]['controluser'] = 'pma';
      $cfg['Servers'][$i]['controlpass'] = 'password';
      /* Storage database and tables */
      $cfg['Servers'][$i]['pmadb'] = 'phpmyadmin';
      $cfg['Servers'][$i]['bookmarktable'] = 'pma__bookmark';
      $cfg['Servers'][$i]['relation'] = 'pma__relation';
      $cfg['Servers'][$i]['table_info'] = 'pma__table_info';
      $cfg['Servers'][$i]['table_coords'] = 'pma__table_coords';
      $cfg['Servers'][$i]['pdf_pages'] = 'pma__pdf_pages';
      $cfg['Servers'][$i]['column_info'] = 'pma__column_info';
      $cfg['Servers'][$i]['history'] = 'pma__history';
      $cfg['Servers'][$i]['table_uiprefs'] = 'pma__table_uiprefs';
      $cfg['Servers'][$i]['tracking'] = 'pma__tracking';
      $cfg['Servers'][$i]['designer_coords'] = 'pma__designer_coords';
      $cfg['Servers'][$i]['userconfig'] = 'pma__userconfig';
      $cfg['Servers'][$i]['recent'] = 'pma__recent';
      /* Contrib / Swekey authentication */
      // $cfg['Servers'][$i]['auth_swekey_config'] = '/etc/swekey-pma.conf';

      /*
       * End of servers configuration
       */

      /*
       * Directories for saving/loading files from server
       */
      $cfg['UploadDir'] = '';
      $cfg['SaveDir'] = '';

      /**
       * Defines whether a user should be displayed a "show all (records)"
       * button in browse mode or not.
       * default = false
       */
      //$cfg['ShowAll'] = true;

      /**
       * Number of rows displayed when browsing a result set. If the result
       * set contains more rows, "Previous" and "Next".
       * default = 30
       */
      $cfg['MaxRows'] = 50;

      /**
       * disallow editing of binary fields
       * valid values are:
       *   false    allow editing
       *   'blob'   allow editing except for BLOB fields
       *   'noblob' disallow editing except for BLOB fields
       *   'all'    disallow editing
       * default = blob
       */
      //$cfg['ProtectBinary'] = 'false';

      /**
       * Default language to use, if not browser-defined or user-defined
       * (you find all languages in the locale folder)
       * uncomment the desired line:
       * default = 'en'
       */
      //$cfg['DefaultLang'] = 'en';
      //$cfg['DefaultLang'] = 'de';

      /**
       * default display direction (horizontal|vertical|horizontalflipped)
       */
      //$cfg['DefaultDisplay'] = 'vertical';

      /**
       * How many columns should be used for table display of a database?
       * (a value larger than 1 results in some information being hidden)
       * default = 1
       */
      //$cfg['PropertiesNumColumns'] = 2;

      /**
       * Set to true if you want DB-based query history. If false, this utilizes
       * JS-routines to display query history (lost by window close)
       *
       * This requires configuration storage enabled, see above.
       * default = false
       */
      //$cfg['QueryHistoryDB'] = true;

      /**
       * When using DB-based query history, how many entries should be kept?
       * default = 25
       */
      //$cfg['QueryHistoryMax'] = 100;

      /*
       * You can find more configuration options in the documentation
       * in the doc/ folder or at <http://docs.phpmyadmin.net/>.
       */
      ?>
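
    A hedged lead worth chasing, based on the iconv.so lines in the Apache log above: PHP's iconv extension appears to have been linked against GNU libiconv while httpd resolves the symbol elsewhere, and if phpMyAdmin touches iconv while rendering error messages, that would explain hangs only on errors. A sketch of one common workaround; the libiconv path is an assumption about this system:

      # Preload the GNU libiconv that iconv.so was built against before
      # starting Apache (adjust the path to wherever libiconv.so lives):
      LD_PRELOAD=/usr/local/lib/libiconv.so /opt/apache/bin/apachectl restart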

    Read the article

  • BIND DNS server on Solaris 10 with Windows XP clients

    - by stevecomptech
    Hi, I added this to the zone db file; I am running Solaris 10:

      _ldap._tcp.mydomain.com.                SRV 0 0 389 dc.mydomain.com.
      _kerberos._tcp.mydomain.com.            SRV 0 0 88  dc.mydomain.com.
      _ldap._tcp.dc._msdcs.mydomain.com.      SRV 0 0 389 dc.mydomain.com.
      _kerberos._tcp.dc._msdcs.mydomain.com.  SRV 0 0 88  host.mydomain.com.

    Now I get this error when I try to join Windows XP to the domain:

      The query was for the SRV record for _ldap._tcp.dc._msdcs.mydomain.com
      The following domain controllers were identified by the query: host.mydomain.com
      Common causes of this error include:
      - Host (A) records that map the name of the domain controller to its IP addresses are missing or contain incorrect addresses.
      - Domain controllers registered in DNS are not connected to the network or are not running.

    What do I need to change so that my Windows XP machine can join the domain?
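
    A sketch of how to check the records the XP client chases, assuming the zone above is served by the local BIND: the error text points at missing A records for the SRV targets, so verifying that dc.mydomain.com and host.mydomain.com actually resolve is the first step.

      # Query the BIND server directly for the SRV record and the A
      # records of its targets:
      dig @localhost SRV _ldap._tcp.dc._msdcs.mydomain.com
      dig @localhost A dc.mydomain.com
      dig @localhost A host.mydomain.com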

    Read the article

  • Cannot join computer to the domain... greyed out

    - by Logman
    I have an old Windows XP Pro SP3 computer I need to join to the domain. Simple, right? Not really. When I go to Control Panel - System - Computer Name and click on Change ("rename this computer"), everything is greyed out; I cannot switch it from a workgroup to a domain. I am logged on locally as an admin (the built-in account and one I created). I have checked local policy (gpedit.msc) on the computer, but it feels like a needle in a haystack. I could probably reload an image faster than trying to fix this, but I am curious, so I'm posting here to see if anyone knows a fix. I tried resetting the policy to defaults, but no luck:

      secedit /configure /cfg %windir%\repair\secsetup.inf /db secsetup.sdb /verbose

    EDIT:

    Read the article

  • Backing up data (including mysqldumps) to S3

    - by seengee
    We have a web app on a number of servers, and we want to add an additional layer of redundancy by backing up the key data to S3. The key data is the MySQL database and a folder containing dynamically created site assets - predominantly images. Some kind of rsync-based solution would initially seem the best plan. A couple of years ago we played with s3cmd (in particular s3cmd sync) with some success, but we didn't find it particularly reliable, although this may have changed since. It's occurred to me, though, that an rsync-style solution might not work particularly well with a single db.sql file created with mysqldump: I assume this means the whole database gets transferred each time, and with multiple databases of over 1GB this is going to add up to a lot of traffic (and $s) very quickly. With the image files I could simply transfer only the files modified within the last day, which would be far simpler. What approach should I look at?
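
    A minimal sketch of one possible shape for this, assuming s3cmd is already configured; the bucket and paths are placeholders. Dumping each database to its own compressed file keeps a change in one database from re-uploading the rest, and the asset sync only sends new or changed files.

      # Per-database dump, compressed, then uploaded:
      mysqldump --single-transaction mydb | gzip > /backup/mydb.sql.gz
      s3cmd put /backup/mydb.sql.gz s3://my-backup-bucket/sql/
      # Assets: sync compares and sends only what changed:
      s3cmd sync /var/www/assets/ s3://my-backup-bucket/assets/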

    Read the article

  • Sharing / replicating EBS across AWS nodes

    - by skrat
    I would like to use a single EBS volume across multiple EC2 nodes (web/app servers). I've read some articles on snapshot sharing, but that doesn't suit what we need. We use the filesystem for storing DB record attachments, so if one such attachment gets created, we need it to be immediately available to all nodes (to serve). So far only NFS seems viable, but it's a pain to configure and maintain. Another option could be storing those attachments on S3 instead, but that would cut us off from doing any analysis on that data. This must be quite a common problem when scaling in AWS; what solutions are there?

    Read the article

  • Restore Picasa people tags

    - by Paul
    I have loaded Windows 7 onto my laptop. Before doing this I backed up all my pictures using the Picasa backup utility. I then ran a restore on the clean Windows 7 install. I then installed Picasa 3.5, and none of the people tags showed up. I then went and deleted what I thought was the Picasa DB and tried running the restore again. Now each folder shows up twice in Picasa but only once under the Windows Pictures folder. How do I get rid of the duplicates in Picasa and get my people tags back?

    Read the article

  • Connect to MySQL through the command line without needing the root password

    - by ReynierPM
    I'm building a Bash script for some tasks. One of those tasks is creating a MySQL DB from within the same bash script. What I'm doing right now is creating two vars: one to store the user name and the other to store the password. This is the relevant part of my script:

      MYSQL_USER=root
      MYSQL_PASS=mypass_goes_here
      touch /tmp/$PROY.sql && echo "CREATE DATABASE $DB_NAME;" > /tmp/script.sql
      mysql --user=$MYSQL_USER --password="$MYSQL_PASS" < /tmp/script.sql
      rm -rf /tmp/script.sql

    But I always get an error saying access denied for user root with NO PASSWORD. What am I doing wrong? I need to do the same for PostgreSQL. Any help? Regards and thanks in advance
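
    A sketch of one common pattern, reusing the credentials from the script above: option files let the mysql and psql clients pick up credentials on their own, so the script never has to pass a password at all ("access denied ... with NO PASSWORD" usually means the client never received one, e.g. because the variable was empty when the script ran).

      # MySQL: credentials in ~/.my.cnf are read automatically:
      printf '[client]\nuser=root\npassword=mypass_goes_here\n' > ~/.my.cnf
      chmod 600 ~/.my.cnf
      mysql -e "CREATE DATABASE $DB_NAME;"

      # PostgreSQL: ~/.pgpass uses host:port:database:user:password:
      echo 'localhost:5432:*:postgres:mypass_goes_here' > ~/.pgpass
      chmod 600 ~/.pgpass
      psql -U postgres -h localhost -c "CREATE DATABASE $DB_NAME;"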

    Read the article

  • Can expire_logs_days be less than 1 day in MySQL?

    - by Scott
    So... yesterday I received an "after the fact" email about a campaign that has started for one of the services that I run. Now the DB server is getting hammered, hard, to the tune of about 300MB/min in binary logging for the replica. As you can imagine, this is chewing up space at a fairly tremendous rate. My normal 7-day expiry of binary logs just isn't cutting it. I've resorted to truncating the logs to just the last 4 hours (and I'm verifying that replication is up to date with mk-heartbeat):

      PURGE MASTER LOGS BEFORE DATE_SUB( NOW(), INTERVAL 4 HOUR);

    I'm just running that from cron every few hours to weather the storm, but it made me question the minimum value for expire_logs_days. I haven't come across a value that is less than 1, but that doesn't mean it isn't possible. http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#sysvar_expire_logs_days gives the type as numeric, but doesn't indicate whether it expects integers.
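
    For what it's worth, a sketch of the cron-based workaround as the stopgap, built on the purge statement above: expire_logs_days is commonly treated as a whole number of days in this MySQL era, so sub-day expiry generally has to be emulated (verify fractional values on a test instance before relying on them). Assumes the cron user has credentials available, e.g. via ~/.my.cnf.

      # crontab entry: every 2 hours, purge binlogs older than 4 hours
      0 */2 * * * mysql -e "PURGE MASTER LOGS BEFORE DATE_SUB(NOW(), INTERVAL 4 HOUR);"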

    Read the article

  • Domain name on a different server than the website

    - by med
    Hello! I'm not very familiar with hosting, but I have a special situation. I live in Tunisia, where I can buy a domain name with the .tn extension, but the problem is that:

      1. The domain cannot be pointed to a server outside Tunisia.
      2. All servers in Tunisia are bad; no one provides really reliable hosting.

    So I want to use the .tn domain with basic hosting in Tunisia, and make the db queries and serve the rich media from another remote server outside Tunisia. How do I do it? Are there better alternatives? All suggestions are welcome :) Thank you.

    Read the article

  • How to migrate a Lotus Notes mail database to an Exchange public folder?

    - by elsni
    I need to migrate a Lotus Notes mail database to Exchange. In the past, users received mail in their Notes mail accounts and sorted it manually by drag-and-drop into a specific folder structure in a separate Notes mail database. This should be mapped to Exchange. I considered using Outlook and a SharePoint library mapped to Outlook, but Outlook does not support drag-and-drop from the mail account to a standard document library, and the discussion libraries do not support folders. So I think the easiest way is to use an Exchange public folder instead of a SharePoint library, which should work as it did in Notes (correct me if I'm wrong). But how do I migrate the old Notes db to the public folder, including all subfolders? Thank you!

    Read the article

  • What could cause a file system to spontaneously unmount or become invalid for a short time?

    - by Ichorus
    We've got DB2 LUW running on a RHEL box. We had a crash of DB2, and IBM came back and said that a file DB2 was trying to access (through open64()) had unmounted or become invalid. We have done nothing but restart the database, and things seem to be running fine. Also, the file in question looks perfectly normal now:

      $ cd /db/log/TEAMS/tmsinst/NODE0000/TEAMS/T0000000/
      $ ls -l
      total 557604
      -rw------- 1 tmsinst tmsinst 570425344 Jan 14 10:24 C0000000.CAT
      $ file C0000000.CAT
      C0000000.CAT: data
      $ lsattr C0000000.CAT
      ------------- C0000000.CAT
      $ ls -l
      total 557604
      -rw------- 1 tmsinst tmsinst 570425344 Jan 14 10:24 C0000000.CAT

    With those facts in hand (please correct me if I am mis-interpreting the data at hand), what could cause a file system to 'spontaneously unmount or become invalid for a short time'? What should my next step be? This is on Dell hardware; we ran their diagnostic tools against it and everything came back clean.
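
    A sketch of a reasonable next step, assuming standard RHEL logging: kernel-level remounts and I/O errors usually leave traces, so searching the logs around the crash timestamp can separate a genuine filesystem event from a DB2-side failure.

      # Kernel ring buffer and syslog around the time of the crash:
      dmesg | grep -iE 'remount|read-only|i/o error'
      grep -iE 'remount|i/o error|ext3' /var/log/messages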

    Read the article

  • InnoDB memory usage in MySQL

    - by Tiddo
    I have a small VPS with only 256MB of RAM, with bursting up to 512MB. When I configure my VPS without InnoDB, it only uses 130MB of RAM, so that is no problem for me. But when I turn on InnoDB, memory usage grows to about 300-400MB. Is it possible to run InnoDB such that I won't exceed 256MB? Preferably I don't want to use more than 100MB for InnoDB. I have already come across some sites which said I could limit the memory usage, but if I limit it to only 100MB, will the db run well enough (compared to, for example, the MyISAM storage engine)? If 100MB is too little memory for InnoDB, can you recommend any other storage engine which supports transactions?
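
    A sketch of the usual knobs, with illustrative values chosen as assumptions to stay well under 100MB; the buffer pool dominates InnoDB's footprint, so whether this "runs well enough" depends entirely on how much of the working set fits in it.

      # As root, append to the server config and restart mysqld:
      printf '[mysqld]\ninnodb_buffer_pool_size = 64M\ninnodb_log_buffer_size = 2M\n' >> /etc/my.cnf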

    Read the article

  • Apache httpd + FreeTDS hangs until restarted

    - by Jordan Reiter
    Every so often requests to a Linux server (say, linux.example.org) where the web app (Django) pulls in data from a SQL Server database via FreeTDS will hang. Requests on other servers pointing to the database still work, as do requests on linux.example.org that use local MySQL databases. Only the server plus FreeTDS appear to be affected. Restarting httpd makes the database connections work correctly again. What could cause this problem? Using: Centos 5.9 freetds 0.91 Apache httpd 2.2.3 /etc/obdc.ini: [DSN] Description = SQL Server 2005 Driver = FreeTDS ;Database = dbname Servername = SERVERNAME ;TDS_Version = 8.0 /etc/freetds.conf: [SERVERNAME] driver = /usr/lib64/libtdsodbc.so host = db.example.org port = 1433 tds version = 8.0 client charset = UTF-8

    Read the article

  • Fixing Broken Groups

    - by themaestro
    Hey, I just got onto a new project with the student government at my university, and we're trying to get our webserver into a more workable state. The current problem is that, for some reason, all of us have sudo power on the server, but we can't write or create files anywhere on the server (as far as we can tell). Our groups are currently as follows:

      /srv/ice/db$ groups goshri sshamim rmenezes
      goshri : goshri
      sshamim : sshamim ptx
      rmenezes : rmenezes ptx
      daifotis : daifotis ptx

    We added a few of us to ptx because we thought that might give us write access, but it didn't. We have a bunch of webapps running on this server, but since it's a university, things change hands quickly. What can we do to give ourselves write access?
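
    A sketch of the usual fix, assuming the ptx group is meant to own the webapp tree under /srv/ice (the path comes from the shell prompt in the question): group membership added during a session only applies to new logins, and the files themselves must actually be group-writable.

      # Hand the tree to the ptx group and make it group-writable;
      # capital X adds execute only on directories:
      sudo chgrp -R ptx /srv/ice
      sudo chmod -R g+rwX /srv/ice
      # then log out and back in so the new ptx membership takes effect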

    Read the article

  • Advice on off-site backup of Hyper-V Failover Cluster

    - by Paul McCowat
    We are currently setting up a Server 2008 R2 box which will be off-site, over a leased line with VPN. At the main site are two Hyper-V hosts in a failover cluster with a PowerVault MD3000i iSCSI SAN. We are using BackupAssist for local backups; each host backs up itself and its guests nightly, creating a 500GB backup each, which is copied to a 2TB rotated NAS drive. Files and SQL DBs are also backed up / log shipped, etc. We're looking for the best way to back up the Hyper-V VMs and copy them off-site so that the OS images are at most a month old and the data at most a day old. The main backups are too large to transfer between backup windows, so the options discussed so far are:

      - Take rotating individual backups of the VMs each day and copy them over (Day 1 the SQL VM, Day 2 the Exchange VM, etc.); this would require more storage.
      - Look into Hyper-V snapshots; however, I don't believe these are supported in clustering.
      - Third-party replication tools.

    Read the article

  • Mass deploying Oracle patches with OEM across different OSes

    - by bobsmith12
    I have not been able to get this to work. We are running our OEM Grid Control database/OMS on Red Hat 5 (32-bit), but our databases are on Solaris x86-64. I could not mass deploy agents since the operating systems were different, and when I download patches, it is by OS. Is there a way to mass deploy to multiple operating systems? I have a lot of databases. I was given the Red Hat server for OEM because it was available. We have 10.1, 10.2, and 11.1 databases. The OEM DB is 10.2.0.5.

    Read the article

  • Rsync fails for files that start with underscore when destination is ZFS

    - by Eric
    Hi everyone. I'm using rsync 3.1.0pre1 on Mac OS X 10.8.5, and am trying to rsync one folder to another. The destination is a ZFS volume mounted via SMB. The problem I'm having is that files that start with an underscore (e.g., '_filename.jpg') are not being successfully synced to the destination. I get the following error message:

      rsync: mkstemp "/path/to/destination/._filename.jpg.NUgYJw" failed: Permission denied (13)

    In this case, '_filename.jpg' does not make it to the destination. I understand that rsync creates hidden, temporary files at the destination which are preceded with '.' and have a random file extension appended to the end. But the original filename starts with '_', not '.', and I haven't asked rsync to copy extended attributes / resource forks over (unless it always does). The rsync command I'm using is:

      rsync -avE --exclude='.DS_Store' --exclude '.Trash' --exclude 'Thumbs.db' --exclude '._*' --delete /source/ /destination/

    Has anyone found a way around this problem? Thank you!
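
    A sketch of one workaround worth testing, on the assumption that the server is rejecting the dot-prefixed temp names rsync generates for '_filename.jpg' (a Samba-style "veto files" rule matching '._*' is one common cause of exactly this Permission denied): --inplace skips the temporary file and writes the destination file directly.

      # Same transfer without mkstemp; note --inplace changes failure
      # semantics (an interrupted transfer leaves a partial file):
      rsync -avE --inplace --exclude='.DS_Store' --exclude '.Trash' \
            --exclude 'Thumbs.db' --exclude '._*' --delete /source/ /destination/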

    Read the article

  • MariaDB, Galera, xtrabackup - do I need the binary log?

    - by bernhardrusch
    We are using a MariaDB Galera Cluster with 3 nodes. For state transfer we are using xtrabackup. We have some problems with the binary logs: they got too big and crashed the server. We can remove them manually with the PURGE BINARY LOGS command; another way would be to set expire_logs_days so they expire on their own. I know that we could use xtrabackup to back up the DB and use the binlog to recover to a point in time. But do we really need the binary log for Galera to work?
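
    A sketch of the retention-based approach, with a 3-day window as an illustrative assumption (whether the binlog can be dropped entirely depends on whether point-in-time recovery or asynchronous replicas are needed; Galera itself replicates through its own write-sets):

      # Takes effect immediately, then persists across restarts:
      mysql -e "SET GLOBAL expire_logs_days = 3;"
      printf '[mysqld]\nexpire_logs_days = 3\n' >> /etc/my.cnf   # as root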

    Read the article

  • mysql 5.1 - innodb - query_cache_size - 9,418,108 queries have been removed from the query cache due to lack of memory

    - by Tom C
    Currently running on a 16GB system - Ubuntu 64-bit. The InnoDB buffer pool is set to 10GB. tuning-primer shows the following:

      QUERY CACHE
      Query cache is enabled
      Current query_cache_size = 512 M
      Current query_cache_used = 501 M
      Current query_cache_limit = 4 M
      Current Query cache Memory fill ratio = 97.87 %
      Current query_cache_min_res_unit = 4 K
      However, 9418108 queries have been removed from the query cache due to lack of memory
      Perhaps you should raise query_cache_size

    That is over 9 million queries removed. System uptime is 8 days. Should I remove the query cache altogether? Our db is always under heavy I/O. tia
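
    A sketch for testing that idea without a restart, assuming MySQL 5.1's dynamic variables (the query cache's single mutex often costs write-heavy workloads more than the cache saves):

      # Disable at runtime and watch throughput; make it permanent in
      # my.cnf (query_cache_size = 0) if performance improves:
      mysql -e "SET GLOBAL query_cache_type = OFF; SET GLOBAL query_cache_size = 0;"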

    Read the article

  • Mongo client UTF-8 support on RedHat EL5

    - by Michael Irey
      # mongo
      MongoDB shell version: 1.6.4
      Fri Mar 16 11:55:46 *** warning: spider monkey build without utf8 support. consider rebuilding with utf8 support
      connecting to: test

    The Mongo server seems to handle the UTF-8 characters fine, as does my php-mongo-client driver. But when I try to query a record that has a UTF-8 character from the mongo command-line client, I get:

      > db.Users.find({age:33});
      error:non ascii character detected
      Fri Mar 16 11:55:43 mongo got signal 11 (Segmentation fault), stack trace:
      Fri Mar 16 11:55:43 0x440b50 0x3664c302d0 0x3f47e7b6e0 0x3f47e83bbd 0x3f47e254f3 0x3f47e25660 0x3f47e256ee 0x3f47e25792 0x3f47e2876e 0x4b031d 0x443b72 0x445476 0x3664c1d994 0x43fd39
      mongo(_Z12quitAbruptlyi+0x3b0) [0x440b50]
      /lib64/libc.so.6 [0x3664c302d0]
      /usr/lib64/libjs.so.1 [0x3f47e7b6e0]
      /usr/lib64/libjs.so.1(js_CompileTokenStream+0x3d) [0x3f47e83bbd]
      /usr/lib64/libjs.so.1 [0x3f47e254f3]
      /usr/lib64/libjs.so.1(JS_CompileUCScriptForPrincipals+0x60) [0x3f47e25660]
      /usr/lib64/libjs.so.1(JS_EvaluateUCScriptForPrincipals+0x3e) [0x3f47e256ee]
      /usr/lib64/libjs.so.1(JS_EvaluateUCScript+0x22) [0x3f47e25792]
      /usr/lib64/libjs.so.1(JS_EvaluateScript+0x6e) [0x3f47e2876e]
      mongo(_ZN5mongo7SMScope4execERKSsS2_bbbi+0xed) [0x4b031d]
      mongo(_Z5_mainiPPc+0x14a2) [0x443b72]
      mongo(main+0x26) [0x445476]
      /lib64/libc.so.6(__libc_start_main+0xf4) [0x3664c1d994]
      mongo(__gxx_personality_v0+0x269) [0x43fd39]

    Any ideas or suggestions would be welcome.
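
    For what it's worth, a sketch of one way out, taking the shell's own warning at face value: the stack trace dies inside the system libjs.so.1, i.e. a SpiderMonkey built without UTF-8 support, so the distro-built shell segfaults on non-ASCII input. Replacing it with the vendor package, which bundles a UTF-8-enabled SpiderMonkey, is an assumed fix; the package names are from the 10gen repositories of that era and repo setup is not shown.

      # Remove the distro build and install the vendor package
      # (assumes the 10gen yum repository is already configured):
      sudo yum remove mongo
      sudo yum install mongo-10gen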

    Read the article
