Search Results

Search found 49554 results on 1983 pages for 'database users'.

Page 683/1983 | < Previous Page | 679 680 681 682 683 684 685 686 687 688 689 690  | Next Page >

  • Utility Objects Series Introduction (but mostly a bit of an update)

    - by drsql
    So, I have been away from blogging about technical stuff for a long time (I haven’t blogged at all since my resolutions blog, and even my Simple Talk “commentary” blog hasn’t had an entry since December!). Most of this has been due to finishing up my database design book, which I will blog about at least one more time after it ships next month, but now it is time to get back to it, and certainly a bit more regularly. For SQL Rally, I have two sessions, a precon on Database Design,...(read more)

    Read the article

  • Updating the $PATH for running a command through SSH with LDAP user account

    - by Guillaume Bodi
    Hi all, I am setting up a Mac OS X 10.6 server to host Git repositories, so we need to push commits to the server through SSH. The server has only an admin account and takes its user list from an LDAP server. Since access happens through a non-interactive shell, git operations cannot complete because the git executables are not in the default path. As the users are network users, they do not have a local home folder, so a ~/.bashrc-style solution is out. I browsed several articles here and there but could not get this working in a nice, clean setup. Here is the info on the methods I have gathered so far: I could update the default PATH environment to include the git executables folder; however, I could not manage to do this successfully. Updating /etc/paths didn't change anything, and since it's not an interactive shell, /etc/profile and /etc/bashrc are ignored. From the ssh manpage, I read that a BASH_ENV variable can be set to have an optional script executed, but I cannot figure out how to set it system-wide on the server. If it needs to be set up on the client machine, this is not an acceptable solution. If someone has some info on how it is supposed to be done, please, by all means! I can fix this problem by creating a .bashrc with the PATH correction in the system root (since all network users start there, as they have no home), but it just feels wrong. Additionally, if we do create a home folder for a user, the git command would fail again. I could install a third-party application to set up hooks on login and then run a script that creates a home directory with the necessary path corrections; this smells like a backyard-tinkering, duct-tape solution. Finally, I can install a small script on the server and ForceCommand sshd to this script on login. The script then looks for a command to execute ($SSH_ORIGINAL_COMMAND) and triggers a login shell to run this command, or just starts a regular login shell for an interactive session. The full details of this method can be found here: http://marc.info/?l=git&m=121378876831164 The last one is the best method I found so far. Any suggestions on how to deal with this properly?
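    A hedged sketch of that last ForceCommand approach, in case it helps frame the question; the wrapper path, group name and git install location are assumptions, not part of the original setup:

        # /etc/sshd_config (server side)
        Match Group gitusers
            ForceCommand /usr/local/bin/ssh-git-wrapper

        # /usr/local/bin/ssh-git-wrapper
        #!/bin/bash
        # Prepend the (assumed) git location so non-interactive commands such as
        # git-receive-pack resolve even though /etc/profile is never read.
        export PATH="/usr/local/git/bin:$PATH"
        if [ -n "$SSH_ORIGINAL_COMMAND" ]; then
            # Non-interactive call: run whatever the client asked for in a login shell.
            exec /bin/bash -l -c "$SSH_ORIGINAL_COMMAND"
        else
            # Interactive session: fall back to a normal login shell.
            exec /bin/bash -l
        fi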

    Read the article

  • Send as another user / email address in Exchange 2010

    - by adamo
    In my setup there exist user1, ..., [email protected]. Mail for example.com is handled by Exchange 2010 and all the users use Outlook 2010. There is also a standard distribution list named [email protected]. Is it possible for some of the users to send email with [email protected] as the sender address? Can the sender's GECOS be different too when this happens, so that the recipient sees "Offices of Example.com" instead of "User Name X"? Sometimes the secretaries need to send stuff as "the office" and not as themselves ...
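    For what it's worth, this kind of thing is usually handled by granting Send-As on the distribution group from the Exchange Management Shell; a hedged sketch only, with the group and account names being placeholders loosely based on the question:

        # Grant user1 the right to pick the group's address in Outlook's From field;
        # recipients then see the group's display name rather than the user's.
        Add-ADPermission -Identity "office" -User "EXAMPLE\user1" -ExtendedRights "Send As"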

    Read the article

  • Apache2 memory usage when uploading large files

    - by abhaga
    Hi, I am running Apache 2.2.12 along with PHP 5.2.10. PHP is configured to run as a separate process through fcgid. The problem is that when users upload a file, the size of the Apache process swells by almost the same amount. So if somebody uploads a 200 MB file, one of the child processes swells to its current size + 200 MB. If two users start uploading simultaneously, my server crashes. It is the virtual memory size that increases, but since I am on an OpenVZ-based VPS, that is what counts. My questions are: Is this normal Apache behavior, or can I do something to fix it? If not, is there a more memory-efficient way of handling big file uploads? Going by the current behavior, I will need 1 GB of free RAM for every Apache child accepting an upload. Thanks! Abhaya -

    Read the article

  • Ubuntu Server 12.04 CPU Load

    - by zertux
    I have a server (2x Hexa-Core Xeon E5649 2.53GHz w/HT with 32GB RAM and 20000 GB bandwidth) running Ubuntu Server 12.04 LTS. The server runs LAMP and serves one website only; the estimated number of simultaneous users is ~ 15,000. At the moment I have around 2000 users online, and each of them runs about 50 MySQL queries (small ones, mostly SELECT and INSERT) between the beginning and the end of their session. The server CPU load is high at this number of connections, while RAM usage is barely 1GB out of 32GB. It's worth mentioning that the server was running very fast with no problems at all, but I am concerned about the load average. http://s12.postimage.org/z7hi6mz3h/photo.png

        top - 03:02:43 up 9 min,  2 users,  load average: 50.83, 30.14, 12.83
        Tasks: 432 total,   1 running, 430 sleeping,   0 stopped,   1 zombie
        Cpu(s):  0.1%us,  0.2%sy,  0.0%ni, 66.5%id, 33.1%wa,  0.0%hi,  0.0%si,  0.0%st
        Mem:  32939992k total,  3111604k used, 29828388k free,    84108k buffers
        Swap:  2048280k total,        0k used,  2048280k free,  1621640k cached

          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
         2860 root      20   0 25820 2288 1420 S    3  0.0   0:11.18 htop
         1182 root      20   0     0    0    0 D    2  0.0   0:01.46 kjournald
         1935 mysql     20   0 12.3g 161m 7924 S    1  0.5 102:31.45 mysqld
           11 root      20   0     0    0    0 S    0  0.0   0:00.38 kworker/0:1
         1822 www-data  20   0  247m  25m 4188 D    0  0.1   0:01.81 apache2
         2920 www-data  20   0     0    0    0 Z    0  0.0   0:01.20 apache2 <defunct>
         2942 www-data  20   0  247m  23m 3056 D    0  0.1   0:00.20 apache2
         3516 www-data  20   0  247m  23m 3028 D    0  0.1   0:00.06 apache2
         3521 www-data  20   0  247m  23m 3020 D    0  0.1   0:00.09 apache2
         3664 www-data  20   0  247m  23m 3132 D    0  0.1   0:00.09 apache2
         3674 www-data  20   0  247m  23m 3252 D    0  0.1   0:00.06 apache2
         3713 www-data  20   0  247m  23m 3040 D    0  0.1   0:00.09 apache2
            1 root      20   0 24328 2284 1344 S    0  0.0   0:03.09 init
            2 root      20   0     0    0    0 S    0  0.0   0:00.00 kthreadd
            3 root      20   0     0    0    0 S    0  0.0   0:00.01 ksoftirqd/0
            6 root      RT   0     0    0    0 S    0  0.0   0:00.00 migration/0
            7 root      RT   0     0    0    0 S    0  0.0   0:00.00 watchdog/0
            8 root      RT   0     0    0    0 S    0  0.0   0:00.00 migration/1
            9 root      20   0     0    0    0 S    0  0.0   0:00.00 kworker/1:0

        root@server:~/codes# vmstat 1
        procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
         r  b   swpd     free   buff   cache   si   so    bi    bo    in    cs us sy id wa
        19  0      0 29684012  86112 1689844    0    0    19   590   254   231 48  0 47  5
        23  0      0 29704812  86128 1697672    0    0     4   320 11100  8121 77  1 22  0
        33  0      0 29671044  86156 1705308    0    0     0  5440 13190  9140 95  1  4  0
        33  3      0 29670088  86160 1706288    0    0     0 32932 12275  7297 99  0  1  0
        35  0      0 29693456  86188 1710724    0    0     4   676 12701  7867 98  1  1  0
        ^C

    I have not changed any of the default configurations that come with Ubuntu. Is this load normal for such a powerful server? Is there any optimization I can make to Apache/MySQL to minimize the load? What do you recommend?
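    Since MySQL appears to be running on its stock configuration, one place people usually start is my.cnf; a hedged sketch only, with values that are guesses to be tuned against the 32 GB of mostly idle RAM rather than a prescription for this particular workload:

        [mysqld]
        # Let InnoDB cache the working set in RAM instead of hitting disk (note the 33% iowait above).
        innodb_buffer_pool_size        = 16G
        # Fewer fsyncs per commit; trades up to about a second of durability on a crash.
        innodb_flush_log_at_trx_commit = 2
        # Larger redo logs smooth write bursts; on MySQL 5.5 the old ib_logfile* files must be
        # removed after a clean shutdown before changing this value.
        innodb_log_file_size           = 512M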

    Read the article

  • SQLite with two python processes accessing it: one reading, one writing

    - by BBnyc
    I'm developing a small system with two components: one polls data from an internet resource and translates it into SQL data to persist it locally; the second one reads that SQL data from the local instance and serves it via JSON and a RESTful API. I was originally planning to persist the data with PostgreSQL, but because the application will have a very low volume of data to store and traffic to serve, I thought that was overkill. Is SQLite up to the job? I love the idea of the small footprint and no need to maintain yet another SQL server for this one task, but am concerned about concurrency. It seems that with write-ahead logging enabled, concurrently reading and writing a SQLite database can happen without locking either process out of the database. Can a single SQLite instance sustain two concurrent processes accessing it, if only one reads and the other writes? I started writing the code but was wondering if this is a misapplication of SQLite.
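    In case a concrete shape helps, a minimal Python sketch of the arrangement being described, with WAL enabled and each process holding its own connection (the file path and schema are invented for illustration):

        import sqlite3

        # Writer process: the poller persisting fetched data.
        w = sqlite3.connect("/var/data/feed.db", timeout=10)
        w.execute("PRAGMA journal_mode=WAL")  # readers no longer block the writer
        w.execute("CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, payload TEXT)")
        w.execute("INSERT INTO items (payload) VALUES (?)", ("polled data",))
        w.commit()

        # Reader process: the REST API opens its own connection to the same file.
        r = sqlite3.connect("/var/data/feed.db", timeout=10)
        for row in r.execute("SELECT id, payload FROM items"):
            print(row)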

    Read the article

  • Using Sql Server Change Data Capture with a frequently changing schema

    - by Pete
    We are looking into enabling SQL Server Change Data Capture for a new subsystem we are building. It's not really because we need it, but we are being pushed for complete history traceability, and CDC would nicely solve this requirement with minimal effort on our part. We are following an agile development process, which in this case means that we frequently make changes to the database schema, e.g. adding new columns, moving data to other columns, etc. We did a small test where we created a table, enabled CDC for that table, and then added a new column to the table. Changes to the new column are not registered in the CDC table. Is there a mechanism to update the CDC table to the new schema, and are there any best practices for how to deal with captured data when migrating the database schema?
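    For context, the pattern usually suggested for this (not necessarily the only one) is to stand up a second capture instance against the changed table and retire the old one once consumers have moved over; a hedged T-SQL sketch with illustrative object names:

        -- CDC does not pick up columns added after the fact, but a table can carry
        -- two capture instances side by side while the switch-over happens.
        EXEC sys.sp_cdc_enable_table
            @source_schema    = N'dbo',
            @source_name      = N'Orders',
            @role_name        = NULL,
            @capture_instance = N'dbo_Orders_v2';  -- sees the new column list

        -- After the readers have switched to dbo_Orders_v2:
        EXEC sys.sp_cdc_disable_table
            @source_schema    = N'dbo',
            @source_name      = N'Orders',
            @capture_instance = N'dbo_Orders_v1';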

    Read the article

  • What equipment do real ISP's use?

    - by Allanrbo
    In a dormitory of 550 residents, people often set up DHCP servers for the whole network by mistake, by plugging in their private Wi-Fi routers the wrong way. Recently, someone also misconfigured their PC with a static IP identical to that of the default gateway. We use cheap 3Com switches at the moment. I know that Cisco switches support DHCP snooping to solve the DHCP problem, but that still does not solve the default-gateway IP takeover problem. What sort of switch equipment do real ISPs use so their customers cannot break the network for the other customers? What we ended up doing: in case anyone is curious, we ended up creating separate VLANs for each user. And as a matter of fact, not just for the 550 users, but for 2500 users (11 dorms). Here's a page describing the setup: http://k-net.dk/technicalsetup/ (the section "Transparent firewall using VLANs"). There was no significant load on the router server, contrary to what I feared in one of the comments below, even at 800 Mbps.
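    Regarding the switch-feature side of the question, the combination usually cited on managed access switches is DHCP snooping plus dynamic ARP inspection and IP source guard; a hedged Cisco IOS sketch, with the VLAN number and port ranges being placeholders:

        ip dhcp snooping
        ip dhcp snooping vlan 10
        ip arp inspection vlan 10            ! drops ARP replies claiming someone else's IP (e.g. the gateway)
        !
        interface range GigabitEthernet0/1 - 48
         ip verify source                    ! IP source guard: only the DHCP-assigned address may transmit
        !
        interface GigabitEthernet0/49
         description uplink towards the real DHCP server / gateway
         ip dhcp snooping trust
         ip arp inspection trust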

    Read the article

  • Apache suEXEC execute script on user dir basis?

    - by Blame
    I am looking for a way to run a script as different users. I don't want to hardcode the users in the config... and I found some information suggesting it should be possible for the user to go to, let's say: http://localhost/~user1/myscript.cgi and have the script executed as user 'user1'. Does anybody know if that is possible? If not, do I have to make a new vhost config for every user? Thanks a lot! Greets, Kodak
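    For reference, a hedged sketch of how this is typically wired up with mod_userdir: suEXEC switches to the directory owner for ~user CGI requests, provided the suexec binary was built with userdir support (check the output of suexec -V for AP_USERDIR_SUFFIX). Directives use Apache 2.2 syntax and the paths are assumptions:

        UserDir public_html
        <Directory /home/*/public_html>
            Options +ExecCGI
            AddHandler cgi-script .cgi
            Order allow,deny
            Allow from all
        </Directory>

    With that in place, a request for http://localhost/~user1/myscript.cgi runs the CGI as user1 without a per-user vhost.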

    Read the article

  • Conflict between Change Control and ASL Mapping

    - by Jie Chen
    Yesterday I got a strange report that on Agile 9.3.1.2, adding a Supplier to an Item's Suppliers tab always removes all the data from Item.PageTwo.MultiList01, a field which is assigned to a User Group list. The detailed problem description is as follows. In JavaClient, the MultiList01 attribute on the Parts class's PageTwo tab is enabled and assigned the User Group list. In WebClient, the user created a new Part, assigned MultiList01 two User Groups ("Global User Group Test1" and "Personal Group_Test1"), then went to the Suppliers tab and added three Suppliers. Switching back to the Part's TitleBlock, MultiList01 has lost the User Group data. To confirm whether MultiList01 really loses the data or saves other, wrong data, I checked the database and found that MultiList01 had saved the wrong data ",7976911,7976907,7976959,", which is exactly the IDs of those three Suppliers. So I suspected the Supplier attribute on the Suppliers tab must be mapped to MultiList01. However, when I checked Supplier in JavaClient, "ASL mapped to" was blank. More interestingly, the database clearly shows the Supplier attribute (Base ID = 2000004219) is mapped to 2090, which is the Base ID of PageTwo.MultiList01. At this point we can conclude that Supplier data really is mapped to MultiList01, even though we assigned MultiList01 to a User Group list and Supplier shows no "ASL mapped to" value. There must be another function that overrides the "ASL mapped to" visibility in JavaClient with higher priority: the "Change Controlled" function. We immediately see "ASL mapped to" with the value "MultiList01" once we disable Change Controlled for MultiList01. If an attribute is Change Controlled, Supplier data should not be mappable to it in theory, because Supplier can be modified dynamically by users, not by Changes. In this situation on Agile 9.3.1.2, it could be a code defect. We can imagine the scenario the customer hit: he set up Parts.PageTwo.MultiList01 assigned to the Supplier list, then set "ASL mapped to" on the Parts.Suppliers.Supplier attribute to "MultiList01". Later the company's business changed, so he made MultiList01 Change Controlled and re-assigned it to a User Group list, forgetting to remove "ASL mapped to" before modifying MultiList01. The solution depends on the actual business need. If Supplier still needs to be mapped to a Parts.PageTwo attribute, change "ASL mapped to" to another attribute that is assigned the Suppliers list. If the "ASL mapped to" function is not needed, delete the mapping data at the database level; it cannot be done from the JavaClient UI:

        delete propertytable where id in
          (select p.id from propertytable p, nodetable n
            where p.parentid = n.id and n.inherit = 2000004219 and propertyid = 794)

    Read the article

  • More Collateral to drive Remarketer Activity

    - by martin.morganti(at)oracle.com
    Over the last few months we have added a range of marketing materials to the Remarketer Level pages, including an extensive set of Marketing Kits to support the Solution Kits previously available. As part of the continuing program, we have just added some additional training for Remarketers at no cost. In addition to the Oracle 1-Click Technology Products Guided Learning Path, which explains the program and how to position the products, we have added the Oracle Database 11g 1-Click Technology Sales Guided Learning Path. This learning path gives Remarketers access to more detailed training on the Oracle Database 11g and helps them develop their understanding of the opportunities they could be benefiting from today by reselling the 1-Click products. This is in direct response to requests to make more training available to Remarketers, so take the opportunity to let your Remarketer customers and prospects know about this additional training and how they can jump start a resell business with Oracle today with No Fees, No Barriers and No Excuses.

    Read the article

  • Recover backup copy of a ubuntu linux installation on a usb stick using dd

    - by user10826
    Hi, I installed Ubuntu 10.04 on a USB stick in persistent install mode, so that I could boot either my laptop or my desktop computer from the stick. At some point I needed the 8GB stick for another purpose, so I thought about copying it to my desktop, which I did from Mac OS X with:

        dd if=/dev/disks3s of=/Users/jack/Desktop/usb_copy

    Now I am trying to do the opposite. After having used the stick, which was formatted to NTFS in the meantime, I am just doing

        dd if=/Users/jack/Desktop/usb_copy of=/dev/disks3s

    but although I can see that almost all of the files are there, I cannot boot from it again. It is also strange that the file permissions look odd, something like _user. What can I do? Thanks
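    Not an answer, but a hedged sketch of the whole-device variant people usually reach for here, since imaging a single slice leaves out the partition table and boot code; the disk number is an example and must be checked with diskutil list first:

        diskutil list                                              # identify the stick, e.g. /dev/disk3
        sudo dd if=/dev/rdisk3 of=~/Desktop/usb_copy.img bs=1m     # back up the whole device, not one slice
        diskutil unmountDisk /dev/disk3                            # OS X keeps the volumes mounted otherwise
        sudo dd if=~/Desktop/usb_copy.img of=/dev/rdisk3 bs=1m     # restore later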

    Read the article

  • Why is my vhosts file interfering with my apache deployment?

    - by Avery Chan
    When I enable my vhosts file (i.e. uncomment this line: Include /private/etc/apache2/extra/httpd-vhosts.conf) I am unable to reach localhost. I /am/ able to reach the last virtual host listed in my vhosts file:

        <VirtualHost *:80>
            DocumentRoot "/Users/achan/Sites/epwbst"
            ServerName epwbst
        </VirtualHost>

        <VirtualHost *:80>
            DocumentRoot "/Users/achan/Sites/pxproj"
            ServerName pxproj
        </VirtualHost>

    Typing pxproj in my browser brings up the expected web content, but I am unable to reach epwbst or localhost. If I re-comment the vhost line in my httpd.conf, I am able to reach localhost (i.e. "It works!") but obviously am unable to reach my virtual hosts. I don't know how to continue troubleshooting this. Why can't I reach localhost when I've got my vhosts turned on? OS: Mac OS X 10.7, Server version: Apache/2.2.21 (Unix)
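    One detail that may help frame the question: once httpd-vhosts.conf is included, every request on port 80 is answered by one of the name-based VirtualHosts, so localhost no longer falls through to the main server's DocumentRoot. A hedged sketch of an extra vhost that keeps localhost working (the DocumentRoot shown is the stock OS X location and may differ on this machine):

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName localhost
            DocumentRoot "/Library/WebServer/Documents"
        </VirtualHost>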

    Read the article

  • Coldfusion 8 Application Crashes Under Heavy Load

    - by KM01
    Hello, we have a CF8 app that runs for 20-25 minutes before crashing under heavy load (~1200 users). This load is generated by our load-testing tool: 1200 users ramped up in 5 minutes (the approximate behavior of our users), running for an hour. We have this app on Solaris 10, Apache 2, JRun 4 and Oracle 10g; the Java version is 1.6. During the initial load tests, the thread dumps pointed to monitor deadlocks around sessions:

        "jrpp-173": waiting to lock monitor 0x019fdc60 (object 0x6b893530, a java.util.Hashtable),
          which is held by "scheduler-1"
        "scheduler-1": waiting to lock monitor 0x026c3ce0 (object 0x6abe2f20,
          a coldfusion.monitor.memory.SessionMemoryMonitor$TopMemoryUsedSessions),
          which is held by "jrpp-167"
        "jrpp-167": waiting to lock monitor 0x019fdc60 (object 0x6b893530, a java.util.Hashtable),
          which is held by "scheduler-1"

    We increased the number of sessions relative to the number of CPUs (48 simultaneous threads against 32 CPUs), and the deadlock went away. While varying the simultaneous threads helped a little in terms of response time, the CF server still tanked in 20-25 minutes during all of these tests. We ran more thread dumps and saw a thread locking a monitor, e.g.:

        "jrpp-475" prio=3 tid=0x02230800 nid=0x2c5 runnable [0x4397d000]
          java.lang.Thread.State: RUNNABLE
          at java.util.HashMap.getEntry(HashMap.java:347)
          at java.util.HashMap.containsKey(HashMap.java:335)
          at java.util.HashSet.contains(HashSet.java:184)
          at coldfusion.monitor.memory.MemoryTracker.onAddObject(MemoryTracker.java:124)
          at coldfusion.monitor.memory.MemoryTrackerProxy.onReplaceValue(MemoryTrackerProxy.java:598)
          at coldfusion.monitor.memory.MemoryTrackerProxy.onPut(MemoryTrackerProxy.java:510)
          at coldfusion.util.CaseInsensitiveMap.put(CaseInsensitiveMap.java:250)
          at coldfusion.util.FastHashtable.put(FastHashtable.java:43)
          - locked <0x6f7e1a78> (a coldfusion.runtime.Struct)
          at coldfusion.runtime.CfJspPage._arrayset(CfJspPage.java:1027)
          at coldfusion.runtime.CfJspPage._arraySetAt(CfJspPage.java:2117)
          at cfvalidation2ecfc1052964961$funcSETUSERAUDITDATA.runFunction(/app/docs/apply/cfcs/validation.cfc:377)

    As you can see in the last line above, there were several references to CFMs and CFCs, and those lines contain "cflock" tags which were scoped to the "application". We (the dev team) then changed them to be scoped to a "name". After more load tests, there is no locking going on and there are no deadlocks, but now the application tanks in 7-10 minutes. We've gotten system, network and DB reports from the respective admins, and they are not being taxed; we even watched the server stats with server monitor, top, prstat, ran sar reports, etc. So we believe it is an issue with the CF server or maybe the JVM. I am running out of ideas as to what else we can try. Disclaimer: I am not a CF developer or admin. I am just running the load test, analyzing the reports, threads etc., sharing the results with the dev and admin teams, trying the next change, and so on. So far no dice. Has anyone run into something similar? How did you go about diagnosing and troubleshooting? All thoughts and pointers welcome. Thank you for your time! KM

    Read the article

  • How to increase max FD limit for a daemon process running under a headless user?

    - by Ameliorator
    To increase the FD limit for a daemon process running under a headless user on an Ubuntu Linux machine, we made the following changes in /etc/security/limits.conf:

        soft nofile 10000
        hard nofile 10000

    We also added "session required pam_limits.so" in /etc/pam.d/login. The changes took effect for all the users who logged out and logged in again, and any new processes started by those users get the new FD limits. But for the daemon running under a headless user, the changes are not reflected. How can the new limits be applied to a daemon running under a headless user?
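    For background, pam_limits is only consulted on a PAM login, which a boot-time daemon never goes through; the usual workaround is to raise the limit in the script that launches the daemon, before privileges are dropped. A hedged sketch with made-up paths and user name:

        #!/bin/sh
        # excerpt from the daemon's init script, running as root
        ulimit -n 10000    # inherited by the child; not reset by a plain setuid
        start-stop-daemon --start --chuid headlessuser --exec /opt/myapp/bin/mydaemon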

    Read the article

  • Using NOPASSWD for specific commands in sudoers file, PASSWD for all others

    - by jberryman
    I would like to configure sudo so that users can run some specific commands without entering a password (for convenience) and can run all other commands by entering a password. This is what I have, but it does not work; a password is always required:

        Defaults        env_reset
        Defaults        timestamp_timeout = 1

        root    ALL=(ALL:ALL) ALL

        # Allow members of group sudo to execute any command
        %sudo   ALL=(ALL:ALL) NOPASSWD: /usr/sbin/pm-suspend, /usr/bin/apt-get, PASSWD: ALL

        #includedir /etc/sudoers.d

    Note that this is a Debian system which uses the approach of adding users to the "sudo" group. Thanks.
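    A possibly relevant detail: sudoers applies the last matching entry, so with PASSWD: ALL at the end of the line the catch-all wins for every command. A hedged sketch of the reordering that is usually suggested, using the same commands as above:

        # General rule first, password-free exceptions last so they are the final match.
        %sudo   ALL=(ALL:ALL) ALL
        %sudo   ALL=(ALL:ALL) NOPASSWD: /usr/sbin/pm-suspend, /usr/bin/apt-get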

    Read the article

  • Interpreting Munin graphs showing available entropy and MySQL slow queries in sync

    - by user64204
    We're experiencing performance issues on our website, and after reviewing our Munin graphs, the only metrics we've found in sync are available entropy and MySQL slow queries, with the latter influenced by our number of logged-in users. Based on the Wikipedia entropy page, my understanding is that entropy is the amount of randomness (here measured in bytes) that the system can use for various tasks, mainly cryptography and functions that require random input. The peaks in available entropy and MySQL slow queries occur in sync and at regular intervals; the number of slow queries is proportional to our number of Drupal users, whereas the entropy peaks seem much more constant and less proportional to those two metrics. We're therefore thinking that available entropy reflects a root cause which, combined with the traffic to our website, is causing those slow queries (and not the opposite, slow queries influencing the entropy). Accordingly: Q: What underlying problem do you think could cause regular peaks in available entropy that could have an influence on MySQL's ability to process queries?
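    Incidental, but for anyone trying to reproduce the graph locally, the kernel exposes the same pool Munin samples; a quick hedged sketch (Debian/Ubuntu package name assumed):

        # watch the pool the graph is measuring
        watch -n1 cat /proc/sys/kernel/random/entropy_avail

        # if it is being drained, a userspace entropy feeder is the usual mitigation
        sudo apt-get install haveged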

    Read the article

  • Write to windows share

    - by aidan
    I used to mount a Windows share in Ubuntu Server with an entry in fstab:

        //data/SharedFolder /media/SharedFolder/ smbfs user,defaults,credentials=/root/.creds,uid=root,gid=root 0 0

    /root/.creds is a text file with three lines: my username, password and domain. Users on the Ubuntu server could write to this mount, but then I upgraded to 10.04 and now only root can write. Regular users can still read, though. mount currently tells me:

        //data/SharedFolder on /media/SharedFolder type cifs (rw,mand,noexec,nosuid,nodev)

    How do I make it world-writable again? Thanks
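    As a point of reference, smbfs was retired around that release in favour of cifs, which defaults to owner-only write unless permission masks are given; a hedged rewrite of the fstab line using standard cifs mount options (the gid and mode values are guesses to adapt):

        //data/SharedFolder /media/SharedFolder cifs user,credentials=/root/.creds,uid=root,gid=users,file_mode=0666,dir_mode=0777 0 0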

    Read the article

  • Ubuntu 12.04 MySQL 5.5 MyODBC 5.1 or 3.1 query hangs

    - by jorgearr
    I have been able to install Ubuntu 12.04 with LAMP (MySQL 5.5.x). It works fine within Linux, and it allows me to connect from MyODBC on Windows Vista or Windows 7. I have configured network access and have been able to connect from Windows Vista using PuTTY and other TCP connections like MySQL Query Browser. I have also configured or disabled the ufw firewall and AppArmor. The connection works fine until I query data from the tables. It lets me query small amounts of data, like SELECT name FROM users LIMIT 20, but if I do a SELECT * FROM users, it goes into a never-ending loop. This happens even on tables with very few records, like 5 or even fewer. The problem only occurs from Windows; I tried SSH from Linux Mint and it worked fine. I need to be able to work using MyODBC, either 3.51 or 5.1, since my client program is written in VB6 and connects to the MySQL server via TCP/IP. The server is an HP ProLiant ML350 G6 with a 64-bit Intel Xeon. I tried several Ubuntu Server versions (12.04 64-bit, 10.10 64-bit, 11.04 32-bit) and none has worked; I even tried CentOS 6.3 with the same result. As a reference, it works fine with another Ubuntu Server 6.x on an HP ProLiant 150 with MySQL 5.0.x that is about 7 years old and never updated. Help please.

    Read the article

  • No Cost 1-Click Remarketer Level Training

    - by martin.morganti(at)oracle.com
    The Remarketer level has proven to be a great success as a way of enabling Remarketers to jump start a resale business with Oracle. As part of the Knowledge Zone for the 1-Click Products we have some no-cost training available - the Oracle 1-Click Technology Products Guided Learning Path - which explains the program and how to position Oracle products. We have been working to increase the training that is available for Remarketers, and I am pleased to let you know that we have recently added more no-cost training. The training path that we have released is the Oracle Database 11g 1-Click Technology Sales Guided Learning Path. This set of courses provides more detail on the Oracle Database 11g and will help you better uncover and exploit opportunities to sell it as part of your solutions. So if you are interested in a No Fees, No Barriers, No Excuses way to resell Oracle 1-Click products, look at the Remarketer page and take the free 1-Click Guided Learning Paths in the Training section to kick-start your activity.

    Read the article

  • TrueCrypt or any HDD encryption solution with a bypass?

    - by sorrrydoctorforlove
    Hello experts, in my environment we have started using TrueCrypt to encrypt and protect our laptops that are taken out of the office. The issue is the password: we can document the passwords and assign them to users, but if they simply use the program to change the password and then forget it, we are in trouble. We back up our data to external locations, so the data should be fine, but is there any way to install a bypass so we can still boot the laptop, or to stop users from changing their password (while they have local admin access)? Or should we try another solution? Thanks.

    Read the article
