Search Results

Search found 28985 results on 1160 pages for 'sql training'.

  • Splitting a string into characters with the Oracle MODEL clause

    - by Todd Bao
    A short piece of SQL showing how the MODEL clause can split the string 'oracle' into one character per row... =D

        select c
        from (select *
              from (select 'oracle' cc, level no
                    from dual
                    connect by level <= length('oracle'))
              model return updated rows
              dimension by (no)
              measures (cc c, no n)
              rules (c[any] = substr(c[cv()], n[cv()], 1)))
        /

    Originally posted by Todd on the itpub forum: http://www.itpub.net/forum.php?mod=viewthread&action=printable&tid=1253982

  • An introduction to Oracle Developer Tools for Visual Studio (ODT)

    - by Yusuke.Yamamoto
    Posted 2010/11/12. An overview article on Oracle Developer Tools for Visual Studio (ODT), the add-in that brings Oracle development into Visual Studio: what ODT provides, how working inside Visual Studio compares with driving the database from SQL*Plus, and how to browse and manage Oracle database objects from within the IDE.
    Full article (in Japanese): http://codezine.jp/article/detail/5499

  • Cannot create a project in TFS 2012 - TF218027

    - by GrandMasterFlush
    I've just installed TFS 2012 and am trying to create a new project in the default collection via Visual Studio 2012, but I keep getting this error message:

        TF218027: The following reporting folder could not be created on the server that is running SQL Server Reporting Services: /TfsReports/DefaultCollection. The report server is located at: http://<servername>/Reports. The error is: The permissions granted to user '<domain>/grandmasterflush' are insufficient for performing this operation. Verify that the path is correct and that you have sufficient permissions to create the folder on that server and then try again.

    I've checked the permissions: my user is a member of the Project Collection Administrators group, and that group has the 'Create new project' permission set to Allow. The only thing I can think of is that the user I created during installation for SharePoint access and report viewing does not have permission to write to the reports folder; however, even if I select "Do not configure a SharePoint site at this time" I still get the error message. I can't find the reports folder to check its permissions either. TFS is using an instance of SQL 2012 that was already on the machine when TFS was installed. Can anyone see what I'm doing wrong, please?

  • Executing Oracle SQLPlus in a Powershell Invoke-Command statement against a remote machine

    - by Scott Muc
    We have a basic PowerShell script that attempts to execute SQLPlus.exe on a remote machine. The remote machine does not have the Oracle Instant Client installed, but we have bundled all the necessary DLLs in a remote folder; for example, we have sqlplus.exe and its dependencies in the directory C:\temp\oracle. If I navigate to that path on the remote server and execute sqlplus.exe, it runs just fine: I get the prompt for a username. If I go:

        Invoke-Command -comp remote.machine.host -ScriptBlock { C:\temp\oracle\sqlplus.exe }

    I get the following:

        Error 57 initializing SQL*Plus
        + CategoryInfo : NotSpecified: (Error 57 initializing SQL*Plus:String) [], RemoteException
        + FullyQualifiedErrorId : NativeCommandError
        Error loading message shared library

    Thinking that it's potentially a PATH issue, I tried the following:

        Invoke-Command -comp remote.machine.host -ScriptBlock { $env:ORACLE_HOME = "C:\temp\oracle"; $env:PATH = "$env:ORACLE_HOME"; C:\temp\oracle\sqlplus.exe }

    This had the same result. The error code is not very helpful and is extremely frustrating, since it does work when I log on to the machine. What is PowerShell remoting doing that's making this not work?
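    For what it's worth, one variant worth trying (a sketch only; the connect string is a placeholder): SQL*Plus locates the message files that "Error loading message shared library" complains about via ORACLE_HOME, so it may be worth confirming the remote session really carries both variables into the child process and launching the exe with the call operator:

        Invoke-Command -ComputerName remote.machine.host -ScriptBlock {
            $env:ORACLE_HOME = 'C:\temp\oracle'              # folder holding sqlplus.exe and its DLLs
            $env:PATH        = "$env:ORACLE_HOME;$env:PATH"  # prepend, keep the rest of PATH
            & "$env:ORACLE_HOME\sqlplus.exe" -L user/password@//dbhost:1521/ORCL
        }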

  • Excel techniques for perfmon csv log file analysis

    - by Aszurom
    I have perfmon running against several servers, outputting to a .csv file data like CPU % time, memory bytes free, and hard disk I/O metrics like s/write and writes/s. The ones graphing the SQL servers are also collecting SQL stats; the web servers are collecting .NET-relevant counters. I am aware of PAL, and actually used it as a template for what data to capture based on server type. I just don't think the output it generates is detailed or flexible enough, though it does a pretty remarkable job of parsing logs and making graphs. I'm borderline incompetent with Excel, so I'm hoping to be directed to some knowledge of how to take a perfmon output .csv and mine it in Excel to produce numbers that are meaningful to me as a sysadmin. I could of course just pick a range of data, assemble a graph out of it, and look for spikes and trends, but I'm convinced there is some technique that makes this more manageable than staring at a monstrous spreadsheet of numbers and trying to make graphs of it. Plus, it's pretty time-consuming and not something I can do as a "take a glance at the servers" sort of routine. I'm graphing CPU, disk use, network b/sec, etc. in Cacti as well, which is nice for seeing big trends. The problem is that those are 5-minute averages, so a server could have an intermittent problem that washes out in a 5-minute average. What do you do with perfmon data that I could learn from?
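    Not Excel, but as a sketch of a first pass that can run before the data ever reaches a spreadsheet: a PowerShell summary of average and peak per counter (the .csv path is hypothetical; perfmon often writes blank first samples, hence the filter):

        $rows     = Import-Csv 'C:\perflogs\server1.csv'
        # every column after the first (the timestamp) is a counter series
        $counters = $rows[0].PSObject.Properties | Select-Object -Skip 1 | ForEach-Object { $_.Name }
        foreach ($name in $counters) {
            $values = $rows | ForEach-Object { $_.$name } |
                Where-Object { $_ -match '^[0-9.]+$' } |   # drop blank/non-numeric samples
                ForEach-Object { [double]$_ }
            if ($values) {
                $stats = $values | Measure-Object -Average -Maximum
                '{0}: avg={1:N2} max={2:N2}' -f $name, $stats.Average, $stats.Maximum
            }
        }

    Piping per-counter summaries like these through Export-Csv gives a small table Excel can chart directly, instead of the raw monster.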

  • VMRC equivalent for Hyper-V?

    - by Ian Boyd
    VMRC was the client tool used to connect to virtual machines running on Virtual Server. Having upgraded to Windows Server 2008 R2 with the Hyper-V role, I need a way for people to be able to use the virtual machines. Note:

    - not all virtual machines will have network connectivity
    - not all virtual machines will be running Windows
    - some people needing to connect to a virtual machine will be running Windows XP
    - Hyper-V Manager, which allows management of the Hyper-V server, is less desirable (since it allows management of the Hyper-V server, and doesn't work on all operating systems)

    What is the Windows Server 2008 R2 equivalent of VMRC, i.e. a way to "vnc" to a virtual server?

    Update: I think Tatas was suggesting Microsoft System Center Virtual Machine Manager Self-Service Portal 2.0, which requires SQL Server and IIS. Installing those would unfortunately violate our Windows Server 2008 R2 license. I might be looking at the wrong product link, since a commenter said there is a version that doesn't require System Center.

    Update 2: The Windows Server 2008 R2 running Hyper-V is licensed with the understanding that it only be used to host Hyper-V. From the Windows Server 2008 R2 Licensing FAQ:

        Q. If I have one license for Windows Server 2008 R2 Standard and want to run it in a virtual operating system environment, can I continue running it in the physical operating system environment?
        A. Yes, with Windows Server 2008 R2 Standard, you may run one instance in the physical operating system environment and one instance in the virtual operating system environment; however, the instance running in the physical operating system environment may be used only to run hardware virtualization software, provide hardware virtualization services, or to run software to manage and service operating system environments on the licensed server.

    This is why I'm wary about installing IIS or SQL Server.
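    For what it's worth, the raw console client that Hyper-V Manager itself uses can be launched directly from any machine with the Hyper-V management tools installed (so this is only a partial VMRC stand-in: it will not help the Windows XP users):

        vmconnect.exe HYPERV-HOST "Name Of Virtual Machine"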

  • phpMyAdmin: The additional features for working with linked tables have been deactivated.

    - by The Disintegrator
    I'm getting this error on the main page of phpMyAdmin (version 3.2.1deb1):

        The additional features for working with linked tables have been deactivated. To find out why click here.

    When I click the link I get this report:

        $cfg['Servers'][$i]['pmadb'] ... OK
        $cfg['Servers'][$i]['relation'] ... not OK [ Documentation ]
        General relation features: Disabled
        $cfg['Servers'][$i]['table_info'] ... not OK [ Documentation ]
        Display Features: Disabled
        $cfg['Servers'][$i]['table_coords'] ... not OK [ Documentation ]
        $cfg['Servers'][$i]['pdf_pages'] ... not OK [ Documentation ]
        Creation of PDFs: Disabled
        $cfg['Servers'][$i]['column_info'] ... not OK [ Documentation ]
        Displaying Column Comments: Disabled
        Bookmarked SQL query: Disabled
        Browser transformation: Disabled
        $cfg['Servers'][$i]['history'] ... not OK [ Documentation ]
        SQL history: Disabled
        $cfg['Servers'][$i]['designer_coords'] ... not OK [ Documentation ]
        Designer: Disabled

    I already used the script to create the tables, I assigned the permissions to the pma user, and everything is set in /etc/phpmyadmin/config.inc.php. But it's still not working... The tables are empty; I assume they should have something in them. I'm interested in the relations and history features. Obviously I have read the documentation. Maybe something else is unsetting those values? Any thoughts?
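    For comparison, this is the kind of block the checker is looking for in the server section of config.inc.php, sketched with the default table names from the create_tables.sql script; the control-user credentials are placeholders:

        $cfg['Servers'][$i]['controluser']     = 'pma';      // placeholder
        $cfg['Servers'][$i]['controlpass']     = 'secret';   // placeholder
        $cfg['Servers'][$i]['pmadb']           = 'phpmyadmin';
        $cfg['Servers'][$i]['relation']        = 'pma_relation';
        $cfg['Servers'][$i]['table_info']      = 'pma_table_info';
        $cfg['Servers'][$i]['table_coords']    = 'pma_table_coords';
        $cfg['Servers'][$i]['pdf_pages']       = 'pma_pdf_pages';
        $cfg['Servers'][$i]['column_info']     = 'pma_column_info';
        $cfg['Servers'][$i]['history']         = 'pma_history';
        $cfg['Servers'][$i]['designer_coords'] = 'pma_designer_coords';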

  • Adding data sources for unixODBC/isql on Mac OSX Lion

    - by NP01
    I have installed unixODBC from source and the MySQL ODBC connector from the .dmg installer on Mac OS X Lion. This was done a while ago, and at that time I successfully installed a data source (let's call it foo). Now I am trying to add another data source (DSN). I've done this through both ODBC Manager and the command-line tool myodbc-installer that ships with the tar bundle of the MySQL ODBC connector from the MySQL website. An entry shows up in /Library/ODBC/odbc.ini, which looks like this:

        [ODBC Data Sources]
        bar = MySQL ODBC 5.1 Driver

        [ODBC]
        Trace = 0
        TraceAutoStop = 0
        TraceFile =
        TraceLibrary =

        [myodbc]
        Driver = /usr/local/lib/libmyodbc5.so
        SERVER = localhost
        PORT = 3306

        [bar]
        Driver = /usr/local/lib/libmyodbc5.so
        Description =
        DATABASE = bar

    However, isql fails to find it:

        anitya:Preferences neil$ isql bar bar bar -v
        [IM002][unixODBC][Driver Manager]Data source name not found, and no default driver specified
        [ISQL]ERROR: Could not SQLConnect

    The weird thing is that the old DSN foo, which is nowhere to be seen in /Library/ODBC/odbc.ini or /etc/odbc.ini, works fine:

        anitya:Preferences neil$ isql foo foo foo
        +---------------------------------------+
        | Connected!                            |
        |                                       |
        | sql-statement                         |
        | help [tablename]                      |
        | quit                                  |
        |                                       |
        +---------------------------------------+
        SQL>

    I'm miffed about where DSN entries need to be placed on OS X Lion to be found by isql. Thanks in advance for your help!
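    One thing worth checking (a guess, since a source-built unixODBC may never look at /Library/ODBC at all): ask the driver manager itself where it reads its configuration from. The flags below are standard unixODBC:

        odbcinst -j      # print the odbc.ini / odbcinst.ini paths this build actually uses
        odbcinst -q -s   # list the DSNs the driver manager can currently see

    If foo shows up in the -q -s output, the file named by -j is presumably where it lives, and that is where bar belongs too.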

  • Alter charset and collation in all columns in all tables in MySQL

    - by The Disintegrator
    I need to execute these statements for all columns in all tables:

        alter table table_name charset=utf8;
        alter table table_name alter column column_name charset=utf8;

    Is it possible to automate this in any way inside MySQL? I would prefer to avoid mysqldump.

    Update: Richard Bronosky showed me the way :-) The query I needed to execute in every table:

        alter table DBname.DBfield CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;

    Crazy query to generate all the other queries:

        SELECT distinct CONCAT('alter table ', TABLE_SCHEMA, '.', TABLE_NAME,
                               ' CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;')
        FROM information_schema.COLUMNS
        WHERE TABLE_SCHEMA = 'DBname';

    I only wanted to execute it in one database, and executing everything in one pass was taking too long. It turned out that it was generating one query per field per table, while only one query per table was necessary (distinct to the rescue). Getting the output into a file was how I realized it.

    How to send the output to a file:

        mysql -B -N --user=user --password=secret -e "SELECT distinct CONCAT('alter table ', TABLE_SCHEMA, '.', TABLE_NAME, ' CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;') FROM information_schema.COLUMNS WHERE TABLE_SCHEMA = 'DBname';" > alter.sql

    And finally, to execute all the queries:

        mysql --user=user --password=secret < alter.sql

    Thanks Richard. You're the man!

  • How can I resolve Oracle 11g XE connection failure straight after installation?

    - by d3vid
    I have just installed Oracle 11g XE on a Windows 7 VirtualBox VM, using all the default options.

    "Getting Started" fails. When I click on Getting Started I get taken to http://127.0.0.1:8080/apex/f?p=4950, which fails. After some browsing I came across a suggestion to confirm the HTTP port, but I can't get that far, because I can't connect.

    connect system fails. If I select Run SQL command line I get taken to a SQL prompt. I enter connect system and get prompted for a password. I enter the password and immediately get the following error:

        ERROR: ORA-01033: ORACLE initialization or shutdown in progress
        Process ID: 0
        Session ID: 0 Serial number: 0

    This happens whether or not I run Start database first. (Start database just opens a Windows command prompt window.)

    Windows services: my Oracle services start as shown in a screenshot in the original post. Starting the manual services doesn't resolve the problem; enabling and starting the disabled service doesn't resolve it either.

    Is there something I haven't done? How can I resolve this connection error?
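    ORA-01033 usually means the instance is running but the database never finished opening (or a startup failed partway through). A sequence worth trying from the same Run SQL command line window, sketched rather than verified against XE:

        connect / as sysdba
        shutdown immediate
        startup
        select status from v$instance;

    If startup reports an error instead of "Database opened", that error is the real lead.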

  • Linode - CentOS 5.5

    - by Marcus West
    Hi, I rather foolishly undertook to install a control panel on a Linode. I opted to use CentOS 5.5 (either ordinary or 64-bit), but I am like a monkey playing a reward game... I have some idea of what I am doing, but not enough, and in certain areas I am hopeless. Do I install Webmin/Virtualmin, or ISPConfig? ISPConfig 2 or 3? I would employ someone to help, but how do I find the right person? Where can I learn the ropes on all this? There seems to be no systematic training, and even when I try to research college courses in the UK, I am none the wiser as to where I could go to learn how to run a Linux server. Has anyone any pointers? Right now I am looking at the security aspects of the server: rkhunter, denyhosts, etc. Any advice on installing and maintaining these things? Cheers, Marcus

  • Outgrew MongoDB … now what?

    - by samsmith
    We dump debug and transaction logs into MongoDB. We really like MongoDB because of its:

    - blazing insert performance
    - document-oriented storage
    - ability to let the engine drop inserts when needed for performance

    But there is this big problem with MongoDB: the index must fit in physical RAM. In practice, this limits us to 80-150 GB of raw data (we currently run on a system with 16 GB of RAM). So for us to hold 500 GB or a TB of data, we would need 50 GB or 80 GB of RAM.

    Yes, I know this is possible. We can add servers and use Mongo sharding. We can buy a special server box that can take 100 or 200 GB of RAM, but this is the tail wagging the dog! We could spend beaucoup $$$ on hardware to run FOSS, when SQL Server Express can handle WAY more data on WAY less hardware than Mongo (SQL Server does not meet our architectural desires, or we would use it!). We are not going to spend huge $ on hardware here, because it is necessary only because of the Mongo architecture, not because of the inherent processing/storage needs. (And sharding? Please! Cost aside, who needs the ongoing complexity of three, five, or more servers to manage a relatively small load?)

    Bottom line: MongoDB is FOSS, but we have to spend $$$$$$$ on hardware to run it? We would rather buy commercial software! I am sure we are not the first to hit this issue, so we ask the community: where do we go next? (We already run Mongo v2.) Thanks!!

  • Recommended SpamAssassin update channels?

    - by Timo Geusch
    I'm currently using SpamAssassin on a couple of mail servers that I look after. SpamAssassin runs in the context of amavisd-new on those servers, with the usual bunch of plugins (FuzzyOCR, DCC, pyzor, razor). Currently the servers are getting their rule updates from the default SpamAssassin update channel (updates.spamassassin.org). Overall the setup seems reasonably effective, but some types of spam wander right through it even though I've made repeated attempts at training SpamAssassin. My guesstimate is that 85%-90% of the spam that gets past policyd-weight makes it through the filters, and it's been getting a lot worse recently as spammers get better at working their way around them. Can someone recommend additional sources of filters to make SpamAssassin more effective? So far I've found OpenProtect's update channel, but are there others worth looking at?
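    For reference, this is roughly what a multi-channel run looks like once a channel's signing key has been imported (the OpenProtect channel name is taken from their site; treat it and the key file name as values to verify):

        # import the channel's GPG key once
        sa-update --import GPG.KEY
        # then pull the stock rules plus the extra channel
        sa-update --channel updates.spamassassin.org --channel saupdates.openprotect.com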

  • Inexpensive Remote Assistance software?

    - by Jess
    Any recommendations for remote assistance software that does not require firewall modification for clients? To assist clients with software problems and perform training, we currently use a tool called Remote Helpdesk to connect to their computers and guide them through the process. This tool was pretty cheap (~$400 one-time for 3 support staff) and worked great: the client's PC actually initiates the connection to us, so there's never any firewall issue (vs. Remote Desktop, VNC software, or many other similar tools). Unfortunately, the product doesn't work well with 64-bit OSes and Vista in general (it slows down by a factor of 10 or so). I am looking for alternatives that provide the same reverse-connection capability to avoid firewall issues. The only solution I've found is WebEx's Remote Support, which is WAY too expensive ($449/month for us). Thanks for all the assistance!

  • Windows Authentication behaves oddly when VPN'd

    - by Dan F
    Hi all,

    We've got a few apps that rely on Windows authentication: a couple of web apps with AD auth turned on, and we usually connect to our SQL servers with Windows auth. This normally runs without a hitch, but it doesn't work so well if we're VPN'd to a client site.

    SSMS: Opening SSMS normally from the start menu, then picking a server that normally accepts Windows auth, results in a message saying:

        Login failed. The login is from an untrusted domain and cannot be used with Windows authentication. (.Net SqlClient Data Provider)

    If I drop to a command prompt and use runas /user:domain\user to launch SSMS, I can successfully use Windows auth against our SQL Server instances with that SSMS process. If I look in Task Manager, both copies of ssms.exe (start menu vs. runas) have the same user, and I can see no discernible differences between the processes in procexp.

    AD auth websites: If I open IE and browse to any of our websites that require an authenticated Windows user, I get the "who are you" prompt, and that dialog thinks I'm whoever the VPN user is. I can click "Use another account" and authenticate that way, though.

    Outlook: Even Outlook prompts for a username when we are VPN'd!

    It's affecting our Win7 and Vista machines. It's been a while since we had an XP box, but I don't recall having this issue on XP, for what it's worth. The VPN connections are just the built-in Windows VPN connections; they're not fancy Cisco VPNs or anything of that nature.

    Does anyone know how to tell Windows that I'd like to be my normal old primary-domain user rather than the VPN user when authenticating to resources in our domain? Heck, I'd be happy with a solution that prompted me with the "who are you" dialog if I was trying to access resources requiring Windows auth on the client's VPN. Thanks!

    Apologies if this is more a Super User question; I wasn't sure which site suited it best. It's about networking and infrastructure and plagues all of our developers here, so I hope it's a Server Fault Q.
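    One workaround worth trying (sketched with a hypothetical SSMS path): runas with the /netonly flag uses the supplied credentials only for network authentication, so the process keeps your local profile but presents the home-domain account to remote resources:

        runas /netonly /user:HOMEDOMAIN\myuser "C:\Program Files\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\Ssms.exe"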

  • How to configure Hyper-V failover cluster to live migrate when dynamic memory runs out?

    - by Matt Johnson
    Apologies in advance that this is not a direct programming question, but I have a feeling that the solution involves custom PowerShell scripts (maybe), so this is as good a place to ask as any. I maintain a website that has a large Hyper-V cluster for SQL Servers. We are using Windows 2008 R2 SP1 and the new "dynamic memory" feature. I've already reviewed the Best Practices Guide and implemented its suggested configuration. Everything works well, except that when SQL demand increases memory pressure to more than is available on the physical machine, the memory status goes into the "Warning" state and stays there. I assume the hypervisor is using a swap file on the host to fulfill the memory requirement, thus slowing the virtual machine down. When this happens, there are plenty of other nodes in the cluster that have available resources; I can live-migrate the virtual server over there and everything works, and the warnings go away. Now, how can I automate this? I see no menu options in either Hyper-V Manager or Failover Cluster Manager for performing a migration or shutdown when dynamic memory goes into the warning state. Any ideas about how to script this, or monitor it and invoke the action directly, would be helpful. If the solution involves coding, PowerShell would be ideal, but I could envision this as a .NET service that monitors for this state and kicks off the migration request. I just don't know what objects are involved in doing the monitoring or kicking off the live migration. Thanks in advance.
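    Nothing built in watches for this, so a scheduled PowerShell loop is the usual shape. A rough sketch, not production code: it assumes the FailoverClusters module on the host, the "Hyper-V Dynamic Memory VM" performance counter set, cluster groups named after their VMs, and a pressure value above 100 (demand exceeding available memory) as the trigger:

        Import-Module FailoverClusters

        # Current Pressure > 100 means the guest wants more memory than it has
        $samples = (Get-Counter '\Hyper-V Dynamic Memory VM(*)\Current Pressure').CounterSamples
        foreach ($s in $samples | Where-Object { $_.CookedValue -gt 100 }) {
            # pick the running node with the most free physical memory (simplistic choice)
            $target = Get-ClusterNode | Where-Object { $_.State -eq 'Up' } |
                Sort-Object { (Get-WmiObject Win32_OperatingSystem -ComputerName $_.Name).FreePhysicalMemory } -Descending |
                Select-Object -First 1
            # assumes the cluster group carries the VM's name
            Move-ClusterVirtualMachineRole -Name $s.InstanceName -Node $target.Name
        }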

  • Webserver horribly slow, sometimes incredibly fast

    - by dhanke
    I am running a small community (6000+ members) on a non-virtual 64-bit Ubuntu 11.04 system. I am not a Linux pro, not even advanced; I just tried to set up a webserver, which actually does nothing special: delivering some dynamic PHP and RoR websites is its task. So my configuration files may well look horribly bad. Also, I might use the wrong vocabulary, so in doubt, please ask.

    With a current all-time record of 520 registered users (board accounts, no system users) online at the same time, the average server load is about 2.0-5.0. At normal times (~250 users) the average load value is about 0.4-0.8, sometimes a bit higher on some expensive searches. Everything fine.

    From time to time, however, the load increases up to 120 (120.0, not 12.0 ;) ). During this time it is hard to even connect via SSH, but when I reach the server and use top/htop/iotop to see what is happening, I cannot identify any process causing high CPU load. iotop reports a current read/write speed of approx. 70 kb/s, which is about the same as powered off, I think. Memory usage is at most ~12 GB of 16 GB, so swap remains empty.

    Now the odd part: after waiting some minutes (since I always get a bit panicky when this happens it feels like 5 minutes, but I suppose it is more like 20-30 minutes), the server is back to normal and everything continues as usual.

    Another odd fact: when I run hdparm -tT /dev/sda, I get an answer like:

        /dev/sda:
        Timing cached reads:   7180 MB in  2.00 seconds = 3591.13 MB/sec
        Timing buffered disk reads:  348 MB in  3.02 seconds = 115.41 MB/sec

    When I run the same command while the server is "frozen", the answer looks like:

        /dev/sda:                                                             <- takes about 5 minutes until this line appears
        Timing cached reads:   7180 MB in  2.00 seconds = 3591.13 MB/sec      <- 5 more minutes
        Timing buffered disk reads:  348 MB in  3.02 seconds = 115.41 MB/sec  <- another 5 minutes

    So the values are the same, but the reported time is completely wrong. Prefixing the command with time also tells me that ~15 minutes were used.

    I searched in dmesg and /var/log/[messages|syslog] - nothing found. /var/log/errors, however, tells me, multiple times, that:

        Jul  4 20:28:30 localhost kernel: [19080.671415] INFO: task php5-fpm:27728 blocked for more than 120 seconds.
        Jul  4 20:28:30 localhost kernel: [19080.671419] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.

    Now that message tells me that a php5-fpm task was blocked, but not whether that is the cause or just one of the results of the "freeze".

    Anyone? To cut the long story short: I do not know where to even start analyzing. So if you can give me any advice by looking at the following specs and configs, or ask me to provide more information, I would be glad.

    Specs:

        6-core AMD Phenom(tm) II X6 1055T processor
        16 GB RAM
        2x 1.5 TB Seagate ST1500DL003-9VT16L via SATA 3, via software RAID (I suppose)

    Services (those marked [ + ] by service --status-all):

        nginx webserver 1.0.14
        MySQL 5.1.63 server
        Ruby on Rails 2.3.11 (passenger-nginx-module)
        php5-fpm 5.3.6-13ubuntu3.7
        SSH
        ido2db

    Further services: default crontab + nightly backup, syslog-ng.

    The website consists of 2 subdomains: forum., a phpBB 3.x PHP board, and www., a Ruby on Rails 2.3.11 application (the portal). Both share the same database, but the portal uses it read-only. Mini-note: sometimes I notice that the forum is pretty slow, in contrast to the always-fast (except for this "freeze") portal.

    The webserver is nginx, using the Phusion Passenger module to communicate with the Ruby application. For the forum it communicates with php5-fpm via socket. Relevant nginx configuration parts (with my comments/questions starting with ;):

        ; in case of freeze due to too high filesystem activity, maybe adding a limit?
        #worker_rlimit_nofile 50000;
        user www-data;
        ; 6 cores, so i read 6 fits. maybe already wrong?
        worker_processes 6;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            passenger_root /var/lib/gems/1.8/gems/passenger-3.0.11;
            passenger_ruby /usr/bin/ruby1.8;

            ; the forum once featured a chat, which was working w/o websockets,
            ; so it was a hell of pull requests (deactivated now, freeze still happening)
            keepalive_timeout 65;
            keepalive_requests 50;

            gzip on;

            server {
                listen 80;
                server_name www.domain.tld;
                root /var/www/domain/rails/public;
                passenger_enabled on;
            }

            server {
                listen 80;
                server_name forum.domain.tld;

                location / {
                    root /var/www/domain/forum;
                    index index.php;
                }

                ; static stuff to be handled by nginx
                location ~* ^/style/.+.(jpg|jpeg|gif|css|png|js|ico|xml)$ {
                    access_log off;
                    expires 30d;
                    root /var/www/domain/forum/;
                }

                ; now the php magic, note the "backend" fastcgi_pass
                location ~ .php$ {
                    fastcgi_split_path_info ^(.+\.php)(.*)$;
                    fastcgi_pass backend;
                    fastcgi_index index.php;
                    fastcgi_param SCRIPT_FILENAME /var/www/domain/forum$fastcgi_script_name;
                    include fastcgi_params;
                    fastcgi_param QUERY_STRING $query_string;
                    fastcgi_param REQUEST_METHOD $request_method;
                    fastcgi_param CONTENT_TYPE $content_type;
                    fastcgi_param CONTENT_LENGTH $content_length;
                    fastcgi_intercept_errors on;
                    fastcgi_ignore_client_abort off;
                    fastcgi_connect_timeout 60;
                    fastcgi_send_timeout 180;
                    fastcgi_read_timeout 180;
                    fastcgi_buffer_size 128k;
                    fastcgi_buffers 256 16k;
                    fastcgi_busy_buffers_size 256k;
                    fastcgi_temp_file_write_size 256k;
                    fastcgi_max_temp_file_size 0;
                }

                location ~ /\.ht {
                    deny all;
                }
            }

            ; the php5-fpm socket. i read that /dev/shm/ would be the fastest place for this. bad idea in general?
            upstream backend {
                server unix:/dev/shm/phpfpm;
            }
            ...
        }

    php5-fpm settings (I raised these values as the php5-fpm error log messages kept mounting; the freeze problem was there before as well):

        listen = /dev/shm/phpfpm
        user = www-data
        group = www-data
        pm = dynamic
        ; holy, 4000! well, shrinking this value to earth level gave me
        ; 100s of 502 bad gateway errors. these values were quite stable.
        ; since there are only max 520 users online i don't get why i would need
        ; as many children as configured here. due to keep-alive maybe?
        ; asking questions is easier for me since restarting the server will make
        ; my community members angry ;)
        pm.max_children = 4000
        pm.start_servers = 100
        pm.min_spare_servers = 50
        pm.max_spare_servers = 150
        pm.max_requests = 10
        pm.status_path = /status
        ping.path = /ping
        ping.response = pong
        slowlog = log/$pool.log.slow
        ; should i use rlimit?
        ;rlimit_files = 1024
        chdir = /

    mysql/my.cnf:

        [client]
        port = 3306
        socket = /var/run/mysqld/mysqld.sock

        [mysqld_safe]
        socket = /var/run/mysqld/mysqld.sock
        nice = 0

        [mysqld]
        user = mysql
        socket = /var/run/mysqld/mysqld.sock
        port = 3306
        basedir = /usr
        datadir = /var/lib/mysql
        tmpdir = /tmp
        skip-external-locking
        bind-address = 127.0.0.1
        key_buffer = 16M
        max_allowed_packet = 16M
        thread_stack = 192K
        thread_cache_size = 8
        myisam-recover = BACKUP
        ; high number, but less gives some phpBB errors.
        max_connections = 450
        table_cache = 512
        ; i read twice the cpu cores, bad?
        thread_concurrency = 12
        join_buffer_size = 2084K
        concurrent_insert = 3
        query_cache_limit = 64M
        query_cache_size = 512M
        query_cache_type = 1
        log_error = /var/log/mysql/error.log
        log_slow_queries = /var/log/mysql/mysql-slow.log
        long_query_time = 2
        expire_logs_days = 10
        max_binlog_size = 100M
        low_priority_updates = 1

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 16M

        [isamchk]
        key_buffer = 16M

        !includedir /etc/mysql/conf.d/

    I used smartctl already; the HDDs seem to be fine. /proc/mdstat says:

        Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
        md3 : active raid1 sda3[1]
              1459264192 blocks [2/1] [_U]
        md1 : active raid1 sda1[0]
              3911680 blocks [2/1] [U_]
        unused devices:

    ulimit -a:

        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 0
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 127727
        max locked memory       (kbytes, -l) 64
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 127727
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited

    I quote some questions in my configuration files; these are not (intentionally) directly problem-related, but it would be nice to know whether they are indeed questionable or done right.

    One additional fact: my MySQL database is 12 GB in size. I don't know if that matters, but mytop sometimes shows me insert queries running for 4-5 seconds, some for 20-30 seconds. It is just a feeling that I am unable to prove (because I don't know how), but when I disable the database, the freeze seems not to happen. Example: I created a dummy Rails application to watch the development log. The app made some SQL queries, reads and inserts. The log quite often looked like this:

        DbTest Load (0.3ms)   SELECT * FROM `db_test` WHERE (`db_test`.`id` = 31722) LIMIT 1
        SQL (0.1ms)   BEGIN
        DbTest Update (0.3ms)   UPDATE `db_test` SET `updated_at` = '2012-07-04 23:32:34' WHERE `id` = 31722
        <- now the log stands still for 5-60 seconds
        SQL (49.1ms)   COMMIT   <- the SQL update time in the log does not include the freeze time
        Rendering test/index
        Completed in 96ms (View: 16, DB: 59) | 200 OK [http://localhost:9000/test]

    Bad part: this mini-freeze also happens only from time to time. Note: meanwhile I cannot even upload files via scp.

    I currently feel like I am running from bad to worse and back while googling for my server problem, due to an immense lack of knowledge regarding server configurations. It still makes me wonder why these problems even appear, since 250 users at a time is not such a high number, right?

    So my questions: What is wrong and how do I fix it? ;) Or: what information can I provide to make the situation clearer? Can you point at some critically bad configuration line which I should look up in the documentation? Are there any tools I can run to find possible bottlenecks? Any further advice? (Next to: "pay someone who knows what he does" - it is a private project, and the server costs enough already. :))

    Thanks for your time and help.

    Best regards, Daniel

    P.S.: I renamed the config files to domain.tld since I don't want to add any more load to the server until it is fixed. Might be an exaggerated thought.

    P.P.S.: If I asked a complete duplicate question, sorry. My search results seemed to be quite specific in their own way.

  • How do you persuade users to abandon their personal folders?

    - by thing2k
    Towards the end of last year we started using Mimecast services, in particular their cloud-based e-mail archiving. Since then we've been rolling out the Mimecast Services for Outlook (MSO) add-in. We've informed the users that we will give them training in the next few months, and we do not require them to use it, but my boss stated that we are getting rid of Personal Folders (pst files) by putting them into Mimecast. Unsurprisingly this caused something of a backlash, though really, who likes change? I know the IT reasons for getting rid of Personal Folders (inefficient, unreliable, single access, etc.), but from an average user's perspective, unless they have had one fail on them, they see them as the simple and only way to archive e-mail when their 200MB mailbox is full. So what can I say to the users to get them to understand why Personal Folders are not the best solution?

  • Cannot open files in Visual Studio but can in Delphi and Notepad

    - by Andrew J. Brehm
    About an hour ago Visual Studio 2008 decided that it cannot find files any more. This is on 64-bit Windows Vista. When I right-click on a text file (source code or otherwise) and select "Open with" and "Visual Studio 2008", I get the following error (example):

        Windows cannot find 'C:\Users\ajbrehm\Documents\Visual Studio 2008\Projects\Hello Prism\Hello Prism\Main.pas'. Make sure you typed the name correctly, and then try again.

    When I right-click the same file and select "Open with" and "Delphi 2010" or "Notepad" (both other options available for text files on my system), the file opens correctly. Oddly enough, when the file is part of a Visual Studio project and I open the project itself with Visual Studio (this works), I can open the file from within Visual Studio. Any ideas what might be going on? This started about an hour after I made a complete backup of my Vista VM and after I installed IIS 7, SQL Express, and SourceGear Vault. The first files I noticed couldn't be opened in Visual Studio any more were Pascal source files in checked-out folders from Vault, and Vault also seems unable to see one of the source files and claims it doesn't exist. I found out about Visual Studio not opening ANY files any more when I tried to recreate the file Vault refused to see.

    Update: I just checked. Another user, "administrator", can still open text files with Visual Studio 2008. Both users have administrator rights.

    Update: I just restored the hours-old backup. Same problem. Apparently whatever triggered this happened before the install of IIS 7 and SQL Express. I never noticed it before.

  • Find or restore wiki pages (computer will not boot anymore and I need the wiki pages)

    - by Nathan187
    A few years ago, where I work, I created a wiki for me and my co-workers. We work on a lot of old programs, and to help with cross-training we put a lot of our notes in the wiki. Sadly, the wiki was hosted on my machine, and my machine has died. I can pull the drive out, hook it up to an enclosure, and still see the files, etc. I want to know: is there a way to get the files/pages from that wiki somehow? I think they are stored in a mysql database somewhere. Yeah, it sucks, and I had a lot of stuff on that drive, but the most important thing for me now is to get those notes (wiki pages). Any help would be appreciated.
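    If the wiki engine kept its pages in MySQL (MediaWiki does, for example), one recovery route is copying the old MySQL data directory off the mounted drive into a throwaway MySQL instance and dumping it. A sketch with entirely hypothetical paths:

        # copy the old data directory (its location depends on how MySQL was installed)
        cp -a /mnt/olddrive/var/lib/mysql /srv/mysql-recovered
        # start a scratch instance against it, skipping privilege checks
        mysqld_safe --datadir=/srv/mysql-recovered --skip-grant-tables --skip-networking &
        # dump everything, wiki tables included
        mysqldump --all-databases > wiki-recovery.sql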

  • syslog-ng and nginx logs to MySQL

    - by Katafalkas
    A couple of days ago I asked how to log PHP and nginx logs to a centralized MySQL database, and m0ntassar gave a perfect answer :) Cheers! The problem I am facing now is that I cannot seem to get it working.

    syslog-ng version:

        # syslog-ng --version
        syslog-ng 3.2.5

    This is my nginx log format:

        log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                        '$status $body_bytes_sent "$http_referer" '
                        '"$http_user_agent" "$http_x_forwarded_for"';

    syslog-ng source:

        source nginx {
            file(
                "/var/log/nginx/tg-test-3.access.log"
                follow_freq(1)
                flags(no-parse)
            );
        };

    syslog-ng destination:

        destination d_sql {
            sql(type(mysql)
                host("127.0.0.1") username("syslog") password("superpasswd")
                database("syslog")
                table("nginx")
                columns("remote_addr", "remote_user", "time_local", "request", "status",
                        "body_bytes_sent", "http_referer", "http_user_agent",
                        "http_x_forwarded_for")
                values("$REMOTE_ADDR", "$REMOTE_USER", "$TIME_LOCAL", "$REQUEST", "$STATUS",
                       "$BODY_BYTES_SENT", "$HTTP_REFERER", "$HTTP_USER_AGENT",
                       "$HTTP_X_FORWARDED_FOR"));
        };

    MySQL table for testing purposes:

        CREATE TABLE `nginx` (
          `remote_addr` varchar(100) DEFAULT NULL,
          `remote_user` varchar(100) DEFAULT NULL,
          `time` varchar(100) DEFAULT NULL,
          `request` varchar(100) DEFAULT NULL,
          `status` varchar(100) DEFAULT NULL,
          `body_bytes_sent` varchar(100) DEFAULT NULL,
          `http_referer` varchar(100) DEFAULT NULL,
          `http_user_agent` varchar(100) DEFAULT NULL,
          `http_x_forwarded_for` varchar(100) DEFAULT NULL,
          `time_local` text,
          `datetime` text,
          `host` text,
          `program` text,
          `pid` text,
          `message` text
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1

    Now the first thing that goes wrong is when I restart syslog-ng:

        # /etc/init.d/syslog-ng restart
        Stopping syslog-ng: [ OK ]
        Starting syslog-ng: WARNING: You are using the default values for columns(), indexes() or values(), please specify these explicitly as the default will be dropped in the future; [ OK ]

    I have tried creating a file destination and it all works fine. I have also tried replacing my destination with:

        destination d_sql {
            sql(type(mysql)
                host("127.0.0.1") username("syslog") password("kosmodromas")
                database("syslog")
                table("nginx")
                columns("datetime", "host", "program", "pid", "message")
                values("$R_DATE", "$HOST", "$PROGRAM", "$PID", "$MSGONLY")
                indexes("datetime", "host", "program", "pid", "message"));
        };

    which did work and wrote entries to MySQL. The problem is that I want to write entries in exactly the format of my nginx log. I assume that I am missing something really simple, or that I need to do some parsing between source and destination. Any help will be much appreciated :)
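    The missing piece is likely a parser: with flags(no-parse) the whole nginx line lands in $MSG, so built-in macros like $REMOTE_ADDR stay empty. A sketch of a csv-parser stage (the column list mirrors the log_format above; the flags/quote-pairs values follow the Apache access-log example in the syslog-ng admin guide and should be verified against 3.2.5):

        parser p_nginx {
            csv-parser(columns("NGINX.REMOTE_ADDR", "NGINX.DASH", "NGINX.REMOTE_USER",
                               "NGINX.TIME_LOCAL", "NGINX.REQUEST", "NGINX.STATUS",
                               "NGINX.BODY_BYTES_SENT", "NGINX.HTTP_REFERER",
                               "NGINX.HTTP_USER_AGENT", "NGINX.HTTP_X_FORWARDED_FOR")
                       flags(escape-double-char, strip-whitespace)
                       delimiters(" ")
                       quote-pairs('""[]'));
        };

        log { source(nginx); parser(p_nginx); destination(d_sql); };

    The values() list in d_sql would then reference ${NGINX.REMOTE_ADDR} and friends instead of the unset built-ins.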

  • Server cost for smartphone app with web service

    - by FrankieA
    Hello, I am working on a smartphone application that will require a backend web service, but I am absolutely clueless about how much it will cost.

    The web service will handle:

    - login of users
    - cataloging of our user base
    - holding minimal profile information for users (the only binary data is a display picture, which will be < 20k each)
    - performing some very minor calculation/algorithm before returning results

    All of the above will be communicated to the server from a smartphone (iPhone/BlackBerry/Android).

    Bandwidth requirements: We want to handle up to 10k users throughout the day. I predict 10k users * 50 HTTP requests a day = 500,000 requests a day * 30 = 15 million requests a month.

    Space requirements: Data will be in a SQL database. I predict 1MB/user * 10k = 10GB + overhead. In other words, space is not a big issue.

    Software requirements (unless someone knows an alternative): Windows Server 2008 + IIS, MSFT SQL Server.

    Note: This is 100% new to me, so please hit me with all you've got. Do I need Windows Server, or are there alternatives? Is it better to get multiple cheap servers to distribute load? Will Amazon S3 work for me? How about Windows Azure? Thank you!!

  • Workaround to extend limited screen real-estate on Windows?

    - by Brian
    I need a means to use a software tool that requires at least 900 pixels of vertical resolution (as in, the "OK" button to save settings won't be reachable on smaller displays) on a laptop/projector with only 768 pixels of vertical resolution for a training session. So far the only workaround that's been suggested is to memorize the number of tab stops to reach the "OK" and "Cancel" buttons. Any suggestions on a better workaround? What I'd like to see is a utility that would let me treat the physical display as a 1024x768 view port into a larger, virtual display area. Does anything like that exist? Anything else that might help?

  • MySQL Cluster not working on Ubuntu

    - by user53864
    I am unable to set up MySQL Cluster on Ubuntu servers. As a starting point I worked from the linked guide, but I have not been successful; the tarball version I downloaded is 6.3.45. As I wanted to test the cluster, the data nodes and SQL nodes are the same machines, but the SQL nodes never appear as connected in the management node console, which looks like this:

        [ndbd(NDB)] 2 node(s)
        id=2 @192.168.1.107 (Version: version number, Nodegroup: 0, Master)
        id=3 @192.168.1.108 (Version: version number, Nodegroup: 0)

        [ndb_mgmd(MGM)] 1 node(s)
        id=1 @192.168.1.105 (Version: version number)

        [mysqld(API)] 2 node(s)
        id=4 (not connected, accepting connect from 192.168.1.107)
        id=5 (not connected, accepting connect from 192.168.1.108)

    On all 3 machines mysql-server and mysql-client (apt-get install mysql-server mysql-client) were already installed; I completely stopped them and also removed them from system startup. The mysqld now comes from the extracted cluster tarball (/usr/local/mysql/support-files/mysql.server). For testing, I created a test database on both data nodes, but the tables are not syncing to the other node either. I have checked many links; the configuration stays much the same across all of them, but somewhere it's going wrong. Is any extra package required? Could anyone help me here? I have been trying this for the past 3 days... Thank you!
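    For reference, SQL nodes only register with the management server when mysqld starts with NDB support switched on; a minimal sketch of the relevant my.cnf section on each SQL/data node (connect string pointing at the management node above, everything else assumed):

        [mysqld]
        ndbcluster
        ndb-connectstring=192.168.1.105

        [mysql_cluster]
        ndb-connectstring=192.168.1.105

    Also worth noting: tables replicate between data nodes only when created with ENGINE=NDBCLUSTER; the default engine stays local to each mysqld.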
