Daily Archives

Articles indexed Tuesday, December 4, 2012

Page 1/16 | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • My server is slower than the average user's computer; should I still offload Access queries to SQL Server? [closed]

    - by andrewb
    Possible Duplicate: How do you do Load Testing and Capacity Planning for Databases I have a database set up with MS Access 2007 front ends and an SQL Server 2005 back end. At the moment, all the queries are saved in the front end as I've only recently moved to an SQL Server backend. I'm wondering how much of those queries I should save as stored procedures/views on SQL Server. About the system: The number of concurrent users is only a handful, though it could be as high as 25 at one time (very unlikely). The average computer has an Intel i3-2120 CPU running at 3.3 GHz, which gets a PassMark score of 3,987, whilst the server has an Intel Xeon E5335 running at 2.0 GHz, which gets a PassMark score of 2,637. Always an awkward situation when an i3 outperforms a Xeon... though the i3 is from Q1 2011 and the Xeon is Q2 2009. There is potential for a server upgrade in the future, though it wouldn't come easily. I'm inclined to move the queries to the back end, as they are beginning to take noticeable time and I figure that is a better way of doing things. I like the idea of throwing everything at the server, then pushing for a server upgrade. It makes more sense in my mind to be upgrading one server rather than 30 PCs. Or am I being overzealous? Why my question isn't a duplicate: It seems that my question has been misinterpreted and labelled a duplicate of quite a different question, one about testing and capacity planning. I'll try to explain how my question is very different from the linked question. The crux of my question is something like "Even though my server is technically slower, is it better to have it doing more of the queries?" There are two ways that people could have answered this: I agree the server is going to be slower, but the extra benefits of such and such (like the less Access the better) mean you should move most to the server anyway (OR no, it doesn't outweigh the benefit, keep them in Access). Actually the server will be faster because of such and such. I'm hoping that people out there could provide some answers like this, and the question in the dupe link doesn't really provide either of these answers. OK, sure, I suppose I could do extensive performance testing to compare Access queries running on a local machine to SQL Server queries running on the server, but that sounds like a very hard task (particularly performance testing of Access) compared to someone giving some quick general guidance, and again, my question is looking for a lot more than immediate performance benefit.
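    For illustration only, a minimal sketch of what "moving a query to the back end" looks like in practice: a saved Access query re-created as a SQL Server view. The table, column and view names below are hypothetical, not taken from the question.

      -- Hypothetical example: a saved Access query re-created as a SQL Server view
      CREATE VIEW dbo.vw_OpenOrders
      AS
      SELECT OrderID, CustomerID, OrderDate
      FROM   dbo.Orders
      WHERE  OrderStatus = 'Open';

    Access can then link vw_OpenOrders like a table (or call it through a pass-through query), so the joins and filtering run on the server and only the result set crosses the network - which is the trade-off the question is weighing, independent of raw CPU benchmarks.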

    Read the article

  • Finding the A Record for a Home Server [closed]

    - by Ryan Allred
    I have a hosted website that allows me to add subdomains and point them to different locations on the server. I also know that I can change the 'A Record' to point it to a different IP address. Now, here's the question: I have a home server that I need to access through this hosted website's domain name. How would I go about setting up the 'A Record' on the server to point to my home server? I'm usually pretty good about keyword searches, but on this one I'm pretty lost as to what to search for.
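    For reference, an A record for a home server usually boils down to a single entry like the following zone-file sketch (the host name and IP address are placeholders, not taken from the question):

      ; "home" becomes home.example.com; 203.0.113.45 stands in for the home connection's public IP
      home    3600    IN    A    203.0.113.45

    Most hosting control panels expose the same fields (name/host, type A, value, TTL). If the home connection has a dynamic IP, a dynamic-DNS update client is worth considering, and the home router still needs to forward the relevant ports to the server.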

    Read the article

  • Is it possible to have multiple rewrite rules that all do the same action, for an IIS 7.5 web server?

    - by Pure.Krome
    I've got the rewrite module working great for my IIS 7.5 site. Now I wish to add a number of URLs that all go to an HTTP 410 Gone status. E.g. <rule name="Old Site = image1" patternSyntax="ExactMatch" stopProcessing="true"> <match url="image/loading_large.gif"/> <match url="image/aaa.gif"/> <match url="image/bbb.gif"/> <match url="image/ccc.gif"/> <action type="CustomResponse" statusCode="410" statusReason="Gone" statusDescription="The requested resource is no longer available" /> </rule> That's invalid, though - the website doesn't start, complaining that there's a rewrite config error. Is there another way I can do this? I don't particularly want to define a separate rule with a single URL and ACTION for each URL.
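    As far as the URL Rewrite schema goes, a rule takes a single <match> element, which is why the snippet above fails validation. One way to cover several URLs with one rule is a regular-expression pattern; a sketch using the file names from the question:

      <rule name="Old site images (410 Gone)" patternSyntax="ECMAScript" stopProcessing="true">
        <match url="^image/(loading_large|aaa|bbb|ccc)\.gif$" />
        <action type="CustomResponse" statusCode="410" statusReason="Gone"
                statusDescription="The requested resource is no longer available" />
      </rule>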

    Read the article

  • Is post-sudden-power-loss filesystem corruption on an SSD drive's ext3 partition "expected behavior"?

    - by Jeremy Friesner
    My company makes an embedded Debian Linux device that boots from an ext3 partition on an internal SSD drive. Because the device is an embedded "black box", it is usually shut down the rude way, by simply cutting power to the device via an external switch. This is normally okay, as ext3's journalling keeps things in order, so other than the occasional loss of part of a log file, things keep chugging along fine. However, we've recently seen a number of units where after a number of hard-power-cycles the ext3 partition starts to develop structural issues -- in particular, we run e2fsck on the ext3 partition and it finds a number of issues like those shown in the output listing at the bottom of this Question. Running e2fsck until it stops reporting errors (or reformatting the partition) clears the issues. My question is... what are the implications of seeing problems like this on an ext3/SSD system that has been subjected to lots of sudden/unexpected shutdowns? My feeling is that this might be a sign of a software or hardware problem in our system, since my understanding is that (barring a bug or hardware problem) ext3's journalling feature is supposed to prevent these sorts of filesystem-integrity errors. (Note: I understand that user-data is not journalled and so munged/missing/truncated user-files can happen; I'm specifically talking here about filesystem-metadata errors like those shown below) My co-worker, on the other hand, says that this is known/expected behavior because SSD controllers sometimes re-order write commands and that can cause the ext3 journal to get confused. In particular, he believes that even given normally functioning hardware and bug-free software, the ext3 journal only makes filesystem corruption less likely, not impossible, so we should not be surprised to see problems like this from time to time. Which of us is right? Embedded-PC-failsafe:~# ls Embedded-PC-failsafe:~# umount /mnt/unionfs Embedded-PC-failsafe:~# e2fsck /dev/sda3 e2fsck 1.41.3 (12-Oct-2008) embeddedrootwrite contains a file system with errors, check forced. Pass 1: Checking inodes, blocks, and sizes Pass 2: Checking directory structure Invalid inode number for '.' in directory inode 46948. Fix<y>? yes Directory inode 46948, block 0, offset 12: directory corrupted Salvage<y>? yes Entry 'status_2012-11-26_14h13m41.csv' in /var/log/status_logs (46956) has deleted/unused inode 47075. Clear<y>? yes Entry 'status_2012-11-26_10h42m58.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47076. Clear<y>? yes Entry 'status_2012-11-26_11h29m41.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47080. Clear<y>? yes Entry 'status_2012-11-26_11h42m13.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47081. Clear<y>? yes Entry 'status_2012-11-26_12h07m17.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47083. Clear<y>? yes Entry 'status_2012-11-26_12h14m53.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47085. Clear<y>? yes Entry 'status_2012-11-26_15h06m49.csv' in /var/log/status_logs (46956) has deleted/unused inode 47088. Clear<y>? yes Entry 'status_2012-11-20_14h50m09.csv' in /var/log/status_logs (46956) has deleted/unused inode 47073. Clear<y>? yes Entry 'status_2012-11-20_14h55m32.csv' in /var/log/status_logs (46956) has deleted/unused inode 47074. Clear<y>? yes Entry 'status_2012-11-26_11h04m36.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47078. Clear<y>? 
yes Entry 'status_2012-11-26_11h54m45.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47082. Clear<y>? yes Entry 'status_2012-11-26_12h12m20.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47084. Clear<y>? yes Entry 'status_2012-11-26_12h33m52.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47086. Clear<y>? yes Entry 'status_2012-11-26_10h51m59.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47077. Clear<y>? yes Entry 'status_2012-11-26_11h17m09.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47079. Clear<y>? yes Entry 'status_2012-11-26_12h54m11.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47087. Clear<y>? yes Pass 3: Checking directory connectivity '..' in /etc/network/run (46948) is <The NULL inode> (0), should be /etc/network (46953). Fix<y>? yes Couldn't fix parent of inode 46948: Couldn't find parent directory entry Pass 4: Checking reference counts Unattached inode 46945 Connect to /lost+found<y>? yes Inode 46945 ref count is 2, should be 1. Fix<y>? yes Inode 46953 ref count is 5, should be 4. Fix<y>? yes Pass 5: Checking group summary information Block bitmap differences: -(208264--208266) -(210062--210068) -(211343--211491) -(213241--213250) -(213344--213393) -213397 -(213457--213463) -(213516--213521) -(213628--213655) -(213683--213688) -(213709--213728) -(215265--215300) -(215346--215365) -(221541--221551) -(221696--221704) -227517 Fix<y>? yes Free blocks count wrong for group #6 (17247, counted=17611). Fix<y>? yes Free blocks count wrong (161691, counted=162055). Fix<y>? yes Inode bitmap differences: +(47089--47090) +47093 +47095 +(47097--47099) +(47101--47104) -(47219--47220) -47222 -47224 -47228 -47231 -(47347--47348) -47350 -47352 -47356 -47359 -(47457--47488) -47985 -47996 -(47999--48000) -48017 -(48027--48028) -(48030--48032) -48049 -(48059--48060) -(48062--48064) -48081 -(48091--48092) -(48094--48096) Fix<y>? yes Free inodes count wrong for group #6 (7608, counted=7624). Fix<y>? yes Free inodes count wrong (61919, counted=61935). Fix<y>? yes embeddedrootwrite: ***** FILE SYSTEM WAS MODIFIED ***** embeddedrootwrite: ********** WARNING: Filesystem still has errors ********** embeddedrootwrite: 657/62592 files (24.4% non-contiguous), 87882/249937 blocks Embedded-PC-failsafe:~# Embedded-PC-failsafe:~# e2fsck /dev/sda3 e2fsck 1.41.3 (12-Oct-2008) embeddedrootwrite contains a file system with errors, check forced. Pass 1: Checking inodes, blocks, and sizes Pass 2: Checking directory structure Directory entry for '.' in ... (46948) is big. Split<y>? yes Missing '..' in directory inode 46948. Fix<y>? yes Setting filetype for entry '..' in ... (46948) to 2. Pass 3: Checking directory connectivity '..' in /etc/network/run (46948) is <The NULL inode> (0), should be /etc/network (46953). Fix<y>? yes Pass 4: Checking reference counts Inode 2 ref count is 12, should be 13. Fix<y>? yes Pass 5: Checking group summary information embeddedrootwrite: ***** FILE SYSTEM WAS MODIFIED ***** embeddedrootwrite: 657/62592 files (24.4% non-contiguous), 87882/249937 blocks Embedded-PC-failsafe:~# Embedded-PC-failsafe:~# e2fsck /dev/sda3 e2fsck 1.41.3 (12-Oct-2008) embeddedrootwrite: clean, 657/62592 files, 87882/249937 blocks
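    A diagnostic sketch for narrowing down the "who reordered my writes" question, using the device and mount point from the listing above (commands assume a root shell; kernel defaults for ext3 barriers vary by distribution and version):

      # Is the SSD's volatile write cache enabled?
      hdparm -W /dev/sda

      # Disable it as an experiment (slower writes, but removes one source of reordering)
      hdparm -W0 /dev/sda

      # Mount with write barriers explicitly on; many older kernels defaulted ext3 to barrier=0
      mount -o barrier=1 /dev/sda3 /mnt/unionfs

    If the corruption stops with the write cache off or barriers on, that points toward the co-worker's explanation (cache reordering) rather than a software bug.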

    Read the article

  • Calendar sharing between unrelated Domains

    - by vlannoob
    I have a 'request' from one of the 'big guys' that has me scratching my head a bit. He is one of our executives who is also a board member of several other organisations, so he floats around between 4 different unrelated companies, each with their own domain, Exchange setup, etc. He has domain accounts in each organisation. He has an iPad with multiple Exchange accounts so he can see all his calendars, which works OK for him - Apple calendaring bugs/flaws aside. What he wants is the ability for 'reception staff' at each organisation to 'see' all his calendars as they are booking things for him in their respective organisations' calendars, without it conflicting with bookings made in his calendars by the other organisations... you with me? So, for example: Company A books a meeting into his Company A calendar at 9am Monday, Company B books him a meeting in his Company B calendar at 9:15am Monday on the other side of town, and of course Company C has him booked in all day Monday on their Company C calendar. He gets all of those on his iPad, but he would like either a 'global' calendar all can see and book into, or the ability for receptionists at Companies A, B and C to see all the calendars, to avoid these kinds of conflicts. I told him to 'go away' straight off the bat; I don't control anything to do with the other companies or know their infrastructure, and quite frankly I don't want any part of it... but he's whining and he's high enough up the food chain that I can't ignore him forever. I'm open to suggestions. Is there any third-party software or service that can facilitate this kind of setup? I really don't want to be creating users in my AD structure for people not in our organisation just so they can get access to his calendar, and I am sure their sysadmins feel the same. As usual - any advice is greatly appreciated ;)

    Read the article

  • How do you handle reboots?

    - by Mart
    We have one VPS (Windows 2008 R2 + IIS 7.5) with an ASP.NET MVC 3 application. The main question is: how do we handle things when Windows needs to reboot (after installing Windows Updates or anything else)? The goal is to keep the website up 24/7, but for now it's OK to show a message to the users ("we'll be back soon", something like app_offline.htm). Our application uses SQL and also writes/reads some files (uploaded photos, documents) which are not stored in SQL. What do you recommend? Load balancing with ARR (with 1+2 servers, but what if the front-end server needs a reboot)? Windows failover cluster? SQL failover cluster? What to do with the uploaded files? I really don't know what would be the best (and simplest) solution.
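    For the "we'll be back soon" part mentioned above, app_offline.htm is the simplest piece: while a file with that name sits in the application root, ASP.NET takes the app offline and serves the file's contents. A minimal sketch:

      <!-- app_offline.htm, placed in the application root -->
      <!DOCTYPE html>
      <html>
        <body>
          <h1>We'll be back soon</h1>
          <p>The site is down briefly for maintenance.</p>
        </body>
      </html>

    It only handles the messaging, though; staying up through a reboot still needs a second node (ARR or NLB in front, somewhere shared or replicated for the uploaded files, and mirrored or clustered SQL), which is exactly the trade-off the question lists.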

    Read the article

  • How to write re-usable puppet definitions?

    - by Oliver Probst
    I'd like to write a puppet manifest to install and configure an application on target servers. Parts of this manifest shall be re-usable. Thus I used define for defining my re-usable functionality. Doing so, I always have the problem that there are parts of the definition which are not re-usable. A simple example is a bunch of configuration files to be created. These files must be placed in the same directory. This directory must be created only once. Example: nodes.pp node 'myNode.in.a.domain' { mymodule::addconfig {'configfile1.xml': param => 'somevalue', } mymodule::addconfig {'configfile2.xml': param => 'someothervalue', } } mymodule.pp define mymodule::addconfig ($param) { $config_dir = "/the/directory/" #ensure that directory exists: file { $config_dir: ensure => directory, } #create the configuration file: file { $name: path => "${config_dir}/${name}", content => template('a_template.erb'), require => File[$config_dir], } } This example will fail, because now the resource file {$config_dir: is defined twice. As far as I understood, it is required to extract these parts into a class. Then it looks like this: nodes.pp node 'myNode.in.a.domain' { class { 'mymodule::createConfigurationDirectory': } mymodule::addconfig {'configfile1.xml': param => 'somevalue', require => Class ['mymodule::createConfigurationDirectory'], } mymodule::addconfig {'configfile2.xml': param => 'someothervalue', require => Class ['mymodule::createConfigurationDirectory'], } } But this makes my interface hard to use. Every user of my module has to know that there is an additional class which is required. For this simple use case the additional class might be acceptable, but with growing module complexity (lots of definitions) I'm a bit afraid of confusing the module's users. So I'd like to know: is there a better way to handle these dependencies? Ideally, classes like createConfigurationDirectory are hidden from the user of the module's API. Or are there some other best practices/patterns for handling such dependencies?
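    One common pattern (a sketch, reusing the names from the question): declare the directory in a small class and include that class from inside the define. include can be evaluated any number of times, so the duplicate-resource error disappears and callers never have to know the class exists.

      class mymodule::config_dir {
        file { '/the/directory':
          ensure => directory,
        }
      }

      define mymodule::addconfig ($param) {
        include mymodule::config_dir

        file { "/the/directory/${name}":
          content => template('a_template.erb'),
          require => Class['mymodule::config_dir'],
        }
      }

    The node manifest then only declares mymodule::addconfig resources, as in the first example.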

    Read the article

  • ipvsadm lists a few hosts by IP only, rest by name

    - by dmourati
    We use keepalived to manage our Linux Virtual Server (LVS) load balancer. The LVS VIPs are setup to use a FWMARK as configured in iptables. virtual_server fwmark 300000 { delay_loop 10 lb_algo wrr lb_kind NAT persistence_timeout 180 protocol TCP real_server 10.10.35.31 { weight 24 MISC_CHECK { misc_path "/usr/local/sbin/check_php_wrapper.sh 10.10.35.31" misc_timeout 30 } } real_server 10.10.35.32 { weight 24 MISC_CHECK { misc_path "/usr/local/sbin/check_php_wrapper.sh 10.10.35.32" misc_timeout 30 } } real_server 10.10.35.33 { weight 24 MISC_CHECK { misc_path "/usr/local/sbin/check_php_wrapper.sh 10.10.35.33" misc_timeout 30 } } real_server 10.10.35.34 { weight 24 MISC_CHECK { misc_path "/usr/local/sbin/check_php_wrapper.sh 10.10.35.34" misc_timeout 30 } } } http://www.austintek.com/LVS/LVS-HOWTO/HOWTO/LVS-HOWTO.fwmark.html [root@lb1 ~]# iptables -L -n -v -t mangle Chain PREROUTING (policy ACCEPT 182G packets, 114T bytes) 190M 167G MARK tcp -- * * 0.0.0.0/0 w1.x1.y1.4 multiport dports 80,443 MARK set 0x493e0 62M 58G MARK tcp -- * * 0.0.0.0/0 w1.x1.y2.4 multiport dports 80,443 MARK set 0x493e0 [root@lb1 ~]# ipvsadm -L IP Virtual Server version 1.2.1 (size=4096) Prot LocalAddress:Port Scheduler Flags -> RemoteAddress:Port Forward Weight ActiveConn InActConn FWM 300000 wrr persistent 180 -> 10.10.35.31:0 Masq 24 1 0 -> dis2.domain.com:0 Masq 24 3 231 -> 10.10.35.33:0 Masq 24 0 208 -> 10.10.35.34:0 Masq 24 0 0 At the time the realservers were setup, there was a misconfigured dns for some hosts in the 10.10.35.0/24 network. Thereafter, we fixed the DNS. However, the hosts continue to show up as only their IP numbers (10.10.35.31,10.10.35.33,10.10.35.34) above. [root@lb1 ~]# host 10.10.35.31 31.35.10.10.in-addr.arpa domain name pointer dis1.domain.com. OS is CentOS 6.3. Ipvsadm is ipvsadm-1.25-10.el6.x86_64. kernel is kernel-2.6.32-71.el6.x86_64. Keepalived is keepalived-1.2.7-1.el6.x86_64. How can we get ipvsadm -L to list all realservers by their proper hostnames?

    Read the article

  • Windows Server Backup 2012 error - "The version does not support this version of the file format."

    - by Cylindric
    I'm trying out Windows Server 2012, specifically Windows Server Backup, as it would be a very useful way to see us through until we can upgrade our 'real' backup system. I'm backing up to a network share, and a Server 2008 WSB works fine. On the 2012 server however, I get an error: Backup of volume \?\Volume{298d1a7d-f80f-11e1-93e8-806e6f6e6963}\ has failed. The version does not support this version of the file format. It's a VHDX written by WSB, so I'm not sure what version it's complaining about. I can see a whole bunch of files in the destination, so I don't think it's a simple authentication issue, but I only get about 8Mb of VHDX written.

    Read the article

  • Torque jobs do not enter "E" state (unless started with "qrun")

    - by Vi.
    Jobs I add to the queue stays there in "Queued" state without attempts to be executed (unless I manually qrun them) /var/spool/torque/server_logs say just 04/11/2011 12:43:27;0100;PBS_Server;Job;16.localhost;enqueuing into batch, state 1 hop 1 04/11/2011 12:43:27;0008;PBS_Server;Job;16.localhost;Job Queued at request of test@localhost, owner = test@localhost, job name = Qqq, queue = batch The job requires just 1 CPU on 1 node. # qmgr -c "list queue batch" Queue batch queue_type = Execution total_jobs = 0 state_count = Transit:0 Queued:0 Held:0 Waiting:0 Running:0 Exiting:0 max_running = 3 acl_host_enable = True acl_hosts = localhost resources_min.ncpus = 1 resources_min.nodect = 1 resources_default.ncpus = 1 resources_default.nodes = 1 resources_default.walltime = 00:00:10 mtime = Mon Apr 11 12:07:10 2011 resources_assigned.ncpus = 0 resources_assigned.nodect = 0 kill_delay = 3 enabled = True started = True I can't set resources_assigned to nonzero because of Cannot set attribute, read only or insufficient permission resources_assigned.ncpus. When I qrun some task, this goes to mom's log: 04/11/2011 21:27:48;0001; pbs_mom;Svr;pbs_mom;LOG_DEBUG::mom_checkpoint_job_has_checkpoint, FALSE 04/11/2011 21:27:48;0001; pbs_mom;Job;TMomFinalizeJob3;job 18.localhost started, pid = 28592 04/11/2011 21:27:48;0080; pbs_mom;Job;18.localhost;scan_for_terminated: job 18.localhost task 1 terminated, sid=28592 04/11/2011 21:27:48;0008; pbs_mom;Job;18.localhost;job was terminated 04/11/2011 21:27:48;0080; pbs_mom;Svr;preobit_reply;top of preobit_reply 04/11/2011 21:27:48;0080; pbs_mom;Svr;preobit_reply;DIS_reply_read/decode_DIS_replySvr worked, top of while loop 04/11/2011 21:27:48;0080; pbs_mom;Svr;preobit_reply;in while loop, no error from job stat 04/11/2011 21:27:48;0080; pbs_mom;Job;18.localhost;obit sent to server Scheduler log (/var/spool/torque/sched_logs/20110705): 07/05/2011 21:44:53;0002; pbs_sched;Svr;Log;Log opened 07/05/2011 21:44:53;0002; pbs_sched;Svr;TokenAct;Account file /var/spool/torque/sched_priv/accounting/20110705 opened 07/05/2011 21:44:53;0002; pbs_sched;Svr;main;/usr/sbin/pbs_sched startup pid 16234 qstat -f: Job Id: 26.localhost Job_Name = qwe Job_Owner = test@localhost job_state = Q queue = batch server = localhost Checkpoint = u ctime = Tue Jul 5 21:43:31 2011 Error_Path = localhost:/home/test/jscfi/default/0.738784810485275/qwe.e26 Hold_Types = n Join_Path = n Keep_Files = n Mail_Points = a mtime = Tue Jul 5 21:43:31 2011 Output_Path = localhost:/home/test/jscfi/default/0.738784810485275/qwe.o26 Priority = 0 qtime = Tue Jul 5 21:43:31 2011 Rerunable = True Resource_List.ncpus = 1 Resource_List.neednodes = 1:ppn=1 Resource_List.nodect = 1 Resource_List.nodes = 1:ppn=1 Resource_List.walltime = 00:01:00 substate = 10 Variable_List = PBS_O_HOME=/home/test,PBS_O_LANG=en_US.UTF-8, PBS_O_LOGNAME=test, PBS_O_PATH=/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/games, PBS_O_MAIL=/var/mail/test,PBS_O_SHELL=/bin/sh,PBS_SERVER=127.0.0.1, PBS_O_WORKDIR=/home/test/jscfi/default/0.738784810485275, PBS_O_QUEUE=batch,PBS_O_HOST=localhost euser = test egroup = test queue_rank = 1 queue_type = E etime = Tue Jul 5 21:43:31 2011 submit_args = run.pbs Walltime.Remaining = 6 fault_tolerant = False How to make it execute jobs automatically, without manual qrun?
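    One thing worth ruling out (a guess, since the logs above never show the scheduler being asked to run a cycle): pbs_server only hands jobs to pbs_sched when its scheduling attribute is true. A quick check-and-fix sketch:

      # Show the server attributes and look for "scheduling"
      qmgr -c "print server"

      # Turn it on if it is false
      qmgr -c "set server scheduling = true"

    With scheduling off, jobs sit in the queue until started by hand with qrun, which matches the symptom described.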

    Read the article

  • Logging to MySQL without empty rows/skipped records?

    - by Lee Ward
    I'm trying to figure out how to make Squid proxy log to MySQL. I know ACL order is pretty important but I'm not sure if I understand exactly what ACLs are or do, it's difficult to explain, but hopefully you'll see where I'm going with this as you read! I have created the lines to make Squid interact with a helper in squid.conf as follows: external_acl_type mysql_log %LOGIN %SRC %PROTO %URI php /etc/squid3/custom/mysql_lg.php acl ex_log external mysql_log http_access allow ex_log The external ACL helper (mysql_lg.php) is a PHP script and is as follows: error_reporting(0); if (! defined(STDIN)) { define("STDIN", fopen("php://stdin", "r")); } $res = mysql_connect('localhost', 'squid', 'testsquidpw'); $dbres = mysql_select_db('squid', $res); while (!feof(STDIN)) { $line = trim(fgets(STDIN)); $fields = explode(' ', $line); $user = rawurldecode($fields[0]); $cli_ip = rawurldecode($fields[1]); $protocol = rawurldecode($fields[2]); $uri = rawurldecode($fields[3]); $q = "INSERT INTO logs (id, user, cli_ip, protocol, url) VALUES ('', '".$user."', '".$cli_ip."', '".$protocol."', '".$uri."');"; mysql_query($q) or die (mysql_error()); if ($fault) { fwrite(STDOUT, "ERR\n"); }; fwrite(STDOUT, "OK\n"); } The configuration I have right now looks like this: ## Authentication Handler auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp auth_param ntlm children 30 auth_param negotiate program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic auth_param negotiate children 5 # Allow squid to update log external_acl_type mysql_log %LOGIN %SRC %PROTO %URI php /etc/squid3/custom/mysql_lg.php acl ex_log external mysql_log http_access allow ex_log acl localnet src 172.16.45.0/24 acl AuthorizedUsers proxy_auth REQUIRED acl SSL_ports port 443 acl Safe_ports port 80 # http acl Safe_ports port 21 # ftp acl Safe_ports port 443 # https acl CONNECT method CONNECT acl blockeddomain url_regex "/etc/squid3/bl.acl" http_access deny blockeddomain deny_info ERR_BAD_GENERAL blockeddomain # Deny requests to certain unsafe ports http_access deny !Safe_ports # Deny CONNECT to other than secure SSL ports http_access deny CONNECT !SSL_ports # Allow the internal network access to this proxy http_access allow localnet # Allow authorized users access to this proxy http_access allow AuthorizedUsers # FINAL RULE - Deny all other access to this proxy http_access deny all From testing, the closer to the bottom I place the logging lines the less it logs. Oftentimes, it even places empty rows in to the MySQL table. The file-based logs in /var/log/squid3/access.log are correct but many of the rows in the access logs are missing from the MySQL logs. I can't help but think it's down to the order I'm putting lines in because I want to log everything to MySQL, unauthenticated requests, blocked requests, which category blocked a specific request. The reason I want this in MySQL is because I'm trying to have everything managed via a custom web-based frontend and want to avoid using any shell commands and access to system log files if I can help it. The end result is to make it as easy as possible to maintain without keeping staff waiting on the phone whilst I add a new rule and reload the server! Hopefully someone can help me out here because this is very much a learning experience for me and I'm pretty stumped. Many thanks in advance for any help!
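    One detail that could explain the missing rows (offered as a hypothesis, not a confirmed diagnosis): Squid caches external ACL verdicts, so the helper is not invoked again for requests that match a cached result. The external_acl_type directive takes cache-tuning options; a sketch, with the caveat that option names and defaults differ between Squid versions (check squid.conf.documented for the installed release):

      # Ask Squid not to reuse helper verdicts, so every request reaches the helper
      external_acl_type mysql_log ttl=0 negative_ttl=0 %LOGIN %SRC %PROTO %URI php /etc/squid3/custom/mysql_lg.php
      acl ex_log external mysql_log
      http_access allow ex_log

    Separately, because http_access rules are evaluated top to bottom and the helper always answers OK, the early "http_access allow ex_log" line effectively allows every request before the deny rules are reached - worth keeping in mind while reordering.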

    Read the article

  • File permissions set to 644 and WordPress cannot access them?

    - by Joel
    I'm having problems with WordPress not being able to access files. When installing certain themes and plugins, it comes up with an error saying that it cannot create the directory. When I try to edit style.css it says that I need to make that file writable. The file permissions were set to 644. It wouldn't work until I changed the settings to 777 or 776. WordPress was installed by our local ISP. Anyone got any ideas? It seems that WordPress has not been set up properly. Is there any way I can fix this without reinstalling the whole thing?
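    A common cause (an assumption about this setup, since the hosting details aren't shown): the files are owned by a different account than the one the web server, and therefore WordPress, runs as, so 644 denies it write access and only world-writable 777/776 "works". Matching the ownership lets the permissions go back to 644/755. A sketch, assuming Apache running as www-data and WordPress in /var/www/html; both names vary by host:

      # Which user does the web server run as?
      ps aux | grep -E 'apache|httpd|php-fpm'

      # Give that user ownership of the WordPress tree, then restore sane permissions
      chown -R www-data:www-data /var/www/html
      find /var/www/html -type d -exec chmod 755 {} \;
      find /var/www/html -type f -exec chmod 644 {} \;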

    Read the article

  • What's the best way to telnet from a remote Windows PC without using RDP?

    - by Rob D.
    Three Networks: 10.1.1.0 - Mine 172.1.1.0 - My Branch Office 172.2.2.0 - My Branch Office's VOIP VLAN. My PC is on 10.1.1.0. I need to telnet into a Cisco router on 172.2.2.0. The 10.1.1.0 network has no routes to 172.2.2.0, but a VPN connects 10.1.1.0 to 172.1.1.0. Traffic on 172.1.1.0 can route to 172.2.2.0. All PCs on 172.1.1.0 are running Windows XP. Without disrupting anyone using those PCs, I want to open a telnet session from one of those PCs to the router on 172.2.2.0. I've tried the following: psexec.exe \\branchpc telnet 172.2.2.1 psexec.exe \\branchpc cmd.exe telnet 172.2.2.1 psexec.exe \\branchpc -c plink -telnet 172.2.2.1 Methods 1 and 2 both failed because telnet.exe is not usable over psexec. Method 3 actually succeeded in creating the connection, but I cannot login because the session registers my carriage return twice. My password is always blank because at the "Username:" prompt I'm effectively typing: Routeruser[ENTER][ENTER] It's probably time to deploy WinRM... Does anyone know of any other alternatives? Does anyone know how I can fix plink.exe so it only receives one carriage return when I use it over psexec?

    Read the article

  • Why are symbolic links not working in MySQL?

    - by Eno
    I'm having an issue; I searched a lot, but I'm not sure if it's related to a previous security patch. On the latest version of MySQL on Debian Lenny (5.0.51a-24) I need to share one table between two databases; those two databases are in the same path (/var/lib/mysql/db1 & db2). I created symbolic links for db2 pointing to the table in db1. When I query the same table from db2 I get this: 'ERROR 1030 (HY000): Got error 140 from storage engine' This is how it looks: test-lan:/var/lib/mysql/test3# ls -alh drwx------ 2 mysql mysql 4.0K 2010-08-30 13:28 . drwxr-xr-x 6 mysql mysql 4.0K 2010-08-30 13:29 .. lrwxrwxrwx 1 mysql mysql 28 2010-08-30 13:28 blbl.frm -> /var/lib/mysql/test/blbl.frm lrwxrwxrwx 1 mysql mysql 28 2010-08-30 13:28 blbl.MYD -> /var/lib/mysql/test/blbl.MYD lrwxrwxrwx 1 mysql mysql 28 2010-08-30 13:28 blbl.MYI -> /var/lib/mysql/test/blbl.MYI -rw-rw---- 1 mysql mysql 65 2010-08-30 13:24 db.opt I really need those symlinks; is there a way to make them work like before? (The old MySQL server is fine.) Thanks,
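    A quick first check (a guess, since the server configuration isn't shown): the server may simply have symlink support switched off, which some packaged or hardened builds do. The have_symlink variable reports it:

      -- from the mysql client
      SHOW VARIABLES LIKE 'have_symlink';
      -- DISABLED here means mysqld was started with skip-symbolic-links (or symbolic-links = 0 in my.cnf)

    It is also worth checking the manual for this MySQL version on what may be symlinked at all; the supported mechanism is aimed at MyISAM data/index files (or whole database directories), and symlinking the .frm file itself may be the part that breaks.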

    Read the article

  • Is using Windows Update to find drivers for a laptop good enough?

    - by user22105
    I'm trying to install drivers for my laptop after a fresh install of Windows 8. From the manufacturer's website, there are many drivers and utilities available, but after running Windows Update (which finds drivers for the hardware on my laptop), I only had to install a few (the ones showing as Unidentified Device). I'm wondering if this is good enough, or would I be skipping some important drivers (chipset, thermal management, etc.) that could potentially cause harm to my laptop?

    Read the article

  • SketchUp: Weird regions in sketch

    - by Josh M.
    I'm using SketchUp 8.0.15158. My sketch has weird regions/lines in it that I can't explain or get rid of. I've redrawn almost the whole thing and the lines persist! See here, highlighted in yellow: These dotted lines only show up when I triple-click the back face to select the whole object. Here's what the sketch looks like without selections: What are these lines and how can I get rid of them? They are causing weird issues that I can't seem to get around.

    Read the article

  • IIS8 behind a VPN + Windows Server 2012 - how to properly bind IP+Port

    - by ryugen
    This is my first question so I hope I'm going to give you enough information. I'm running Windows Server 2012 within the Hyper-V environment of my Windows 8 machine. Within Windows Server 2012 I'm running a VPN tool based on openVPN to hide my real IP. When I run IIS8 with the VPN disconnected it works flawlessly through the Internet (port 80 forwarded correctly). But as soon as I connect to the VPN I can't reach my site through the domain anymore. Now I tried basically everything I know which is why I'm asking you guys. I tried binding IIS8 to the IP of my virtual ethernet card. I tried changing the priority of the NIC through the "Network and sharing center" via the advanced tab. I used ipconfig /flushdns in case there was something wrong in the DNS handling. Hell, I even turned off the Windows firewall. I also used a port scanner to verify the problem. The webserver is reachable on port 80 with VPN disconnected and immediately gets unreachable on connect. Theoretically both IPs (my regular one AND the VPN) should be reachable or at least not impair the other one right? Do you have any other suggestion? Do I have to route something somewhere somehow?
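    Two quick checks that make the binding side visible (a diagnostic sketch; both tools ship with Windows Server 2012):

      REM Which addresses HTTP.SYS is listening on (an empty list means "all addresses")
      netsh http show iplisten

      REM What IIS itself is bound to
      %windir%\system32\inetsrv\appcmd.exe list site

    If both look right, the likelier culprit is routing rather than binding: with the VPN up, the default route points into the tunnel, so replies to Internet clients leave via the VPN instead of the LAN gateway and never make it back through the forwarded port.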

    Read the article

  • PostgreSQL 9.2: where is initdb located on Ubuntu?

    - by thanikkal
    I am trying to install Postgres on EC2 / EBS. I am following this article and am stuck at the following step: sudo su - su postgres - /usr/pgsql-9.0/bin/initdb -D /pgdata I can't find the initdb command at the stated location; as a matter of fact, I can't find the pgsql* directory at all under the /usr folder. Was this changed for Postgres 9.2, or is there an alternate command that would help me run initdb? Edit 1: I know the folder pgsql-9.0 is version-specific, so I was expecting to see something like pgsql-9.2 or similar.
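    The /usr/pgsql-9.x/... path in that article comes from the PGDG packages for RedHat-family systems; the Ubuntu/Debian packages lay things out differently and also ship wrapper commands. A sketch of both routes, assuming the stock postgresql-9.2 package and that /pgdata exists and is writable by the postgres user:

      # Direct path used by Ubuntu/Debian packaging
      sudo -u postgres /usr/lib/postgresql/9.2/bin/initdb -D /pgdata

      # Or let the Debian wrapper create and register a cluster there
      sudo pg_createcluster 9.2 main -d /pgdata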

    Read the article

  • How to get the ASUS UEFI BIOS EZ Mode utility on my ASUS laptop?

    - by Manoj Kumar
    I have an ASUS laptop with the following specs: Manufacturer ASUSTeK Computer Inc., Model K53SC (CPU 1), Version 1.0, Chipset Vendor Intel, Chipset Model Sandy Bridge, BIOS Brand American Megatrends Inc., BIOS Version K53SC.208, Operating System MS Windows 7 Ultimate 64-bit. In the BIOS settings I have enabled 'UEFI', but I'm still not able to see the graphical interface of UEFI.

    Read the article

  • Outlook 2010 keeps losing the search index for emails

    - by Igor K
    Hoping someone can help here - this is driving me insane. Outlook 2010 keeps losing the search index, so when I search for an email it shows the yellow bar saying: "search results may be incomplete because items are still being indexed". Clicking on the bar says e.g.: "49200 items remaining to be indexed". If it makes any difference, this is an IMAP account. If I leave Outlook open all day it will eventually index everything, but then, say, a week/month later it all happens again.

    Read the article

  • Enter key on Mac OS X renames files instead of launching

    - by Matt
    On my Mac when I hit the "enter" key on a file on my desktop or in the finder, it enters the "rename file" mode. 1) How do you actually "launch" or "open" (i.e. double click) the file with only the keyboard? In Windows, the Enter key does this. 2) Is there a way to map the Enter key to do this in Mac instead of rename? Honestly, it makes little sense to me. I mean, how often do you rename something? Once, maybe. How often do you open something? Lots and lots of times.

    Read the article

  • How can I play an online video in full screen AND type in a text editor at the same time?

    - by Edward Tanguay
    I watch instructional videos that play in Silverlight such as the ones at http://windowsclient.net/learn. I watch them in full screen on my left monitor. While the video is playing, I want to be able to type notes in a text editor on my right monitor. However, as soon as I press a key, the video exits full screen mode. How can I force the Silverlight player to stay in full screen mode even as I type in another application?

    Read the article

  • SQL SERVER – Fix: Error : 402 The data types ntext and varchar are incompatible in the equal to operator

    - by pinaldave
    Some errors are very simple to understand, but the solution is not easy to figure out. Here is one such error: it clearly suggests where the problem is, but does not tell you what the solution is. Additionally, there are multiple solutions, so developers often get confused about which one is correct and which one is not. Let us first recreate the scenario and understand where the problem is. Let us run the following: USE Tempdb GO CREATE TABLE TestTable (ID INT, MyText NTEXT) GO SELECT ID, MyText FROM TestTable WHERE MyText = 'AnyText' GO DROP TABLE TestTable GO When you run the above script it will give you the following error. Msg 402, Level 16, State 1, Line 1 The data types ntext and varchar are incompatible in the equal to operator. One of the questions I often receive is that varchar is surely compatible with the equal to operator, so why does this error show up? Well, the answer is much simpler: I think we have not understood the error message properly. Please see the image below. The ntext and varchar types are not compatible when compared with each other using the equals sign. Now let us change the data type on the right side of the string to nvarchar from varchar. To do that we will put N' before the string. USE Tempdb GO CREATE TABLE TestTable (ID INT, MyText NTEXT) GO SELECT ID, MyText FROM TestTable WHERE MyText = N'AnyText' GO DROP TABLE TestTable GO When you run the above script it will give the following error. Msg 402, Level 16, State 1, Line 1 The data types ntext and nvarchar are incompatible in the equal to operator. You can see that the error message now suggests that we are comparing ntext to nvarchar. Now that we have understood the error properly, let us see various solutions to the above problem. Solution 1: Convert the data types to match each other using the CONVERT function, casting MyText to nvarchar in the query: SELECT ID, MyText FROM TestTable WHERE CONVERT(NVARCHAR(MAX), MyText) = N'AnyText' GO Solution 2: Convert the data type of the column from NTEXT to NVARCHAR(MAX) (TEXT to VARCHAR(MAX)): ALTER TABLE TestTable ALTER COLUMN MyText NVARCHAR(MAX) GO Now you can run the original query again and it will work fine. Solution 3: Use the LIKE operator instead of the equal to operator. SELECT ID, MyText FROM TestTable WHERE MyText LIKE 'AnyText' GO Well, any of the three solutions will work. Here is my suggestion: if you can change the column data type from ntext or text to nvarchar or varchar, you should follow that path, as the text and ntext data types are marked as deprecated. All developers will have to change the deprecated data types anyway in the future, so it is a good idea to change them early. If for any reason you cannot convert the original column, use Solution 1 as a temporary fix. Solution 3 is not the best solution; use it only as a last option. Did I miss any other method? If yes, please let me know and I will add the solution to the original blog post with due credit. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • New Supply Chain, S&OP, & TPM Analyst Reports from Gartner, IDC Now Available

    - by Mike Liebson
    Check out these analyst reports Oracle has recently made available for customers and partners on Oracle.com: Gartner:  MarketScope for Stage 3 Sales and Operations Planning  -  Gartner lead supply chain planning analyst, Tim Payne, discusses the evolving definition of S&OP, the Gartner S&OP maturity model, and recommendations for selecting S&OP technology solutions. Gartner: Vendor Panorama for Trade Promotion Management in Consumer Goods  -  Consumer goods analyst, Dale Hagemeyer, presents an overview of the TPM market, followed by an analysis of vendor offerings. IDC:  Perspective: Oracle OpenWorld 2012 — Supply Chain as a Focus  -  Supply chain analyst, Simon Ellis, discusses supply chain highlights from the October OpenWorld conference. Value Chain Planning highlights include the VCP product roadmap and demand sensing presentations by Electronic Arts (Demantra) and Sony (Demand Signal Repository). For a complete set of analyst reports, visit here.

    Read the article

  • 'rman' cheat-sheet and rlwrap completion

    - by katsumii
    I started using 'rlwrap' some months ago, like one of my colleagues does. bash-like features in sqlplus, rman and other Oracle command line tools (Oracle Luxembourg Core Tech' Blog by Gilles Haro) One can find a specific Oracle extension for databases 9i, 10g and 11g (keyword textfile) over here. This saves you the need to create the .oracle_keywords file yourself. There is an 'rman' keyword file in the link above. I experimented a little and found some missing keywords, which are: MAXCORRUPTION PRIMARY NOCFAU VIRTUAL COMPRESSION FOREIGN With these words added, 'rman' works like this: $ rlwrap -f ~/rman $ORACLE_HOME/bin/rman Recovery Manager: Release 11.2.0.3.0 - Production on Mon Dec 3 02:56:04 2012 Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved. RMAN> <-- Hit TAB Display all 211 possibilities? (y or n) As you can guess, this completion is not context aware. I found these missing words by creating a kind of 'cheat sheet' for rman with a script like the one below. This sheet contains a list of verbs and first operands. I uploaded it here so one can create a coffee cup with a lot of esoteric words printed on it :) validWords() { sed -n 's/^RMAN-01009: syntax error: found "identifier": expecting one of: //p' \ | sed -r 's/double-quoted-string, single-quoted-string/Some String/;s/, /" "/g;s/""//' } echo "Bogus" | rman | validWords > /tmp/rman.$$ for i in $(cat /tmp/rman.$$) do i=$(echo $i | tr -d '"') echo "#### $i ####" echo "$i Bogus" | rman | validWords done One can find more keywords in the document here.

    Read the article
