Search Results

Search found 99017 results on 3961 pages for 'server side events'.


  • Metrics - A little knowledge can be a dangerous thing (or 'Why you're not clever enough to interpret metrics data')

    - by Jason Crease
    At RedGate Software, I work on a .NET obfuscator called SmartAssembly. Various features of it use a database to store various things (exception reports, name-mappings, etc.). The user is given the option of using either a SQL Server database (which requires them to have Microsoft SQL Server) or a Microsoft Access MDB file (which requires nothing). MDB is the default option, but power-users soon switch to using a SQL Server database because it offers better performance and data-sharing. In the fashionable spirit of optimization and metrics, an obvious product-management question is 'Which is the most popular? SQL Server or MDB?' We've collected data about this, using our 'Feature-Usage-Reporting' technology (available as part of SmartAssembly) and more recently our 'Application Metrics' technology:

        Parameter    Number of users    % of total users    Number of sessions    Number of usages
        SQL Server   28                 19.0                8115                  8115
        MDB          114                77.6                1449                  1449

    (As a disclaimer, please note that SmartAssembly has far more than 132 users. This data is just a selection from one build.) So it would appear that SQL Server is used by fewer users, but more often. Great. But here's why these numbers are useless to me:

    Only the original developers understand the data. What does a single 'usage' of 'MDB' mean? Does this happen once per run? Once per option change? On clicking the 'Obfuscate Now' button? When running the command-line version or just from the UI version? Each question could skew the data 10-fold either way, and the answers are only known by the developer who instrumented the application in the first place. In other words, only the original developer can interpret the data - product-managers cannot interpret the data unaided.

    Most of the data is from uninterested users. About half of the people who download and run a free trial from the internet quit it almost immediately. Only a small fraction use it sufficiently to make informed choices. Since the MDB option is the default one, we don't know how many of those 114 were people CHOOSING to use the MDB, or how many were JUST HAPPENING to use this MDB default for their 20-second trial. This is a problem we see across all our metrics: are people using X because it's the default, or are they using X because they want to use X? We need to segment the data further - asking what percentage of each percentage meet our criteria for an 'established user' or 'informed user'. You end up spending hours writing sophisticated and dubious SQL queries to segment the data further. Not fun.

    You can't find out why they used this feature. Metrics can answer the when and what, but not the why. Why did people use feature X? If you're anything like me, you often click on random buttons in unfamiliar applications just to explore the feature-set. If we listened uncritically to metrics at RedGate, we would eliminate the most-important and more-complex features which people actually buy the software for, leaving just big buttons on the main page and the About box.

    "Ah, that's interesting!" rather than "Ah, that's actionable!" People do love data. Did you know you eat 1201 chickens in a lifetime? But just 4 cows? Interesting, but useless. Often metrics give you a nice number: '5.8% of users have 3 or more monitors'. But unless the statistic is both SURPRISING and ACTIONABLE, it's useless. Most metrics are collected, reviewed with lots of cooing, and then forgotten. Unless a piece of data could change things, it's useless collecting it.

    People get obsessed with significance levels. The first thing that lots of people do with this data is run a t-test to get a significance level ("Hey! We know with 99.64% confidence that people prefer SQL Server to MDBs!"). Believe me: other causes of error/misinterpretation in your data are FAR more significant than your t-test could ever comprehend.

    Confirmation bias prevents objectivity. If the data appears to match our instinct, we feel satisfied and move on. If it doesn't, we suspect the data and dig deeper, plummeting down a rabbit-hole of segmentation and filtering until we give up and move on. Data is only useful if it can change our preconceptions. Do you trust this dodgy data more than your own understanding, knowledge and intelligence? I don't.

    There are always multiple plausible ways to interpret/action any data. Let's say we segment the above data, and get this:

    Post-trial users (i.e. those using a paid version after the 14-day free trial is over):

        Parameter    Number of users    % of total users    Number of sessions    Number of usages
        SQL Server   13                 9.0                 1115                  1115
        MDB          5                  4.2                 449                   449

    Trial users:

        Parameter    Number of users    % of total users    Number of sessions    Number of usages
        SQL Server   15                 10.0                7000                  7000
        MDB          114                77.6                1000                  1000

    How do you interpret this data? It's one of:

    1. Mostly SQL Server users buy our software. People who can't afford SQL Server tend to be unable to afford or unwilling to buy our software. Therefore, ditch MDB support.
    2. Our MDB support is so poor and buggy that our massive MDB user-base doesn't buy it. Therefore, spend loads of money improving it, and think about ditching SQL Server support.
    3. People 'graduate' naturally from MDB to SQL Server as they use the software more. Things are fine the way they are.
    4. We're marketing the tool wrong. The large number of MDB users represent uninformed downloaders. Tell marketing to aggressively target SQL Server users.

    To choose an interpretation you need to segment again. And again. And again, and again.

    Opting out is correlated with feature usage. Metrics collection tends to be opt-in, which skews the data even further. Between 5% and 30% of people choose to opt in to metrics (often called a 'customer improvement program' or something like that). Casual trial users who are uninterested in your product or company are less likely to opt in. This group is probably also likely to be MDB users. How much does this skew your data by? Who knows?

    It's not all doom and gloom. There are some things metrics can answer well. Environment facts: how many people have 3 monitors? Have Windows 7? Have .NET 4 installed? Have Japanese Windows? Minor optimizations: is the text box big enough for average user input? Performance data: how long does our app take to start? How many databases does the average user have on their server? As you can see, questions about who-the-user-is rather than what-the-user-does are easier to answer and action.

    Conclusion: Use SmartAssembly. If not for the metrics (called 'Feature-Usage-Reporting'), then at least for the obfuscation/error-reporting. Data raises more questions than it answers. Questions about environment are the easiest to answer.
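
    Purely as an illustration of the 'sophisticated and dubious SQL queries' mentioned above - a minimal sketch of a segmentation query, assuming hypothetical UsageEvents and Licences tables (none of these names come from the article):

        # Hypothetical example: count distinct post-trial users per storage option.
        # Server, database, table and column names are invented for illustration.
        sqlcmd -S metrics-server -d MetricsDb -Q "
            SELECT e.Parameter, COUNT(DISTINCT e.UserId) AS Users
            FROM   UsageEvents e
            JOIN   Licences    l ON l.UserId = e.UserId
            WHERE  l.IsPaid = 1                      -- post-trial users only
            GROUP  BY e.Parameter;"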

    Read the article

  • Varnish doesn't seem to be caching

    - by Charlie Somerville
    I've set up a Varnish cache mirror to sit in front of a file server, but it seems to be endlessly re-downloading data from my file server. There's about 100GB of data in total, but so far Varnish has downloaded 800GB from my file server. I'm using the default VCL file that comes with Varnish, and the response headers for files served by the file server are similar to the following:

        HTTP/1.1 200 OK
        Cache-Control: max-age=290304000, public
        Content-Type: image/jpeg
        Expires: Wed, 29 Dec 2010 21:38:33 GMT
        Server: Microsoft-IIS/7.0
        E-Tag: "8b4723296ab697530768f18b1378b269"
        Content-Disposition: inline; filename=image046.jpg;
        X-AspNet-Version: 4.0.30319
        X-Powered-By: ASP.NET
        Date: Thu, 23 Dec 2010 05:38:33 GMT
        Content-Length: 100592

    I'm starting varnishd with the following options:

        varnish/sbin/varnishd -a 0.0.0.0:80 -f varnish/etc/varnish/default.vcl -s file,varnish/var/lib/varnish/varnish_storage.bin,100G
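
    For illustration only (not from the original question), a hedged diagnostic sketch: before changing the VCL it can help to confirm whether requests are actually being served from cache and what TTL Varnish assigns to fetched objects.

        # Rough hit/miss picture from the running instance
        varnishstat -1 | egrep 'cache_hit|cache_miss|n_object'

        # Watch the TTL Varnish assigns as objects are fetched from the backend
        varnishlog -i TTL

        # Check whether the same URL is served from cache on a second request
        # (an X-Varnish header with two request IDs usually indicates a hit)
        curl -sI http://localhost/image046.jpg | grep -i x-varnish
        curl -sI http://localhost/image046.jpg | grep -i x-varnish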

    Read the article

  • bind: blackhole for invalid recursive queries?

    - by Udo G
    I have a name server that's publicly accessible since it is the authoritative name server for a couple of domains. Currently the server is flooded with faked type ANY requests for isc.org, ripe.net and similar (that's a known distributed DoS attack). The server runs BIND and has allow-recursion set to my LAN, so these requests are rejected. In such cases the server responds just with the authority and additional sections referring to the root servers. Can I configure BIND so that it completely ignores these requests, without sending a response at all?
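
    For illustration, one workaround that is often suggested for this kind of flood (not from the original question, and independent of BIND itself) is to drop the offending queries at the firewall before they reach named. The hex string below is the DNS wire encoding of 'isc.org' and would need to be adapted per abused name:

        # Drop UDP DNS queries whose payload contains the QNAME "isc.org"
        # (\x03isc\x03org\x00 in DNS wire format). Duplicate the rule per domain.
        iptables -A INPUT -p udp --dport 53 \
            -m string --algo bm --hex-string '|03 69 73 63 03 6f 72 67 00|' \
            -j DROP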

    Read the article

  • How to add exception for backup MX to tumgreyspf?

    - by Waleed Hamra
    I have an Ubuntu Raring server running Postfix/Dovecot as an email server, with tumgreyspf doing greylisting and SPF checks. My problem is that I also have a backup MX server, which is supposed to store my emails temporarily should my main server ever fail. It normally refuses to receive emails if it finds the main server online and functional. The problem is that when it does need to do its job, tumgreyspf rejects all emails from the backup MX with an error like this:

        Jun 27 16:18:13 hamra postfix/smtpd[28732]: NOQUEUE: reject: RCPT from mxbackup.mydomain.com[x.x.x.x]:
        550 5.7.1 <[email protected]>: Recipient address rejected: QUEUE_ID="" SPF Reports: 'SPF fail - not authorized';
        from=<[email protected]> to=<[email protected]> proto=SMTP helo=<mxbackup.mydomain.com>

    Any ideas?
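
    A hedged sketch of one common way to handle this in Postfix itself (file names are examples, the backup MX IP is a placeholder, and the restriction list must match whatever the server already uses for the tumgreyspf policy check):

        # /etc/postfix/main.cf fragment: whitelist the backup MX *before* the
        # policy check so its relayed mail bypasses SPF/greylisting.
        #   smtpd_recipient_restrictions =
        #       permit_mynetworks,
        #       check_client_access hash:/etc/postfix/mx_backup_access,
        #       reject_unauth_destination,
        #       check_policy_service unix:private/tumgreyspf,
        #       ...

        echo "x.x.x.x   OK" > /etc/postfix/mx_backup_access    # backup MX IP
        postmap /etc/postfix/mx_backup_access
        postfix reload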

    Read the article

  • Symantec CPS / Backup Exec 11D Service stuck in "Starting" Status

    - by user42289
    I have two Windows 2003 servers (one Standard Edition, one SBS), both SP2, and both are virtual machines on Microsoft Virtual Server 2005 R2. All of a sudden, about 2 weeks ago, Symantec Backup Exec / CPS 11D stopped working on them. One is the media server, one is our Exchange 2003 server. There is another copy of CPS on our file server whose service is running fine; however, the one that is fine is not a VM. When I say stopped working, I mean the "Backup Exec Continuous Protection Agent" service is stuck in "Starting" status. On the non-Exchange server I've tried uninstalling the Windows Updates that were installed around the time of the failure. I've tried repairing the CPS install. I've tried uninstalling and reinstalling it. Exact same problem in the end.

    Read the article

  • CentOS 6 - YUM Local Repo - Ensure consistent package distribution

    - by Mike Purcell
    I've read a few guides outlining how to set up a local YUM repo, but none of them explicitly answered my question: if I set up a local YUM repo, does that mean that any CentOS servers which pull from said repo will never be "ahead" of the local YUM repo? I want to ensure a consistent package distribution across all my servers. Right now, when I do a yum update, even on a daily basis, the servers can be out of alignment. For example, if I run yum update on my dev server in the morning, then run yum update on one of my production servers in the afternoon, the production server may have picked up a new version of a package that the dev server did not pick up, due to the time window between the update commands. Rather, I'd prefer to run yum update from my dev server, which has access to the remote upstream yum repos, then let it sit for 2 weeks, after which I run yum update on my production servers against the local repo on my dev server.
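
    For illustration, a minimal sketch of the usual local-mirror workflow (paths and repo IDs are examples, not from the question): mirror upstream into a directory on the dev server, publish that snapshot, and point the production servers only at it.

        # On the dev server: mirror the upstream repos into a local directory
        reposync --repoid=base --repoid=updates --download_path=/srv/localrepo
        createrepo /srv/localrepo/base
        createrepo /srv/localrepo/updates
        # then publish /srv/localrepo over HTTP (or NFS)

        # On each production server: a repo file pointing only at the snapshot
        # (and disable the stock CentOS-Base.repo entries so nothing else is used)
        cat > /etc/yum.repos.d/local.repo <<'EOF'
        [local-base]
        name=Local CentOS base snapshot
        baseurl=http://devserver/localrepo/base
        gpgcheck=1
        gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
        enabled=1
        EOF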

    Read the article

  • SQL SERVER FIX : ERROR : 4214 BACKUP LOG cannot be performed because there is no current database backup

    I recently got the following email from one of my readers. "Hi Pinal, even though my database is in full recovery mode, when I try to take a log backup I am getting the following error: BACKUP LOG cannot be performed because there is no current database backup. (Microsoft.SqlServer.Smo). How to fix it? Thanks, [name and email removed as requested]" Solution / Fix: This error can [...]
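
    A hedged illustration of the standard remedy for error 4214 (database name and backup paths are placeholders): the log backup chain only starts once at least one full database backup exists, so take a full backup before the first log backup.

        # Placeholder database name and paths - adjust to your environment.
        sqlcmd -S . -Q "BACKUP DATABASE [MyDb] TO DISK = N'D:\Backups\MyDb_full.bak';"
        sqlcmd -S . -Q "BACKUP LOG [MyDb] TO DISK = N'D:\Backups\MyDb_log.trn';"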

    Read the article

  • Getting the right results with bcp and DTS with multiple versions of SQL Server installed.

    - by fatherjack
    I was using SSIS for the first time on an instance the other day and came across this error when I executed a package: "Package migration from version 3 to version 2 failed with error 0xC001700A. The version number in the package is not valid. The version number cannot be greater than current version number." This was a pain and wasn't something that I was expecting; however, the error message made sense - the package was being executed by the wrong version of the executable. Not impossible to...(read more)
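
    As an illustrative sketch only (default install paths assumed, not taken from the article): when several SQL Server versions are installed, this error typically goes away if the package is run with the dtexec.exe that matches the version the package was built for, called with an explicit path.

        REM SSIS 2008 (package format version 3) - default install path assumed
        "C:\Program Files\Microsoft SQL Server\100\DTS\Binn\dtexec.exe" /FILE "C:\packages\MyPackage.dtsx"

        REM SSIS 2005 (package format version 2)
        "C:\Program Files\Microsoft SQL Server\90\DTS\Binn\dtexec.exe" /FILE "C:\packages\MyPackage.dtsx"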

    Read the article

  • How to control fan speed and temperatures on Asus A8Js laptop running Ubuntu Server?

    - by Azeworai
    Hi, I have tried installing asusfan and lm-sensors but I'm unable to control my fans to cool my laptop down sufficiently. Currently it overheats at about 100 degrees Celsius, and my sensors output somehow does not have any fan information in it:

        jackson@OLYMPIA:~$ sensors
        acpitz-virtual-0
        Adapter: Virtual device
        temp1:       +69.0°C  (crit = +110.0°C)

        coretemp-isa-0000
        Adapter: ISA adapter
        Core 0:      +66.0°C  (high = +100.0°C, crit = +100.0°C)

        coretemp-isa-0001
        Adapter: ISA adapter
        Core 1:      +66.0°C  (high = +100.0°C, crit = +100.0°C)

    I have checked my BIOS and there aren't any fan settings there. I can consistently make it overheat just by converting a video with Handbrake. I have ubuntu-desktop installed for a GUI. Is there a way for me to make the fans start spinning before the laptop reaches a critical temperature and shuts itself down?
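
    As a hedged sketch only - whether this helps depends on the laptop's embedded controller actually exposing a PWM interface, which the sensors output above suggests it may not - the fancontrol tooling that ships alongside lm-sensors is the usual first thing to try:

        sudo sensors-detect              # probe for additional sensor/PWM chips
        sudo pwmconfig                   # from the fancontrol package: map PWM outputs to fan inputs
        sudo nano /etc/fancontrol        # tune the temperature thresholds written by pwmconfig
        sudo service fancontrol restart  # apply the configuration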

    Read the article

  • How can I set up OpenVPN to accept more than 60 connections?

    - by Robin
    Greetings! We're using OpenVPN and today hit an unexpected connection limit of 60 - even though max-clients is set to the source code default of 1024. Server log:

        Tue Dec 21 13:49:41 2010 MULTI: new incoming connection would exceed maximum number of clients (60)

    We're slowly adding new clients to the VPN and expect to hit 200 some time next year, if we can get it working. We're running the server on Windows Server 2003 R2, OpenVPN 2.0.9. Server config as follows:

        local 192.168.10.211
        port 1195
        proto tcp
        dev tun
        dev-node OpenVPN_Vision
        ca vision_ca.crt
        cert vision_server.crt
        key vision_server.key  # This file should be kept secret
        dh vision_dh1024.pem
        server 192.168.211.0 255.255.255.0
        ifconfig-pool-persist vision_ipp.txt
        ;server-bridge 10.8.0.4 255.255.255.0 10.8.0.50 10.8.0.100
        ;client-to-client
        keepalive 10 120
        comp-lzo
        ;max-clients 100  # Default in source code is 1024
        persist-key
        persist-tun
        status openvpn-status-vision.log
        log vision.log
        verb 3

    I would greatly appreciate any help or input on this one. Thanks! Best regards, Robin
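
    A heavily hedged sketch, not from the original question: if the ceiling turns out to be specific to this one server process (for example a per-process limit in the old 2.0.9 Windows build), workarounds people sometimes use are upgrading OpenVPN and/or running a second server instance on another port with its own address pool and splitting clients between them. A hypothetical second-instance config fragment, otherwise mirroring the existing one:

        # hypothetical second instance (directive values are examples)
        port 1196
        dev-node OpenVPN_Vision2
        server 192.168.212.0 255.255.255.0
        status openvpn-status-vision2.log
        log vision2.log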

    Read the article

  • Switching PHP to FastCGI from mod_php broke AMFPHP

    - by wezzy
    Hi, I've just switched my Debian server from mod_php to FastCGI, following a tutorial. Everything went fine, but now I've found that one of the hosted applications, which uses AMFPHP for Flash remoting, is broken, and I'm trying to understand what happened. Looking at it with Firebug and FireAMF it seems that the responses have content, but the Flash callbacks never get called, and if I try to open the service browser it displays this error:

        (mx.rpc::Fault)#0
          errorID = 0
          faultCode = "Client.Error.RequestTimeout"
          faultDetail = "The request timeout for the sent message was reached without receiving a response from the server."
          faultString = "Request timed out"
          message = "faultCode:Client.Error.RequestTimeout faultString:'Request timed out' faultDetail:'The request timeout for the sent message was reached without receiving a response from the server.'"
          name = "Error"
          rootCause = (null)

    It's strange: it seems that the server takes a long time to respond, then (in the service browser) Flash makes a new call to the server and the old one gets a response. Some problem with sessions? I really have no idea.

    Read the article

  • Accessing network resources via VPN connection fails

    - by LikeHoo
    I already found some information on this problem here, but I still can't get it to work. I'm trying to access some network resources on my server via VPN over the internet. I'm using a Windows 7 Home PC here and a Windows Server 2008 R2 server with the Routing and Remote Access (RRAS) role installed. For VPN authentication I use a local user on the server with VPN access. This user also has the rights to access the network resources, but the client neither finds the server under Network nor is it able to connect the network drives... In similar topics here I found that using the same credentials for VPN authentication and for network resource access does not work, but using a different user for access didn't work either. All of the examples I found were in an Active Directory structure, and I don't have Active Directory here. Does anyone know how to solve this problem without having to use Active Directory? Thanks
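
    For illustration only (server IP, share and account names are placeholders): a quick test often suggested in this situation is to bypass name resolution entirely and map the share by the server's VPN-side IP address, supplying the credentials explicitly.

        REM Placeholder IP/share/user - use the VPN-side address of the server.
        net use Z: \\192.168.1.10\share /user:SERVERNAME\vpnuser *
        REM If this works while \\servername\share does not, the problem is name
        REM resolution over the VPN (WINS/DNS), not permissions.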

    Read the article

  • Supermicro MBD-X9DRL-IF-B and 8 x Hynix 4GB DDR3 ECC don't boot

    - by Avi Keinan
    I have a custom-built server with two Xeon E5-2609 CPUs and a Supermicro MBD-X9DRL-IF-B motherboard. Right now the server has 4 x Hynix 4GB 2Rx8 PC3L-10600R-9-10-B0 HMT351R7BFR8A, and it's working great. We bought another 4 sticks from the same memory series; when we installed all the RAM (the 4 sticks already installed plus the 4 new sticks) the server didn't boot. When we replaced the current memory with the new memory, the server did boot. Right now we have a problem, because we need to extend this server's RAM but the motherboard won't run with 8 x 4GB. Any ideas why? I have attached a picture of one of the current memory sticks and one of the new memory sticks. Thanks in advance.

    Read the article

  • Test tomcat for ssl renegotiation vulnerability

    - by Jim
    How can I test if my server is vulnerable to SSL renegotiation? I tried the following (using OpenSSL 0.9.8j-fips 07 Jan 2009):

        openssl s_client -connect 10.2.10.54:443

    I see it connects, it brings the certificate chain, it shows the server certificate, and last:

        SSL handshake has read 2275 bytes and written 465 bytes
        ---
        New, TLSv1/SSLv3, Cipher is DHE-RSA-AES256-SHA
        Server public key is 1024 bit
        Secure Renegotiation IS supported
        Compression: NONE
        Expansion: NONE
        SSL-Session:
            Protocol  : TLSv1
            Cipher    : DHE-RSA-AES256-SHA
            Session-ID: 50B4839724D2A1E7C515EB056FF4C0E57211B1D35253412053534C4A20202020
            Session-ID-ctx:
            Master-Key: 7BC673D771D05599272E120D66477D44A2AF4CC83490CB3FDDCF62CB3FE67ECD051D6A3E9F143AE7C1BA39D0BF3510D4
            Key-Arg   : None
            Start Time: 1354008417
            Timeout   : 300 (sec)
            Verify return code: 21 (unable to verify the first certificate)

    What does "Secure Renegotiation IS supported" mean? That SSL renegotiation is allowed? Then I did, but did not get an exception or get the certificate again:

        verify error:num=20:unable to get local issuer certificate
        verify return:1
        verify error:num=27:certificate not trusted
        verify return:1
        verify error:num=21:unable to verify the first certificate
        verify return:1
        HTTP/1.1 200 OK
        Server: Apache-Coyote/1.1
        Content-Type: text/html;charset=ISO-8859-1
        Content-Length: 174
        Date: Tue, 27 Nov 2012 09:13:14 GMT
        Connection: close

    So is the server vulnerable to SSL renegotiation or not?
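
    As an illustrative sketch (not part of the original post): "Secure Renegotiation IS supported" only says the server speaks the RFC 5746 secure-renegotiation extension; it does not by itself say whether client-initiated renegotiation is allowed. One common manual test is to request a renegotiation from inside s_client by typing a line consisting of a capital R:

        openssl s_client -connect 10.2.10.54:443
        # after the handshake completes, type:
        #   HEAD / HTTP/1.0        <- optional request line
        #   R                      <- asks s_client to renegotiate
        # If the server permits client-initiated renegotiation you will see a second
        # handshake (RENEGOTIATING ... lines); if not, the connection errors out or closes.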

    Read the article

  • Store images as logical files (in the DB, in binary format) or as physical files (on the server)

    - by Michel Ayres
    In the following image-storage scenarios:

    1. An image that changes only once in a while, if it changes at all (like an image for an article).
    2. The case above, but with over 10 images all linking to the same article.
    3. An image that changes very often (like a banner image for a website).
    4. The case above, but the image is huge.

    What is the best approach for each case? What is the "right/faster" way to do this task in each scenario?

    Read the article

  • problem with MySQL installation : template configuration file cannot be found

    - by user35389
    Trying to install MySQL onto a Windows XP machine. While going through the installation steps (in the "MySQL Server Instance Config Wizard"), I get to a point where the window reads:

        MySQL Server Instance Configuration
        Choose the configuration for the server instance.

        Ready to execute...
          o Prepare configuration
          o Write configuration file
          o Start service
          o Apply security settings   (this line is greyed out)

        Please press [Execute] to start the configuration.

        [ Back ]  [ Execute ]  [ Cancel ]

    So I press Execute, and then a red X appears next to the second step, "Write configuration file", and at the bottom, where it originally said "Please press [Execute] to start the configuration.", it now says:

        The template configuration file cannot be found at
        C:\Program Files\MySQL\MySQL Server 5.0\bin\my-template.cnf

    I'm unsure what it means, but I cancelled the config wizard and looked in the directory that had been created (C:\Program Files\MySQL\MySQL Server 5.0). There are some configuration settings files, and there are 4 folders: bin, data, Docs, share.

    Read the article

  • Debugging Connection Issues Between Two Linux Servers

    - by clickfault
    I have two CentOS 5 servers running iptables and apf. I am having issues connecting with ssh from server 1 to server 2. I can connect from server 1 to a third server and from that third server to both 1 and 2. In all cases I am using the IP address and not a host name. I have stopped iptables and apf on all servers and it doesn't seem to change anything. What is the best way to debug this process?
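
    For illustration, a generic debugging sequence (host names are placeholders) that narrows down whether the failure is network-level or SSH-level:

        # From server 1: verbose SSH shows exactly where the attempt stalls
        ssh -vvv user@server2

        # From server 1: is TCP port 22 on server 2 reachable at all?
        nc -zv server2 22

        # On server 2: is sshd listening, and do connection attempts arrive?
        netstat -tlnp | grep :22
        tcpdump -ni any tcp port 22
        tail -f /var/log/secure        # sshd log on CentOS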

    Read the article

  • Mac OS X 10.5/6, authenticate against NIS or LDAP when both servers have your username

    - by Wang
    We have an organization-wide LDAP server and a department-only NIS server. Many users have accounts with the same name on both servers. Is there any way to get Leopard/Snow Leopard machines to query one server, and then the other, and let the user log in if his username/password combination matches at least one record? I can get either NIS authentication or LDAP authentication. I can even enable both, with LDAP set as higher priority, and authenticate using the name and password listed on the LDAP server. However, in the last case, if I set the LDAP domain as higher-priority in Directory Utility's search path and then provide the username/password pair listed in the NIS record, then my login is rejected even though the NIS server would accept it. Is there any way to make the OS check the rest of the search path after it finds the username?

    Read the article

  • At what point should data be sent back to server?

    - by whamsicore
    A good example would be the Stack Exchange "rate" button. When a post is upvoted, the arrow changes color immediately. However, there is a grace period during which one can change one's vote (oops! voted by mistake?). Is the upvote action processed immediately, or does it only get processed after a set time period, or when the user leaves the page? How exactly is this rating processed? What is the standard for handling dynamic page edits (e.g. Stack Exchange ratings, Facebook posts)?

    Read the article

  • Outlook Anywhere remote https connection issue

    - by holian
    We have SBS 2003, and we use DynDNS. We forward port 443 on the DynDNS address to port 443 on the local server's IP: mycompany.dyndns.org:443 -- server.mycompany.local:443. On my Android phone I can check my mail with ActiveSync. From a remote machine I can check my mail in OWA (https://mycompany.dyndns.org/exchange). But I can't set up Outlook 2013 to connect remotely. I installed the server.mycompany.local certificate into the remote machine's trusted certificate store, but I got this error message:

        "There is a problem with the proxy server's security certificate. The name on the security
        certificate is invalid or does not match the name of the target site. Outlook is unable to
        connect to the proxy server. (Error Code 10)"

    Is it possible to connect to Exchange via DynDNS? What's the problem? Thank you

    Read the article

  • can not connect via SSH to a remote Postgresql database

    - by tartox
    I am trying to connect via the pgAdmin III GUI to a PostgreSQL database on a remote server, myHost, on port 5432.

    Server side: I have a Unix user myUser that matches a PostgreSQL role. pg_hba.conf is:

        local   all   all                  trust
        host    all   all   127.0.0.1/32   trust

    Client side: I open an SSH tunnel:

        ssh -L 3333:myHost:5432 myUser@myHost

    I connect to the server via pgAdmin III (or via psql -h localhost -p 3333) and get the following error message:

        server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.

    I have also tried to access a specific database with the superuser role, using psql -h localhost -p 3333 --dbname=myDB --user=mySuperUser, with no more success. What did I forget in the setup? Thank you
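
    A hedged sketch of one thing commonly checked here (not from the original question): with pg_hba.conf as shown, only connections arriving on 127.0.0.1 or the Unix socket are trusted, so the remote end of the tunnel should target localhost rather than the host's public name, and PostgreSQL must actually be listening on localhost.

        # Remote end of the tunnel points at 127.0.0.1 on the server,
        # so the connection matches the "host ... 127.0.0.1/32 trust" line.
        ssh -L 3333:localhost:5432 myUser@myHost

        # On the server, verify Postgres is listening on localhost:5432
        grep listen_addresses /etc/postgresql/*/main/postgresql.conf   # Debian-style path, adjust as needed
        psql -h 127.0.0.1 -p 5432 -U mySuperUser -d myDB               # local sanity check on the server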

    Read the article

  • Why do servers go down after a lot of traffic?

    - by mohabitar
    I'm working on an iOS app that makes extensive use of databases, where users will be able to sync their data to a server. However, I'm terrified that if too many users start using the app, the servers will no longer be able to handle the load. I'm not a server guy at all and am not too familiar with how this works, but my question is: why do servers get overloaded, and how can that be prevented? Does it have to do with who my server host is? Or is it about the efficiency of my code? If my host is reliable, such as Amazon AWS, am I still at risk of server problems? Bottom line: does it have to do with the way I implement my code, or does it have to do with who my host is?

    Read the article

  • Alias a Virtual Directory or Application as Root on IIS 7

    - by manyxcxi
    Our current IIS setup has two applications running on different paths, at (for example) http://server/sub-a and http://server/sub-b. I want to alias http://server/sub-a as the root so that just going to http://server/ will bring up the contents of sub-a. The problem I face is that when I initially set up a reverse proxy it negatively affected http://server/sub-b. I know this is a fairly common problem - how have you solved it? 99.9% of my experience is with Apache, so I feel a tad lost in the GUI world of IIS.
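
    For illustration, a hedged sketch of how this is sometimes done with the IIS URL Rewrite module (rule name and regex are examples; it assumes URL Rewrite is installed and that sub-a can serve its content under rewritten paths): rewrite root requests to /sub-a while leaving /sub-b untouched.

        <!-- web.config fragment at the site root (illustrative only) -->
        <system.webServer>
          <rewrite>
            <rules>
              <rule name="root-to-sub-a" stopProcessing="true">
                <!-- match everything except requests already under /sub-a or /sub-b -->
                <match url="^(?!sub-a|sub-b)(.*)$" />
                <action type="Rewrite" url="/sub-a/{R:1}" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>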

    Read the article

  • Is it possible to use WebMatrix with pure IIS?

    - by Mike Christensen
    I'd like to check out WebMatrix for publishing our site to IIS automatically (right now, I have to zip it up, copy it out, Remote Desktop into the server, unzip it, etc.). However, every example I can find on how to set up WebMatrix involves Azure, or uses a .publishsettings file that you'd get from your hosting provider. I'm curious whether I can publish to a normal, everyday IIS server running on Windows Server 2008. So far, all I've done to the IIS server is install Web Deploy, which I believe is the protocol that WebMatrix uses to publish. When I enter the Remote Site Settings screen, I select "Enter settings". I select Web Deploy as the protocol, type in my NT domain credentials (I'm an admin on that server), and put in the site URL for the Site Name and Destination URL. When I click Validate Connection, the validation fails (the error screenshot doesn't survive in this summary). Am I doing something wrong, or is this just not possible to do?
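
    A hedged sketch of one way to check the server side independently of WebMatrix (server name, site and credentials are placeholders): Web Deploy publishing to a plain IIS box usually goes through the Web Management Service endpoint on port 8172, so this assumes WMSvc is installed and allows remote connections; the endpoint can be exercised directly with msdeploy.

        REM Placeholder server name, site and credentials.
        msdeploy -verb:dump ^
          -source:appHostConfig="Default Web Site",computerName="https://myserver:8172/msdeploy.axd",userName="DOMAIN\me",password="secret",authType=Basic ^
          -allowUntrusted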

    Read the article
