Search Results

Search found 12824 results on 513 pages for 'glen little'.


  • SQL Azure Roadmap gets a little clearer – announcements from Tech Ed

    - by Eric Nelson
    On Monday at Tech Ed 2010 we announced new stuff (I like new stuff) that “showcases our continued commitment to deliver value, flexibility and control of data through data cloud services to our customers”. Ok, that does sound like marketing speak (and it is) but the good news is there is some meat behind it. We have some decent new features coming and we also have some clarity on when we will be able to get our hands on those features.
    SQL Azure Business Edition Extends to 50 GB – June 28th: The SQL Azure Business Edition database is extending from 10GB to 50GB. The new 50GB database size will be available worldwide starting June 28th.
    SQL Azure Business Edition Subscription Offer – August 1st: Starting August 1st, we will have a new discounted SQL Azure promotional offer (SQL Azure Development Accelerator Core). More information is available at http://www.microsoft.com/windowsazure/offers/.
    Public Preview of the Data Sync Service – CTP now: Data Sync Service for SQL Azure allows for more flexible control over data by deciding which data components should be distributed across multiple datacenters in different geographic locations, based on your internal policies and business needs. Available as a community technology preview after registering at http://www.sqlazurelabs.com.
    SQL Server Web Manager for SQL Azure – CTP this Summer: SQL Server Web Manager (SSWM) is a lightweight and easy-to-use database management tool for SQL Azure databases, to be offered this summer.
    Access 2010 Support for SQL Azure – available now: Yey – at last! Microsoft Office 2010 will natively support data connectivity to SQL Azure – we can now start developing those “departmental apps” with the confidence of a highly available SQL store provisioned in seconds. NB: I don’t believe we will support any previous versions of Access talking to SQL Azure.
    The Pre-announced Spatial Data Support to Become Live – live now*: At MIX in March we announced spatial was coming and apparently it is now here – although I need to check.
    Related Links: UK based? Sign up at http://ukazure.ning.com. SQL Azure Team Blog: http://blogs.msdn.com/b/sqlazure/

    Read the article

  • Metrics - A little knowledge can be a dangerous thing (or 'Why you're not clever enough to interpret metrics data')

    - by Jason Crease
    At RedGate Software, I work on a .NET obfuscator called SmartAssembly. Various features of it use a database to store various things (exception reports, name-mappings, etc.). The user is given the option of using either a SQL Server database (which requires them to have Microsoft SQL Server), or a Microsoft Access MDB file (which requires nothing). MDB is the default option, but power-users soon switch to using a SQL Server database because it offers better performance and data-sharing. In the fashionable spirit of optimization and metrics, an obvious product-management question is 'Which is the most popular? SQL Server or MDB?' We've collected data about this, using our 'Feature-Usage-Reporting' technology (available as part of SmartAssembly) and more recently our 'Application Metrics' technology:

    Parameter    Number of users   % of total users   Number of sessions   Number of usages
    SQL Server   28                19.0               8115                 8115
    MDB          114               77.6               1449                 1449

    (As a disclaimer, please note that SmartAssembly has far more than 132 users. This data is just a selection from one build.) So, it would appear that SQL Server is used by fewer users, but more often. Great. But here's why these numbers are useless to me:

    Only the original developers understand the data. What does a single 'usage' of 'MDB' mean? Does this happen once per run? Once per option change? On clicking the 'Obfuscate Now' button? When running the command-line version or just from the UI version? Each question could skew the data 10-fold either way, and the answers are only known by the developer that instrumented the application in the first place. In other words, only the original developer can interpret the data - product-managers cannot interpret the data unaided.

    Most of the data is from uninterested users. About half of the people who download and run a free trial from the internet quit it almost immediately. Only a small fraction use it sufficiently to make informed choices. Since the MDB option is the default one, we don't know how many of those 114 were people CHOOSING to use the MDB, or how many were JUST HAPPENING to use this MDB default for their 20-second trial. This is a problem we see across all our metrics: are people using X because it's the default, or are they using X because they want to use X? We need to segment the data further - asking what percentage of each percentage meet our criteria for an 'established user' or 'informed user'. You end up spending hours writing sophisticated and dubious SQL queries to segment the data further. Not fun.

    You can't find out why they used this feature. Metrics can answer the when and what, but not the why. Why did people use feature X? If you're anything like me, you often click on random buttons in unfamiliar applications just to explore the feature-set. If we listened uncritically to metrics at RedGate, we would eliminate the most-important and more-complex features which people actually buy the software for, leaving just big buttons on the main page and the About box.

    "Ah, that's interesting!" rather than "Ah, that's actionable!" People do love data. Did you know you eat 1201 chickens in a lifetime? But just 4 cows? Interesting, but useless. Often metrics give you a nice number: '5.8% of users have 3 or more monitors'. But unless the statistic is both SURPRISING and ACTIONABLE, it's useless. Most metrics are collected, reviewed with lots of cooing, and then forgotten. Unless a piece of data could change things, it's useless collecting it.
    People get obsessed with significance levels. The first thing that lots of people do with this data is run a t-test to get a significance level ("Hey! We know with 99.64% confidence that people prefer SQL Server to MDBs!"). Believe me: other causes of error/misinterpretation in your data are FAR more significant than your t-test could ever comprehend.

    Confirmation bias prevents objectivity. If the data appears to match our instinct, we feel satisfied and move on. If it doesn't, we suspect the data and dig deeper, plummeting down a rabbit-hole of segmentation and filtering until we give up and move on. Data is only useful if it can change our preconceptions. Do you trust this dodgy data more than your own understanding, knowledge and intelligence? I don't.

    There are always multiple plausible ways to interpret/action any data. Let's say we segment the above data, and get this:

    Post-trial users (i.e. those using a paid version after the 14-day free trial is over):

    Parameter    Number of users   % of total users   Number of sessions   Number of usages
    SQL Server   13                9.0                1115                 1115
    MDB          5                 4.2                449                  449

    Trial users:

    Parameter    Number of users   % of total users   Number of sessions   Number of usages
    SQL Server   15                10.0               7000                 7000
    MDB          114               77.6               1000                 1000

    How do you interpret this data? It's one of:
    1. Mostly SQL Server users buy our software. People who can't afford SQL Server tend to be unable to afford or unwilling to buy our software. Therefore, ditch MDB support.
    2. Our MDB support is so poor and buggy that our massive MDB user-base doesn't buy it. Therefore, spend loads of money improving it, and think about ditching SQL Server support.
    3. People 'graduate' naturally from MDB to SQL Server as they use the software more. Things are fine the way they are.
    4. We're marketing the tool wrong. The large number of MDB users represent uninformed downloaders. Tell marketing to aggressively target SQL Server users.
    To choose an interpretation you need to segment again. And again. And again, and again.

    Opting out is correlated with feature usage. Metrics collection tends to be opt-in. This skews the data even further. Between 5% and 30% of people choose to opt in to metrics (often called a 'customer improvement program' or something like that). Casual trial users who are uninterested in your product or company are less likely to opt in. This group is probably also likely to be MDB users. How much does this skew your data by? Who knows?

    It's not all doom and gloom. There are some things metrics can answer well:
    - Environment facts. How many people have 3 monitors? Have Windows 7? Have .NET 4 installed? Have Japanese Windows?
    - Minor optimizations. Is the text-box big enough for average user input?
    - Performance data. How long does our app take to start? How many databases does the average user have on their server?
    As you can see, questions about who-the-user-is rather than what-the-user-does are easier to answer and action.

    Conclusion: Use SmartAssembly. If not for the metrics (called 'Feature-Usage-Reporting'), then at least for the obfuscation/error-reporting. Data raises more questions than it answers. Questions about environment are the easiest to answer.
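
    To make the significance-level point concrete, the "99.64% confidence" ritual really is a one-liner, which is exactly why people reach for it. Here is a minimal Python sketch (standard library only; a one-proportion z-test stands in for the t-test, and the 28-vs-114 counts come from the first table above):

        from math import sqrt, erf

        sql_users, mdb_users = 28, 114            # user counts from the table above
        n = sql_users + mdb_users
        p_hat = mdb_users / n                     # observed MDB share
        z = (p_hat - 0.5) / sqrt(0.25 / n)        # test against a 50/50 split
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
        print(f"z = {z:.2f}, p = {p_value:.1e}")  # hugely "significant"...

        # ...and still useless: defaults, opt-in skew and drive-by trial users
        # bias these counts by far more than the sampling error this test measures.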

    Read the article

  • How to deal with a CEO making all technical decisions but with little technical knowledge?

    - by anonymous
    Hi, question posted anonymously for obvious reasons. I am working in a company with a dev group of 5-6 developers, and I am in a situation which I have a hard time dealing with. Every technical choice (language, framework, database, database schema, configuration scheme, etc...) is decided by the CEO, often without much rationale. It is very hard to modify those choices, and his main argument consists of "I don't like this", even though we propose several alternatives with detailed pros/cons. He will also decide to rewrite our core product from scratch without giving a reason why, and he never participates in dev meetings because he considers they make things slower... I am already looking at alternative job opportunities, but I was wondering if there is anything we (the developers) could do to improve the situation. Two examples which shocked me:
    - He will ask us to implement something akin to configuration management, but he rejects any existing framework because it is not written in the language he likes (even though the implementation language is irrelevant). He also expects us to be able to write those systems in a couple of days, "because it is very simple".
    - He keeps rewriting our core product from scratch on his own because the current codebase is too bad (a codebase whose design was his). We are at our third rewrite in one year, each rewrite worse than the previous one.
    Things I have tried so far include doing elaborate benchmarks on our product (he keeps complaining that our software is too slow, and justifies rewrites to make it faster), implementing solutions with existing products as working proof instead of just making pros/cons charts, etc... But still 90% of those efforts go in the trash (never with any rationale beyond "he does not like it", again), and I often get reprimanded because I don't do exactly as he wants (not realizing that what he wants is impossible).

    Read the article

  • Sites with overlapping code-bases. Developing multiple sites with few changes

    - by Web Developer
    I have to develop 3 different sites: video.com for hosting video, audio.com for hosting audio, and docs.com for hosting docs (domain names for example only). Almost 80% of the functionality is the same for all three, with the remaining 20% being completely different features... How do I handle this? How do sites like SO handle this? I am developing this in the Yii framework and was thinking of having these different features as modules, but in that case the menus/links in the HTML can become difficult to manage.
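
    Not Yii-specific, but the shape of the usual answer can be sketched in a few lines of Python (all names hypothetical): keep the shared 80% in one core that delegates the differing 20% to a small per-site module, so each site is just core + one plug-in.

        # Hypothetical sketch of the shared-core pattern; not Yii code.
        class CoreSite:
            """The shared ~80%: users, uploads, comments, search, ..."""
            def __init__(self, media_module):
                self.media = media_module          # the site-specific ~20%
            def show_item(self, item):
                return self.media.render(item)     # delegate only what differs

        class VideoModule:
            def render(self, item):
                return f"<video src='{item}'></video>"

        class AudioModule:
            def render(self, item):
                return f"<audio src='{item}'></audio>"

        video_site = CoreSite(VideoModule())       # video.com
        audio_site = CoreSite(AudioModule())       # audio.com
        print(video_site.show_item("clip.mp4"))    # <video src='clip.mp4'></video>

    In Yii terms this maps roughly to one shared application core with a per-site module (or theme) selected by configuration, so the menu/link differences live in the module rather than in the shared templates.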

    Read the article

  • How do I express subtle relationships in my data?

    - by Chuck H
    "A" is related to "B" and "C". How do I show that "B" and "C" might, by this context, be related as well? Example: Here are a few headlines about a recent Broadway play: 1 - David Mamet's Glengarry Glen Ross, Starring Al Pacino, Opens on Broadway 2 - Al Pacino in 'Glengarry Glen Ross': What did the critics think? 3 - Al Pacino earns lackluster reviews for Broadway turn 4 - Theater Review: Glengarry Glen Ross Is Selling Its Stars Hard 5 - Glengarry Glen Ross; Hey, Who Killed the Klieg Lights? Problem: Running a fuzzy-string match over these records will establish some relationships, but not others, even though a human reader could pick them out from context in much larger datasets. How do I find the relationship that suggests #3 is related to #4? Both of them can be easily connected to #1, but not to each other. Is there a (Googlable) name for this kind of data or structure? What kind of algorithm am I looking for? Goal: Given 1,000 headlines, a system that automatically suggests that these 5 items are all probably about the same thing. To be honest, it's been so long since I've programmed I'm at a loss how to properly articulate this problem. (I don't know what I don't know, if that makes sense). This is a personal project and I'm writing it in Python. Thanks in advance for any help, advice, and pointers!

    Read the article

  • The mouse pointer in my Ubuntu VM has turned into a little hand with a document, and clicks are ignored

    - by Daryl Spitzer
    The mouse pointer in my Ubuntu 8.04.3 LTS VM (running in VMware Fusion) has changed into a little hand holding a document. It doesn't show up in screen-shots. All mouse clicks (left or right) are ignored. But I can still type in the one Terminal window I have open. (And commands work fine.) I wonder if I'm in some kind of drag-and-drop mode. How do I get out of this? Update: Rebooting (from the command-line) worked. Ubuntu came up with the regular mouse-pointer.

    Read the article

  • Subversion/Hudson/Sonar/Artifactory - too much for my little server to handle! Help!

    - by Ricket
    I have a little dedicated server. It's at a cheap price and has a simple AMD 1800+ (1.5GHz), 256MB DDR RAM, ...need I continue? And I think I'm overloading it already. It's running CentOS 5.4, and I have installed: Webmin, Apache, MySQL, Subversion as an Apache module, Hudson (standalone), Sonar (standalone, runs with a standalone Jetty install), and Artifactory (standalone). That's pretty much it. But I'm having problems; pages are loading quite slowly. Network speed of the server is excellent, but I think I'm just running out of CPU and/or memory. A side-effect of the pages loading slowly is that sometimes Hudson times out, not being able to start Maven or contact Sonar in a certain amount of time. I think the next step to speed things up might be to move to an application server and use the WAR versions of Hudson, Sonar and Artifactory together on that server. I don't know that it will help, but it just seems to make sense, especially with Sonar running on its own Jetty install and the other two probably running their own mini application servers as well. Am I correct in thinking this? Is this the right course of action? Any other tips on how to make the server run faster? I can post more data if you'd like, just let me know what else would help you answer my question. Oh, also just to cure any suspicions, I don't have any sort of virus or spyware. I protect my SSH access with DenyHosts (which has blocked 300+ brute forcers in the past few months), and I have confirmed that the top four processes in terms of memory and CPU usage are Sonar, Artifactory, Hudson, and MySQL. Edit: I just thought of another thing that I'd like you to comment on as well: Apache currently has 8 spawned slave processes, taking 42MB of RAM apiece - on paper that's about 336MB, more than the machine's 256MB of physical RAM (though some of that is shared pages). This is not my web server. Is everything else able to function if I shut down Apache? Can you point me towards a tutorial or something on migrating Subversion from Apache into something that might work alongside the other three applications, maybe even making Subversion a WAR file or something?

    Read the article

  • Any Recommendations for a Web Based Large File Transfer System?

    - by Glen Richards
    I'm looking for a server software product that:
    - Allows my users to share large files with:
      - the general public
      - 1 or more people, securely (notification via email, optionally with a token that gives them x period of time to download)
    - Allows anyone in the general public to share files with my users. Perhaps by invitation.
    - Is user-friendly enough that my users can use it without having to bug me as the admin.
    - Can be installed on our own server (we don't want shared data sitting on anyone else's server).
    - Is a web-based solution. Using some kind of secure comms channel would be good too, e.g. ssh.
    Files to share could be over 1 GB. I found the question below; WebDAV does not sound user-friendly enough: http://serverfault.com/questions/86878/recommendations-for-a-secure-and-simple-dropbox-system
    I've done a lot of searching, but I can't get the search terms right. There are too many services that provide this, but I want something we can install on our own server. A last resort would be to roll my own. Any ideas appreciated. Glen
    EDIT (from an answerer): Sorry Tom and Jeff, but Glen specifically says that he's looking for a 'product', so given that I specialise in this field I thought that my expertise in this area may have been of use to him. I don't see how him writing services is going to be easy for him to maintain going forward (large IT admin overhead) or simple for his users and the general public to work with.

    Read the article

  • Open source C program with minimal requirements and at least 2MB binary size

    - by max
    Hi, I'm developing an operating system and I need a test program (a function of any kind) to test certain internal features. I cannot find any appropriate program to do this job. Probably one of you knows one. The program should be open source, written in C with very little user-library usage (only file IO, pthreads, stdio, stdlib preferred), and must have a binary size of at least 2MB. Thanks for any suggestions.

    Read the article

  • How can the little guys effectively learn and use Puppet?

    - by drumfire
    Six months ago, in our not-for-profit project, we decided to start migrating our system management to a Puppet-controlled environment because we are expecting our number of servers to grow substantially between now and a year from now. Since the decision was made, our IT guys have become a bit too annoyed a bit too often. Their biggest objections are:
    - "We're not programmers, we're sysadmins."
    - Modules are available online but many differ from one another; wheels are being reinvented too often. How do you decide which one fits the bill?
    - Code in our repo is not transparent enough; to find how something works they have to recurse through manifests and modules they might have even written themselves a while ago.
    - One new daemon requires writing a new module; conventions have to be similar to other modules; a difficult process. "Let's just run it and see how it works."
    - Tons of hardly-known 'extensions' in community modules: 'trocla', 'augeas', 'hiera'... how can our sysadmins keep track?
    I can see why a large organisation would dispatch their sysadmins to Puppet courses to become puppet masters. But how would smaller players get to learn Puppet to a professional level if they do not go to courses and basically learn it via their browser and editor?

    Read the article

  • How to rsync a large file, with as little CPU and bandwidth expense as possible?

    - by Johan Allgoth
    I have a 500 GB file that I plan on backing up remotely. The file changes often. I'll be rsyncing it from a desktop to a server. Both can run the rsync client or server. What is the proper command for this? The ones I've tried so far have been taking forever or simply acted strangely. Example and results:
    rsync -cv --partial --inplace --no-whole-file /desktop/file1 myserver.com::module/file1
    This seems to work, but only if I do it twice (?!). Also, slow. Does the above command do the checksumming on both computers, or only on the sending one? Is it correct otherwise?

    Read the article

  • A little guidance setting up FTP server authentication on Windows Server 2008 R2 Standard?

    - by Ropstah
    I have a (clean) server running Windows Server 2008 R2 Standard. I would just like to use it for serving a website and an FTP server through IIS. IIS is installed and serves my website properly. I have now added an FTP site, but when I try to log on using my user/pass I get the following error: 530 User cannot login. From this article (http://support.microsoft.com/kb/200475) I understand that these four causes can be pointed out:
    1. The "Allow only anonymous connections" security setting has been turned on in the Microsoft Management Console (MMC). Not the case.
    2. The username does not have the "Log on locally" permission in User Manager. The user is in the Users group; however, I'm not able to log on through RDP. I tried configuring this by following this article through GPMC, but this only works when I'm logged in as a domain user on a domain controller, which I'm not: I'm logged in as administrator.
    3. The username does not have the "Access this computer from the network" permission in User Manager. Not sure what this implies...?
    4. The Domain Name was not specified together with the username (in the form of DOMAIN\username). Tried adding the server name: server\username, not working...
    I am an absolute server noob and I'd just like to be able to connect through FTP... Any guidance is highly appreciated!
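
    For what it's worth, cause 4 in the list above is easy to probe from outside IIS with a few lines of Python (host and credentials hypothetical), trying the login both with and without the machine-qualified form:

        from ftplib import FTP

        for user in ("myuser", r"MYSERVER\myuser"):   # hypothetical names
            try:
                ftp = FTP("ftp.example.com")          # hypothetical host
                ftp.login(user, "secret")             # raises on a 530 reply
                print(user, "-> OK:", ftp.getwelcome())
                ftp.quit()
            except Exception as exc:
                print(user, "->", exc)

    If both forms fail the same way, the problem is more likely one of the permission causes (2 or 3) than the username format.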

    Read the article

  • Tricks for speeding up tar while tarring up a huge directory of little files?

    - by Trevor Harrison
    I'm trying to tar up a directory that has about 3M tiny files in it. Tar is chugging along, but I'm thinking it's going to take longer than I can wait. I'm wondering if telling tar not to store metadata (owner, group, perms) would reduce the churn of reading and re-reading this huge directory and maybe speed things up, and if there is a tar switch that does this. My initial perusal of the man page only gets me something like --no-xattrs, which looks like a start, but I was hoping someone had some specific knowledge.

    Read the article

  • A little confusion about AJAX and inserting into the DOM

    - by Gnee
    I have this working great, but I'd like a deeper understanding of what is actually going on behind the scenes. I am using jQuery's AJAX method to pull 5 blog posts (returning only the title and first photo). A PHP script grabs each blog post's title and first photo, sticks them in an array, and sends it back to my browser as JSON. Upon receiving the JSON object, jQuery grabs the first member of the JSON object and displays its title and photo. In a gallery I made, using buttons, the user can iterate through the 1-5 posts. So the actual AJAX call happens right away, and only once. I am basically using this kind of setup: $('#my_div').html(json_obj[i]), and each click does an i++. So is jQuery plucking these blog posts from my computer's memory, my web browser's cache, or some kind of cache in the JavaScript engine? One of the things it's returning is a pretty gnarly animated GIF. I just wonder if it is constantly running in the background (but not visible), stealing processing cycles... etc. Or is JavaScript just inserting it (say, a Flash movie) into the DOM, where beforehand it does nothing but take up a little memory (no processing)? Anyway, I'm just curious. If someone is a guru on this, I'd love to hear your take. Thanks!!

    Read the article
