Search Results


  • Essential Tools for the WPF Novice

    When Michael sets out to do something, there are no half-measures, so when he set out to learn WPF, we all stood to benefit from the thorough research he put into the task. He wondered which utility applications could assist with programming in WPF. Here are the fruits of all his work.

    Read the article

  • Web.NET event coming in October

    - by Chris Massey
    If you’re a web developer in Europe (or would like an excuse to travel to Europe), you should definitely take a look at the Web.NET event coming in October. It’s being organized by two Italian web maestros (Simone Chiaretta and Ugo Lattanzi) and the session list looks fantastic. The event site pretty much speaks for itself, but here’s a quick version:

    - It’s a free one-day event on October 20th, with a huge variety of great sessions by great speakers, all 100% focused on web development.
    - There’s a pizza-fuelled hackathon in the evening; thrills, spills and hot new skills.
    - It’s a great chance to network with the local (in relative terms) web development community.
    - It’s free (although all donations are very greatly appreciated).
    - It’s in Milan, darling.

    Here’s what you need to do:

    - Go and register on www.webnetconf.eu, and vote on which sessions you think look the most interesting. I know this will be a difficult process – it’s *very* hard to choose – but persevere!
    - Grab your place when the free tickets become available early next month (places are limited).
    - Come to Milan in October, learn some new skills, meet some great people, and maybe build something awesome if you feel like staying up late.

    I’ll be there, and hopefully I’ll see you on the day.

    Read the article

  • Developing Schema Compare for Oracle (Part 2): Dependencies

    - by Simon Cooper
    In developing Schema Compare for Oracle, one of the issues we came across was the size of the databases. As detailed in my last blog post, we had to allow schema pre-filtering due to the number of objects in a standard Oracle database. Unfortunately, this leads to some quite tricky situations regarding object dependencies. This post explains how we deal with these dependencies.

    1. Cross-schema dependencies

    Say, in the following database, you’re populating SchemaA, and synchronizing SchemaA.Table1:

        SOURCE:
        CREATE TABLE SchemaA.Table1 (
            Col1 NUMBER REFERENCES SchemaB.Table1(Col1));
        CREATE TABLE SchemaB.Table1 (
            Col1 NUMBER PRIMARY KEY);

        TARGET:
        CREATE TABLE SchemaA.Table1 (
            Col1 VARCHAR2(100) REFERENCES SchemaB.Table1(Col1));
        CREATE TABLE SchemaB.Table1 (
            Col1 VARCHAR2(100) PRIMARY KEY);

    We need to do a rebuild of SchemaA.Table1 to change Col1 from a VARCHAR2(100) to a NUMBER. This consists of:

    1. Creating a table with the new schema
    2. Inserting data from the old table into the new table, with appropriate conversion functions (in this case, TO_NUMBER)
    3. Dropping the old table
    4. Renaming the new table to the same name as the old table

    Unfortunately, in this situation, the rebuild will fail at step 1, as we’re trying to create a NUMBER column with a foreign key reference to a VARCHAR2(100) column. As we’re only populating SchemaA, the naive implementation of the object population pre-filtering (sticking a WHERE owner = 'SCHEMAA' on all the data dictionary queries) will generate an incorrect sync script. What we actually have to do is:

    1. Drop the foreign key constraint on SchemaA.Table1
    2. Rebuild SchemaB.Table1
    3. Rebuild SchemaA.Table1, adding the foreign key constraint to the new table

    This means that in order to generate a correct synchronization script for SchemaA.Table1 we have to know what SchemaB.Table1 is, and that it also needs to be rebuilt for the rebuild of SchemaA.Table1 to succeed. SchemaB isn’t a schema the user wants to synchronize, but we still have to load the table and column information for SchemaB.Table1 the same way as for any table in SchemaA.

    Fortunately, Oracle provides (mostly) complete dependency information in the dictionary views. Before we actually read the information on all the tables and columns in the database, we can get dependency information on all the objects that are either pointed at by objects in the schemas we’re populating, or point to objects in the schemas we’re populating (think about what would happen if SchemaB was being explicitly populated instead), with a suitable query on all_constraints (for foreign key relationships) and all_dependencies (for most other types of dependencies, e.g. a function using another function). The extra objects found can then be included in the actual object population, and the sync wizard then has enough information to figure out the right thing to do when we get to actually synchronize the objects. Unfortunately, this isn’t enough.

    2. Dependency chains

    The solution above will only get the immediate dependencies of objects in populated schemas. What if there’s a chain of dependencies?

        A.tbl1 -> B.tbl1 -> C.tbl1 -> D.tbl1

    If we’re only populating SchemaA, the implementation above will only include B.tbl1 in the dependent objects list, whereas we might need to know about C.tbl1 and D.tbl1 as well, in order to ensure a modification on A.tbl1 can succeed. What we actually need is a graph traversal of the dependency graph that all_dependencies represents.
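    To make that graph concrete: each row of all_dependencies is an edge from a dependent object to the object it references. As a rough sketch (simplified – the real engine also folds in the foreign key edges from all_constraints, as described above), the edges touching the schema we’re populating can be listed like this:

        -- Sketch: dependency edges into and out of SCHEMAA
        SELECT owner, name, type,
               referenced_owner, referenced_name, referenced_type
        FROM   all_dependencies
        WHERE  owner = 'SCHEMAA'
           OR  referenced_owner = 'SCHEMAA';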
    Fortunately, we don’t have to read all the database dependency information from the server and run the graph traversal on the client computer, as Oracle provides a method of doing this in SQL – CONNECT BY. So, we can put all the dependencies we want to include together in a big bag with UNION ALL, then run a SELECT ... CONNECT BY on it, starting with objects in the schema we’re populating. We should end up with all the objects that might be affected by modifications in the initial schema we’re populating.

    Good solution? Well, no. For one thing, it’s sloooooow. all_dependencies, on my test databases, has got over 110,000 rows in it, and the entire query, for which Oracle was creating a temporary table to hold the big bag of graph edges, was often taking upwards of two minutes. This is too long, and would only get worse for large databases. But it had some more fundamental problems than just performance.

    3. Comparison dependencies

    Consider the following schema:

        SOURCE:
        CREATE TABLE SchemaA.Table1 (
            Col1 NUMBER REFERENCES SchemaB.Table1(Col1));
        CREATE TABLE SchemaB.Table1 (
            Col1 NUMBER PRIMARY KEY);

        TARGET:
        CREATE TABLE SchemaA.Table1 (
            Col1 VARCHAR2(100));
        CREATE TABLE SchemaB.Table1 (
            Col1 VARCHAR2(100));

    What will happen if we use the dependency algorithm above on the source and target databases? Well, SchemaA.Table1 has a foreign key reference to SchemaB.Table1 on the source, so SchemaB.Table1 will be included in the source database population. On the target, SchemaA.Table1 has no such reference, so SchemaB.Table1 will not be included in the target database population. In the resulting comparison of the two object models, what you will end up with is:

        SOURCE              TARGET
        SchemaA.Table1  ->  SchemaA.Table1
        SchemaB.Table1  ->  (no object exists)

    When this comparison is synchronized, we will see that SchemaB.Table1 does not exist, so we will try the following sequence of actions:

    1. Create SchemaB.Table1
    2. Rebuild SchemaA.Table1, with the foreign key to SchemaB.Table1

    Oops. Because the dependencies are only followed within a single database, we’ve tried to create an object that already exists. To fix this, we can include any objects found as dependencies in the source or target database in the object population of both databases. SchemaB.Table1 will then be included in the target database population, and we won’t try to create objects that already exist.

    All good? Well, consider the following schema (again, only explicitly populating SchemaA, and synchronizing SchemaA.Table1):

        SOURCE:
        CREATE TABLE SchemaA.Table1 (
            Col1 NUMBER REFERENCES SchemaB.Table1(Col1));
        CREATE TABLE SchemaB.Table1 (
            Col1 NUMBER PRIMARY KEY);
        CREATE TABLE SchemaC.Table1 (
            Col1 NUMBER);

        TARGET:
        CREATE TABLE SchemaA.Table1 (
            Col1 VARCHAR2(100));
        CREATE TABLE SchemaB.Table1 (
            Col1 VARCHAR2(100) PRIMARY KEY);
        CREATE TABLE SchemaC.Table1 (
            Col1 VARCHAR2(100) REFERENCES SchemaB.Table1);

    Although we’re now including SchemaB.Table1 on both sides of the comparison, there’s a third table (SchemaC.Table1) that we don’t know about, and that will cause the rebuild of SchemaB.Table1 to fail if we try to synchronize SchemaA.Table1. That’s because we’re only running the dependency query on the schemas we’re explicitly populating; to solve this issue, we would have to run the dependency query again, but this time starting the graph traversal from the objects found in the other database.
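    For reference, the dependency query being re-run on each side is essentially the CONNECT BY traversal described above – roughly this shape (a sketch; the real query also tracks object types, and it is this query that was taking upwards of two minutes):

        -- Sketch: put FK edges and general dependency edges in one "bag",
        -- then walk outwards from the schema being populated.
        WITH edges AS (
            SELECT owner, name, referenced_owner, referenced_name
            FROM   all_dependencies
            UNION ALL
            SELECT c.owner, c.table_name, r.owner, r.table_name
            FROM   all_constraints c
            JOIN   all_constraints r
              ON   r.owner = c.r_owner
             AND   r.constraint_name = c.r_constraint_name
            WHERE  c.constraint_type = 'R'  -- foreign keys
        )
        SELECT DISTINCT referenced_owner, referenced_name
        FROM   edges
        START WITH owner = 'SCHEMAA'
        CONNECT BY NOCYCLE PRIOR referenced_owner = owner
                       AND PRIOR referenced_name  = name;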
    Furthermore, this dependency chain could be arbitrarily extended. This leads us to the following algorithm for finding all the dependencies of a comparison:

    1. Find the initial dependencies of the schemas the user has selected to compare, on both the source and the target
    2. Include these objects in both the source and target object populations
    3. Run the dependency query on the source, starting with the objects found as dependents on the target, and vice versa
    4. Repeat steps 2 & 3 until no more objects are found

    For the schema above, this will result in the following sequence of actions:

    1. Find initial dependencies: SchemaA.Table1 -> SchemaB.Table1 found on the source; no objects found on the target
    2. Include objects in both source and target: SchemaB.Table1 included in source and target
    3. Run the dependency query, starting with the found objects: no objects to start with on the source; SchemaB.Table1 -> SchemaC.Table1 found on the target
    4. Include objects in both source and target: SchemaC.Table1 included in source and target
    5. Run the dependency query on the found objects: no objects found on the source; no objects to start with on the target
    6. Stop

    This will ensure that we include all the necessary objects to make any synchronization work. However, there is still the issue of query performance; the CONNECT BY on the entire database dependency graph is still too slow. After much sitting down and drawing complicated diagrams, we decided to move the graph traversal algorithm from the server onto the client (it turned out to run much faster on the client than on the server). To ensure we don’t read the entire dependency graph onto the client, we also pull the graph across in bits: we start off with the dependency edges involving the schemas selected for explicit population, and whenever the graph traversal comes across a dependency reference to a schema we don’t yet know about, a thunk is hit that pulls in the dependency information for that schema from the database. We continue passing more dependent objects back and forth between the source and target until no more dependency references are found. This gives us the list of all the extra objects to populate in the source and target, and object population can then proceed.

    4. Object blacklists and fast dependencies

    When we tested this solution, we were puzzled that in some of our databases most of the system schemas (WMSYS, ORDSYS, EXFSYS, XDB, etc.) were being pulled in, and this was increasing the database registration and comparison time quite significantly. After debugging, we discovered that the culprits were database tables that used one of the Oracle-supplied PL/SQL types (e.g. the SDO_GEOMETRY spatial type). These were creating a dependency chain from the database tables we were populating to the system schemas, and hence pulling in most of the system objects in those schemas. To solve this, we introduced blacklists of objects we wouldn’t follow any dependency chain through. As well as the Oracle-supplied PL/SQL types (MDSYS.SDO_GEOMETRY and ORDSYS.SI_COLOR, among others), we also decided to blacklist the entire PUBLIC and SYS schemas, as any reference to those would likely lead to a blow-up in the dependency graph that would massively increase the database registration time, and could result in the client running out of memory.

    Even with these improvements, each dependency query was taking upwards of a minute. From the Oracle execution plans, we discovered that some of the clauses retrieving dependency information we required were causing queries against system tables with no indexes on them!
    To cut a long story short, running the following query:

        SELECT * FROM all_tab_cols WHERE data_type_owner = 'XDB';

    results in a full table scan of the SYS.COL$ system table! This single clause was responsible for over half the execution time of the dependency query. Hence, the ‘Ignore slow dependencies’ option was born: not running this and a couple of similar clauses drastically speeds up the dependency query, at the expense of producing incorrect sync scripts in rare edge cases.

    Needless to say, along with the sync script action ordering, the dependency code in the database registration is one of the most complicated and most rewritten parts of the Schema Compare for Oracle engine. The beta of Schema Compare for Oracle is out now; if you find a bug in it, please do tell us so we can get it fixed!

    Read the article

  • A weekend with the Samsung Galaxy Tab

    - by Richard Mitchell
    This weekend I took home one of the Samsung Galaxy Tabs we have lying around the office, to see how I got on with it, as I've been thinking of buying one. Initial impressions: the look and feel of the Tab is quite nice. It's a lot smaller than an iPad, but that is no bad thing, as I imagine they are targeted at different markets. The Tab fits into my inside coat pocket nicely and doesn't feel like it's weighing me down too much. Connecting up the Tab to the network at work was fine, typing in...(read more)

    Read the article

  • Implementing Cluster Continuous Replication, Part 3

    Cluster continuous replication (CCR) uses log shipping and failover to provide a more resilient email system with faster recovery. Once it is installed, a clustered server requires different management routines, performed either with a GUI tool, the Failover Cluster Management console, or with the Exchange Management Shell; PowerShell can also be used for some tasks. Confused? Not for long, since Brien Posey is once more here to help.

    Read the article

  • ASP.NET 4.0 Features

    ASP.NET 4.0 was released with Visual Studio 2010, presenting web developers with a bewildering range of new features. Ludmal De Silva describes what he considers to be the most important new features in ASP.NET 4.

    Read the article

  • From 0 to MVP in 4 weeks

    - by fatherjack
    You may know from my previous posts that I have just started a local SQL Server User Group. Three weeks ago there was no such group within 100 miles, and then we had a meeting. Now, in eight days' time, there is going to be a second meeting, and I am very excited to be able to say that we will be having an MVP speaker for one of the sessions. Aaron Nelson (Blog|Twitter) made an incredibly generous offer of speaking for us on using PowerShell with SQL Server, and I didn't hang around before I said "Yes...(read more)

    Read the article

  • error with gtkmm 3 in ubuntu 12.04

    - by Grohiik
    I installed libgtkmm-3.0-dev on Ubuntu 12.04 and am trying to learn to write C++ programs with gtkmm 3. I went to "http://developer.gnome.org/gtkmm-tutorial/unstable/sec-basics-simple-example.html.en" and tried to compile the simple example program:

        #include <gtkmm.h>

        int main(int argc, char *argv[])
        {
            Glib::RefPtr<Gtk::Application> app = Gtk::Application::create(argc, argv, "org.gtkmm.examples.base");
            Gtk::ApplicationWindow window;
            return app->run(window);
        }

    My file is named "basic.cc", and I opened a terminal and typed the following command to compile it:

        g++ basic.cc -o basic `pkg-config gtkmm-3.0 --cflags --libs`

    The compile completed without any error, but when I try to run the program from the terminal I get the following error:

        ~$ ./simple
        ./simple: symbol lookup error: ./simple: undefined symbol: _ZN3Gtk11Application6createERiRPPcRKN4Glib7ustringEN3Gio16ApplicationFlagsE
        ~$

    How can I solve this problem? I can compile any gtkmm 2.4 code with both of these commands:

        g++ basic.cc -o basic `pkg-config gtkmm-3.0 --cflags --libs`
        g++ basic.cc -o basic `pkg-config gtkmm-2.4 --cflags --libs`

    Thanks

    Read the article

  • The Art of Dealing with People

    Technical people generally don't easily adapt to being good salespeople. When a technical person takes on a customer-facing role as a support engineer, there are a whole lot of new skills required. Dr Petrova relates how the experience of a change in job gave her a new respect for the skills of sales and marketing.

    Read the article

  • Antenna Aligner Part 8: It's Alive!!!

    - by Chris George
    Finally the day has come: Antenna Aligner v1.0.1 has been uploaded to the App Store and... "Waiting for review"... fast forward 7 days and much checking of emails later: WOO HOO! Now what? So I set my facebook page to go live, https://www.facebook.com/AntennaAligner, and started by sending messages to my mates that have iPhones! Amazingly a few of them bought it! Similarly, some of my colleagues were also kind enough to support me and downloaded it too! Unfortunately, the only way I knew they had bought it was from them telling me, as the iTunes Connect data is only updated daily at about midday GMT. This is a shame; surely they could provide more granular updates throughout the day? Although I suppose once an app has been out in the wild for a while, daily updates are enough. It would, however, be nice to get a ping when you make your first sale! I would have expected more feedback on my facebook page as well; maybe I'm just expecting too much, or perhaps I've configured the page wrong. The new facebook timeline layout is just confusing, and I'm not sure it's all public. I'll check that! So please take a look and see what you think! I would love to get some more feedback/reviews/suggestions... Oh, and watch out for the Android version coming soon!

    Read the article

  • Good DBAs Do Baselines

    - by Louis Davidson
    One morning, you wake up and feel funny. You can’t quite put your finger on it, but something isn’t quite right. What now? Unless you happen to be a hypochondriac, you likely drag yourself out of bed, get on with the day and gather more “evidence”. You check your symptoms over the next few days; do you feel the same, better, worse? If better, then great, it was some temporal issue, perhaps caused by an allergic reaction to some suspiciously spicy chicken. If the same or worse, then you go to the doctor for some health advice, but armed with some data to share, and having ruled out certain possible causes that are fixed with a bit of rest and perhaps an antacid. Whether you realize it or not, in comparing how you feel one day to the next, you have taken baseline measurements.

    In much the same way, a DBA uses baselines to gauge the health of their database servers. Of course, while SQL Server is very willing to share data regarding its health and activities, it has almost no idea of the difference between good and bad. Over time, experienced DBAs develop “mental” baselines with which they can gauge the health of their servers almost as easily as their own body. They accumulate knowledge of the daily, natural state of each part of their database system, and so know instinctively when one of their databases “feels funny”. Equally, they know when an “issue” is just a passing tremor. They see their SQL Server with all of its four CPU cores running close to 100% and don’t panic anymore. Why? It’s 5PM, and every day the same thing occurs when the end-of-day reports, which are very CPU intensive, are running. Equally, they know when they need to respond in earnest when it is the first time they have heard about an issue, even if it has been happening every day.

    Nevertheless, no DBA can retain mental baselines for every characteristic of their systems, so we need to collect physical baselines too. In my experience, surprisingly few DBAs do this very well. Part of the problem is that SQL Server provides a lot of instrumentation. If you look, you will find an almost overwhelming amount of data regarding user activity on your SQL Server instances, and use and abuse of the available CPU, I/O and memory. It seems like a huge task even to work out which data you need to collect, let alone start collecting it on a regular basis, managing its storage over time, and performing detailed comparative analysis. However, without baselines, it is very difficult to pinpoint what ails a server just by looking at a single snapshot of the data, or to spot retrospectively what caused the problem by examining aggregated data for the server, collected over many months.

    It isn’t as hard as you think to get started. You’ve probably already established some troubleshooting queries of the type SELECT Value FROM SomeSystemTableOrView. Capturing a set of baseline values for such a query can be as easy as changing it as follows:

        INSERT INTO BaseLine.SomeSystemTable (value, captureTime)
        SELECT Value, SYSDATETIME()
        FROM SomeSystemTableOrView;

    Of course, there are monitoring tools that will collect and manage this baseline data for you, automatically, and allow you to perform comparison of metrics over different periods. However, to get yourself started, and to prove to yourself (or perhaps the person who writes the checks for tools) the value of baselines, stick something similar to the above query into an agent job, running every hour or so, and you are on your way with no excuses!
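    As a concrete starting point, here’s a minimal sketch of that pattern using wait statistics (the BaseLine schema and the choice of DMV are illustrative, not a prescription):

        -- One-off setup: somewhere to keep the history (names illustrative)
        CREATE SCHEMA BaseLine;
        GO
        CREATE TABLE BaseLine.WaitStats (
            wait_type           nvarchar(60) NOT NULL,
            waiting_tasks_count bigint       NOT NULL,
            wait_time_ms        bigint       NOT NULL,
            captureTime         datetime2    NOT NULL DEFAULT SYSDATETIME()
        );
        GO
        -- The hourly agent job step: snapshot the current values
        INSERT INTO BaseLine.WaitStats
            (wait_type, waiting_tasks_count, wait_time_ms, captureTime)
        SELECT wait_type, waiting_tasks_count, wait_time_ms, SYSDATETIME()
        FROM sys.dm_os_wait_stats;

    Because the counters in that DMV accumulate from server startup, comparing any two captureTime snapshots shows you how much each wait type grew over the interval between them.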
Then, the next time you investigate a slow server, and see x open transactions, y users logged in, and z rows added per hour in the Orders table, compare to your baselines and see immediately what, if anything, has changed!

    Read the article

  • Protect and Improve your Software with SmartAssembly 5

    - by Bart Read
    SmartAssembly 5 has been released. You can download a 14-day fully-functional free trial from: http://www.red-gate.com/products/smartassembly/index.htm This is the first major release since Red Gate acquired the tool last year, and our focus has mainly been on improving the quality of an already great tool. We've also simplified the licensing model so that there are now only three editions: Standard - bullet-proof protection at a bargain price, Pro - includes the SDK & custom web server...(read more)

    Read the article

  • I'm blogging again, and about time too

    - by fatherjack
    No, seriously, this one is about time. I recently had an issue in a work database where a query was giving random results; sometimes the query would return a row and other times it wouldn't. There was quite a bit of work distilling the query down to find the reason for this, and I'll try to explain what was happening by using some sample data in a table, with a rather contrived use case. Let's assume we have a table that is designed to have a start and end date for something, maybe...(read more)

    Read the article

  • Web.NET is Closing Fast

    - by Chris Massey
    The voting for sessions has now closed, and sadly only half of the potential sessions could make it through. On the plus side, the sessions that floated to the top look great and, with the votes in, Simone and Ugo have moved right along and created a draft agenda to whet our appetites. Take a look, and let them know what you think. I’d also strongly recommend that you get ready to grab your tickets when they become available next week (specifically, September 18th), as places are going to be snapped up fast. In case you need a reminder as to why Web.NET is worth your time:

    - Complete focus on web development
    - Awesome sessions
    - All-night hackathon
    - Free (although I urge you to make a donation to help Simone and Ugo create the best possible event)

    Put October 20th in your calendar, and start packing. I’ve already booked my flights, and am perusing the list of hotels while I eat my lunch.

    Bonus Material

    There will be a full day of RavenDB training on Monday the 22nd of October, run by Ayende himself, and attending Web.NET will get you a 30% discount on the cost of the session.

    Read the article

  • PHP File Serving Script: Unreliable Downloads?

    - by JGB146
    This post started as a question on ServerFault (http://serverfault.com/questions/131156/user-receiving-partial-downloads) but I determined that our php script was the culprit. So I'm issuing an updated question here about what I believe is the actual issue.

    I am using a php script to verify permissions and then serve up a file for users of my website to download. Most of the time, this works, but recently one user has been seeing problems with larger downloads. He is only getting ~80% of downloads for files that are 100MB in size. Also, all downloads from this script fail to report a filesize. Further, tests revealed that the same user COULD reliably download each of the failed files if given a direct link (at which point the filesize is reported).

    Here's the relevant snippet of code that we are using to serve the file:

        header("Content-type:$contenttype");
        $len = filesize($filename);
        header("Content-Length: $len");
        header("Content-Disposition: attachment; filename=".$title.".".$ext);
        readfile($filename);

    Note that $contenttype, $filename, $title, and $ext are all set correctly before we get here. These have been triple-checked. None of them are the problem. Also, $len does provide the correct filesize.

    While researching this issue, I came across this post: http://stackoverflow.com/questions/1334471/content-length-header-always-zero It seems that I am encountering the same issue. When I use the script, I get chunked encoding on the file and no size is set for content-length. I'm hypothesizing that something is going wrong on the large downloads, leading him to get a zero-length chunk before the end of the file.

    Here's what the headers look like for a direct request (http://www.grinderschool.com/videos/zfff5061b65ae00e8b21/KillsAids021.wmv):

        GET /videos/zfff5061b65ae00e8b21/KillsAids021.wmv HTTP/1.1
        Host: www.grinderschool.com
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 (.NET CLR 3.5.30729)
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive
        Referer: http://www.grinderschool.com/phpBB3/viewtopic.php?f=14&p=29468
        Cookie: style_cookie=printonly; phpbb3_7c544_u=2; phpbb3_7c544_k=44b832912e5f887d; phpbb3_7c544_sid=e8852df42e08cc1b2250300c2897f78f; __utma=174624884.2719561324781918700.1251850714.1270986325.1270989003.575; __utmz=174624884.1264524375.411.12.utmcsr=google|utmccn=(organic)|utmcmd=organic|utmctr=low%20stakes%20poker%20videos; phpbb3_cmviy_k=; phpbb3_cmviy_u=2; phpbb3_cmviy_sid=d8df5c0943863004ca40ef9c392d371d; __utmb=174624884.4.10.1270989003; __utmc=174624884
        Pragma: no-cache
        Cache-Control: no-cache

        HTTP/1.1 200 OK
        Date: Sun, 11 Apr 2010 12:57:41 GMT
        Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 mod_auth_passthrough/2.1 FrontPage/5.0.2.2635
        Last-Modified: Sun, 04 Apr 2010 12:51:06 GMT
        Etag: "eb42d6-7d9b843-48368aa6dc280"
        Accept-Ranges: bytes
        Content-Length: 131708995
        Keep-Alive: timeout=10, max=30
        Connection: Keep-Alive
        Content-Type: video/x-ms-wmv

    And here's what they look like for the request answered by my script (http://www.grinderschool.com/download_video_test.php?t=KillsAids021&format=wmv):

        GET /download_video_test.php?t=KillsAids021&format=wmv HTTP/1.1
        Host: www.grinderschool.com
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 (.NET CLR 3.5.30729)
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive
        Cookie: style_cookie=printonly; phpbb3_7c544_u=2; phpbb3_7c544_k=44b832912e5f887d; phpbb3_7c544_sid=e8852df42e08cc1b2250300c2897f78f; __utma=174624884.2719561324781918700.1251850714.1270986325.1270989003.575; __utmz=174624884.1264524375.411.12.utmcsr=google|utmccn=(organic)|utmcmd=organic|utmctr=low%20stakes%20poker%20videos; phpbb3_cmviy_k=; phpbb3_cmviy_u=2; phpbb3_cmviy_sid=d8df5c0943863004ca40ef9c392d371d; __utmb=174624884.4.10.1270989003; __utmc=174624884

        HTTP/1.1 200 OK
        Date: Sun, 11 Apr 2010 12:58:02 GMT
        Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 mod_auth_passthrough/2.1 FrontPage/5.0.2.2635
        X-Powered-By: PHP/5.2.11
        Content-Disposition: attachment; filename=KillsAids021.wmv
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Keep-Alive: timeout=10, max=30
        Connection: Keep-Alive
        Transfer-Encoding: chunked
        Content-Type: video/x-ms-wmv

    So the question is... what can I do to make downloads from the script work properly? Again, for 99% of users it works as-is (though I find it annoying now that no filesize is reported, and thus that no time estimate can be computed for the download).

    Read the article

  • SmartAssembly 5: it lives! Early access builds now available

    - by Bart Read
    I'm pleased to announce that, late last week, we put out the first early access build for SmartAssembly 5, Red Gate's fantastic code protection and error reporting tool, which we acquired last September. You can download it via: http://www.red-gate.com/messageboard/viewforum.php?f=116

    It's obviously pretty early days, so please do not try to use this to protect a production application, but we've already done a lot of work in some key areas:

    - We're simplifying and streamlining the licensing model (you won't see this yet, but a lot of the work on this has already been done).
    - We've improved the usability of the product, with a better menu, reordering of project settings, and better defaults.
    - We've also fixed a load of bugs, which I'll let Alex blog about in more detail.
    - On a slightly more trivial level, the curly braces are also no more.

    Over the coming weeks, we'll be adding more improvements, and starting usability tests. If you're interested in getting involved in the latter, please drop an email to [email protected].

    Read the article

  • Cheating on Technical Debt

    - by Tony Davis
    One bad practice guaranteed to cause dismay amongst your colleagues is passing on technical debt without full disclosure. There can only be two reasons for this: either the developer or DBA didn’t know the difference between good and bad practices, or they concealed the debt. Neither reflects well on their professional competence.

    Technical debt, or code debt, is a convenient term to cover all the compromises between the ideal solution and the actual solution, reflecting the reality of the pressures of commercial coding. The one time you’re guaranteed to hear one developer, or DBA, pass judgment on another is when he or she inherits their project, and is surprised by the amount of technical debt left lying around in the form of inelegant architecture, incomplete tests, confusing interface design, no documentation, and so on.

    It is often expedient for a Project Manager to ignore the build-up of technical debt: the cut corners, not-quite-finished features and rushed designs that mean progress is satisfyingly rapid in the short term. It’s far less satisfying for the poor person who inherits the code. Nothing sends a colder chill down the spine than the dawning realization that you’ve inherited a system crippled with performance and functional issues that will take months of pain to fix before you can even begin to make progress on any of the planned new features. It’s often hard to justify this ‘debt paying’ time to the project owners and managers. It just looks as if you are making no progress, in marked contrast to your predecessor.

    There can be many good reasons for allowing technical debt to build up, at least in the short term. Often, rapid prototyping is essential, there is a temporary shortfall in test resources, or the domain knowledge is incomplete. It may be necessary to hit a specific deadline with a prototype, or proof-of-concept, to explore a possible market opportunity, with planned iterations and refactoring to follow later. However, it is a crime for a developer to build up technical debt without making this clear to the project participants. He or she needs to record it explicitly. A design compromise made in order to hit a deadline, be it an outright hack, or a decision made without time for rigorous investigation and testing, needs to be documented with the same rigor with which one tracks a bug.

    What’s the best way to do this? Ideally, we’d have some kind of objective assessment of the level of technical debt in a software project, although that smacks of Science Fiction even as I write it. I’d be interested to hear of any methods you’ve used, but I’m sure most teams have to rely simply on the integrity of their colleagues and the clear perceptions of the project manager…

    Cheers, Tony.

    Read the article

  • Migrating from OCS 2007 R2 to Lync: Part 2

    In the story so far, Johan has described how to check that the migration from OCS to Lync is supported, and how to determine the requirements for the new installation. This was followed by a walk-through of the preparation of Active Directory and the installation of the first Lync Front End Server, with a Mediation Server co-located. Now Johan tackles merging the OCS configuration and connecting to the outside world, followed by testing, performing and then validating the migration.

    Read the article

  • Getting baseline and performance stats - the easy way.

    - by fatherjack
    OK, pretty much any DBA worth their salt has read Brent Ozar's (Blog | Twitter) blog about getting a baseline of your server's performance counters, and then getting the same counters at regular intervals afterwards, so that you can track performance trends and evidence how you are making your servers faster, or cope with extra load, without costing your boss any money for hardware upgrades. No? Well, go read it now. I can wait a while, as there is a great video there too: http://www.brentozar.com/archive/2006/12/dba-101-using-perfmon-for-sql-performance-tuning/ ...(read more)

    Read the article

  • Antenna Aligner part 2: Finding the right direction

    - by Chris George
    Last time I managed to get "my first app(tm)" built, published and running on my iPhone. This was really cool: a piece of my code running on my very own device. Ok, so I'm easily pleased! The next challenge was actually trying to determine what it was I wanted this app to do, and how to do it. Reverting back to good old paper and pen, I started sketching out designs for the app. I knew I wanted it to get a list of transmitters, then clicking on a transmitter would display a compass type view, with an arrow pointing the right way. I figured there would not be much point in continuing until I knew I could do the graphical part of the project, i.e. the rotating compass, so armed with that reasoning (plus the fact I just wanted to get on and code!), I once again dived into Visual Studio. Using my friend (google) I found some example code for getting the compass data from the phone using the PhoneGap framework:

        // onSuccess: Get the current heading
        function onSuccess(heading) {
            alert('Heading: ' + heading);
        }

        navigator.compass.getCurrentHeading(onSuccess, onError);

    Using the ripple mobile emulator this showed that it was successfully getting the compass heading. But it didn't work when uploaded to my phone. It turns out that the examples I had been looking at were for PhoneGap 1.0, and Nomad uses PhoneGap 1.4.1. In 1.4.1, getCurrentHeading provides a compass object to onSuccess, not just a numeric value, so the code now looks like:

        // onSuccess: Get the current magnetic heading
        function onSuccess(heading) {
            alert('Heading: ' + heading.magneticHeading);
        };

        navigator.compass.getCurrentHeading(onSuccess, onError);

    So the lesson learnt from this... read the documentation for the version you are actually using! This does, however, lead to compatibility problems with ripple, as it only supports 1.0, which is a real pain. I hope that the ripple system is updated sometime soon.

    Read the article

  • Got that Friday feeling?

    - by Rebecca Amos
    Saturday is just around the corner, and we’re all starting to wrap up for the weekend. If you’re the DBA, that ‘Friday feeling’ might be as much about checking and preparing your SQL Servers for the next two days as about looking forward to spending time with friends and family. Whether you’re double-checking your disaster recovery strategy, or know that it’s your turn to be on-call this weekend, it’s likely you’re preparing for the worst, just in case. The fact that you’re making these checks, and caring about both your servers and your users, means that you might be an exceptional DBA. You’re already putting in that extra effort to make other people’s lives easier. So why not take some time for your professional development and enter the Exceptional DBA Awards? If you’re looking for some inspiration for your entry, download our Judges’ Top Tips poster for advice on what the judges are looking for from this year’s entrants. Not only will you be boosting your professional development, but you could win full conference registration for the 2011 PASS Summit in Seattle (where the awards ceremony will take place), four nights’ hotel accommodation, and a copy of Red Gate’s SQL DBA Bundle. So take some time out for yourself this weekend and get started on your entry: www.exceptionaldba.com

    Read the article

  • (resolved) empty response body in ajax (or 206 Partial Content)

    - by Nikita Rybak
    Hi guys, I'm feeling completely stupid because I've spent two hours solving a task which should be very simple and which I've solved many times before. But now I'm not even sure in which direction to dig. I fail to fetch static content using ajax from local servers (Apache and Mongrel). I get responses 200 and 206 (depending on the server), empty response text (although the Content-Length header is always correct), and firebug shows the request in red. The Javascript is very generic; I'm getting the same results even here: http://www.w3schools.com/ajax/tryit.asp?filename=tryajax_first (just change the document location to 'http://localhost:3000/whatever'). So, it's probably not the cause. Well, now I'm out of ideas. I can also post the http headers, if it'll help. Thanks!

        Response Headers
        Connection: close
        Date: Sat, 01 May 2010 21:05:23 GMT
        Last-Modified: Sun, 18 Apr 2010 19:33:26 GMT
        Content-Type: text/html
        Content-Length: 7466

        Request Headers
        Host: localhost:3000
        User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive
        Referer: http://www.w3schools.com/ajax/tryit_view.asp
        Origin: http://www.w3schools.com

        Response Headers
        Date: Sat, 01 May 2010 21:54:59 GMT
        Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 mod_jk/1.2.28
        Etag: "3d5cbdb-fb4-4819c460d4a40"
        Accept-Ranges: bytes
        Content-Length: 4020
        Cache-Control: max-age=7200, public, proxy-revalidate
        Expires: Sat, 01 May 2010 23:54:59 GMT
        Content-Range: bytes 0-4019/4020
        Keep-Alive: timeout=5, max=100
        Connection: Keep-Alive
        Content-Type: application/javascript

        Request Headers
        Host: localhost
        User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive
        Origin: null

    UPDATED: I've found the problem; it was about cross-domain requests. I knew that there are restrictions, but thought they were relaxed for the local filesystem and local servers. (And expected a more descriptive error message, anyway.) Thanks everybody!

    Read the article

  • Glimpse: Open Source Web Development

    - by Elizabeth Ayer
    We’re delighted to announce that Red Gate will be backing Glimpse! For those of you who aren’t familiar with the project, Glimpse is an open source tool which does for the server what Firebug does for the client. It’s been in beta for the last year, and we’re very excited to give Glimpse the support and dedicated effort needed to take it to a v1 and beyond.

    Glimpse’s founders (Nik Molnar and Anthony van der Hoorn) have joined Red Gate, and they’re just as excited as we are about the opportunities that active development of Glimpse will bring. They will continue to write code, support the community and drive the project forward (as they’ve done since its inception). With full-time attention on growing Glimpse and its community, users and developers can expect the project to accelerate, with frequent releases of new functionality.

    Red Gate is excited about its first major involvement with open source. You may well be wondering, though, why Red Gate is doing this. Glimpse dovetails beautifully with Red Gate’s .NET tools, which makes Glimpse an ideal framework for plugging in advanced, paid-for functionality (like performance analysis) the way web developers want to see it. As a means to this end, we will contribute to the Glimpse open source project in order to broaden its adoption and delight web developers. Since bringing in .NET Reflector in 2008, we’ve learnt sharp lessons from the community about the right and wrong ways to engage with developers, not to mention the enduring value of free. Glimpse further shows what the .NET community can achieve through open source collaboration, and we’re looking forward to working with the Glimpse community to make something enduring and awesome.

    Nik and Anthony, themselves passionate advocates of community-driven software, will continue to control the Glimpse project, steering it to best meet the needs of its users and contributors. If you have any questions or queries about Glimpse, or Red Gate’s involvement in the project, please tweet with the #glimpse hashtag, contact us at Red Gate on [email protected], or post to the Glimpse Development Forum on Google Groups.

    Read the article
