Search Results

Search found 65851 results on 2635 pages for 'simple php blog'.


  • Problems with zend-tool reporting that providers are not valid

    - by Mario
    I have recently set up XAMPP 1.7.3 and Zend Framework 1.10.4 on a new computer, and many of the commands that I normally use now fail. Here are the steps I used to set up and test ZF. First I added the ZF library folder (C:\xampp\php\ZendFramework-1.10.4\library) to the include path in php.ini. Then I added the ZF bin folder (C:\xampp\php\ZendFramework-1.10.4\bin) to my Path system variable. To test that everything is configured correctly I ran the command "zf show version" from the command line. The result is "Zend Framework Version: 1.9.6". Immediately something appears to be wrong: the file that I downloaded is "ZendFramework-1.10.4.zip", yet the reported version is 1.9.6. I have re-downloaded the latest version (1.10.4) and removed the old copy, but the incorrect version number persists. Having done some research, I found a bug in the ZF knowledge base reporting that version 1.10.3 shows the wrong version number, so that may explain the version number problem. Moving forward, I tried to run some zf-tool commands, and certain commands report that the action or provider is not valid. Example:

        C:\xampp\htdocs>zf create project test
        Creating project at C:/xampp/htdocs/test

        C:\xampp\htdocs>cd test

        C:\xampp\htdocs\test>zf create controller Test
        Creating a controller at C:\xampp\htdocs\test/application/controllers/TestController.php ...
        Updating project profile 'C:\xampp\htdocs\test/.zfproject.xml'

        C:\xampp\htdocs\test>zf create action test Test
        Creating an action named test inside controller at C:\xampp\htdocs\test/application/controllers/TestController.php ...
        Updating project profile 'C:\xampp\htdocs\test/.zfproject.xml'

        C:\xampp\htdocs\test>zf enable layout
        An Error Has Occurred
        Action 'enable' is not a valid action. ...

        C:\xampp\htdocs\test>zf create form Test
        An Error Has Occurred
        Provider 'form' is not a valid provider. ...

    Can anyone provide insight into these errors and how to correct them?

    Read the article

  • SQL Relay - G is for GO

    - by fatherjack
    At the SQL Relay event last week all the UK user group leaders did a combined session - The A to Z of SQL - where we all took two letters of the alphabet and gave a 2 minute (it was strictly timed) talk on something SQL related beginning with those letters. It was quite a riot working through 26 different talks in an hour with 25 speaker handovers and the associated switches between SSMS and the slide deck. As a speaker I thoroughly enjoyed it and I hope we informed as much as we entertained the...(read more)

    Read the article

  • 5 Reasons why I hate WPF

    - by Richard Mitchell
    I decided to use writing a new tool as a way to learn WPF and MVVM, and I thought I'd write down a few of my problems as a form of cathartic release. I decided to read a book before attempting WPF for the first time, as I've heard others complain about the steep learning curve. I chose the rather excellent "WPF 4 Unleashed" by Adam Nathan to read through and "Pro WPF in C# 2010" by Matthew MacDonald as a reference whilst I programmed. 1 - Poor editing support for XAML: The first thing I think any...(read more)

    Read the article

  • Who Tests the Tester?

    It is scarcely surprising that it can take up to five years to release a new version of SQL Server when one understands the extent of the effort required to test it. When enterprises depend on the reliability of an application or tool such as SQL Backup, the contribution of the tester is of paramount importance. It is an interesting and enjoyable role as well, as Andrew Clarke found out by chatting to testers at Red Gate.

    Read the article

  • Avoiding Flicker with JQuery Tabs

    - by Damon
    I am a huge fan of JQuery because it seems like every time I want to do something it has a plugin that already does it.  Adding a tabbed interface to a web page was always quite an annoyance, but JQuery UI offers a pretty decent tabs solution (click here to see it).  If you read through the documentation, you'll find that you can create a tabbed interface by calling the tabs() method on an element containing an unordered list.  The only problem that I've experienced with the method is that on slower machines you can see the unordered list render out in its original state before being updated into the final tabbed interface.  A quick way to fix that issue is to set the CSS display property of the element to none, then call the show() method directly after calling the tabs() method.  This keeps the element completely hidden while JQuery sets up the tabs interface and eliminates the flicker.

        <SCRIPT type="text/javascript">
            $(function()
            {
                $("#tabs").tabs();
                $("#tabs").show();
            });
        </SCRIPT>

        <div id="tabs" style="display:none;">
            <ul>
                <li><a href="#tabs-1">First Tab</a></li>
                <li><a href="#tabs-2">Second Tab</a></li>
                <li><a href="#tabs-3">Third Tab</a></li>
            </ul>
            ...
        </div>

    Read the article

  • jQuery .load() wait till content is loaded

    - by user1785870
    How can I prevent jQuery's $('body').load('something.php'); from changing any of the DOM until all the content from something.php (including images and js) is fully loaded? Let's say the actual content is: Hello world. And something.php contains: an image that loads for 10 seconds, and 20 js plugins. After firing the .load() function nothing should happen until the images and js files are fully loaded, and THEN the content should change instantly. Some preloader may appear, but that's not the subject of the question.

    Read the article

  • Creating an ITemplate from a String

    - by Damon
    I do a lot of work with control templates, and one of the pieces of functionality that I've always wanted is the ability to build a ITemplate from a string.  Throughout the years, the topic has come up from time to time, and I never really found anything about how to do it. though I have run across a number of postings from people who are also wanting the same capability.  As I was messing around with things the other day, I stumbled on how to make it work and I feel really foolish for not figuring it out sooner. ITemplate is an interface that exposes a single method named InstantiateIn.  I've been searching for years for some magical .NET framework component that would take a string and convert it into an ITemplate, when all along I could just build my own.  Here's the code: /// <summary> ///   Allows string-based ITempalte implementations /// </summary> public class StringTemplate : ITemplate {     #region Constructor(s)     ////////////////////////////////////////////////////////////////////////////////////////////     /// <summary>     ///   Constructor     /// </summary>     /// <param name="template">String based version of the control template.</param>     public StringTemplate(string template)     {         Template = template;     }     /// <summary>     ///   Constructor     /// </summary>     /// <param name="template">String based version of the control template.</param>     /// <param name="copyToContainer">True to copy intermediate container contents to the instantiation container, False to leave the intermediate container in place.</param>     public StringTemplate(string template, bool copyToContainer)     {         Template = template;         CopyToContainer = copyToContainer;     }     ////////////////////////////////////////////////////////////////////////////////////////////     #endregion     #region Properties     ////////////////////////////////////////////////////////////////////////////////////////////     /// <summary>     ///   String based template     /// </summary>     public string Template     {         get;         set;     }     /// <summary>     ///   When a StringTemplate is instantiated it is created inside an intermediate control     ///   due to limitations of the .NET Framework.  Specifying True for the CopyToContainer     ///   property copies all the controls from the intermediate container into instantiation     ///   container passed to the InstantiateIn method.     /// </summary>     public bool CopyToContainer     {         get;         set;     }     ////////////////////////////////////////////////////////////////////////////////////////////     #endregion     #region ITemplate Members     ////////////////////////////////////////////////////////////////////////////////////////////     /// <summary>     ///   Creates the template in the specified control.     
/// </summary>     /// <param name="container">Control in which to make the template</param>     public void InstantiateIn(Control container)     {         Control tempContainer = container.Page.ParseControl(Template);         if (CopyToContainer)         {             for (int i = tempContainer.Controls.Count - 1; i >= 0; i--)             {                 Control tempControl = tempContainer.Controls[i];                 tempContainer.Controls.RemoveAt(i);                 container.Controls.AddAt(0, tempControl);             }                         }         else         {             container.Controls.Add(tempContainer);         }     }     ////////////////////////////////////////////////////////////////////////////////////////////     #endregion } //class Converting a string into a user control is fairly easy using the ParseControl method from a Page object.  Fortunately, the container passed into the InstantiateIn method has a Page property.  One caveat, however, is that the Page property only has a reference to a Page if the container is located ON the page.  If you run into this problem, you may have to find a creative way to get the Page reference (you can add it to the constructor, store it in the request context, etc).  Another issue that I ran into is that the ParseControl creates a new control, parses the string template, places any controls defined in the template onto the new control it created, and returns that new control with the template on it.  You cannot pass in your own container. Adding this directly to the container provided as a parameter in the InstantiateIn means that you end up with an additional "level" in the control hierarchy.  To avoid this, I added code in that removes each control from the intermediate container and places it into the actual container.  I am not, however, sure about the performance penalty associated with moving a bunch of control from one place to another, nor am I completely sure if doing such a move completely screws something up if you have a code behind, etc.  It seems to work when it's just a template, but my testing was ever-so-slightly shy of thorough when it comes to other crazy scenarios.  As a catch-all, I added a Boolean property called CopyToContainer that allows you to turn the copying on or off depending on your desires and needs. Technorati Tags: .NET,ASP.NET,ITemplate,Development,C#,Custom Controls,Server Controls
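
    As a rough illustration of how a class like this might be consumed - the markup string and the DetailsPlaceHolder control below are assumptions for the example, not part of the original post - the template can be instantiated into a control that is already on the page:

        // Hypothetical usage of the StringTemplate class described above.
        // "DetailsPlaceHolder" is assumed to be an asp:PlaceHolder declared on the page,
        // so container.Page is available when InstantiateIn calls ParseControl.
        protected void Page_Load(object sender, EventArgs e)
        {
            string markup =
                @"<div class=""greeting"">
                    <asp:Label ID=""GreetingLabel"" runat=""server"" Text=""Hello from a string template"" />
                  </div>";

            // CopyToContainer = true moves the parsed controls straight into the
            // PlaceHolder rather than leaving the intermediate container in the tree.
            ITemplate template = new StringTemplate(markup, true);
            template.InstantiateIn(DetailsPlaceholder);
        }

    Assigning a StringTemplate to something like a Repeater's ItemTemplate should also work, but may run into the Page-reference caveat mentioned above, because the item containers are not necessarily attached to the page at the moment InstantiateIn is called.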

    Read the article

  • Getting started with Windows Azure

    - by jonhobbs
    Hi All, I'm going to sound like a complete newbie here but here goes... I've just signed up for a Windows Azure account and was hoping to get a simple hello world aspx page up and running in a browser to see how it all works, but I can't seem to find a simple guide to getting a very simple web application running. I've got as far as setting up a "service" and going onto the "deploy" page, but it's asking me to upload an "application package". I've looked on MSDN but there aren't any simple guides, just reams of documentation talking about "roles" and "development fabric". For somebody that is proficient in HTML/CSS and knows a little bit about asp.net it may just as well be in another language. So, does anybody know how to upload a simple aspx page and then access it in a browser? Jon

    Read the article

  • F# and the rose-tinted reflection

    - by CliveT
    We're already seeing increasing use of many cores on client desktops. It is a change that has been long predicted. It is not just a change in architecture, but our notions of efficiency in a program. No longer can we focus on the asymptotic complexity of an algorithm by counting the steps that a single core processor would take to execute it. Instead we'll soon be more concerned about the scalability of the algorithm and how well we can increase the performance as we increase the number of cores. This may even lead us to throw away our most efficient algorithms, and switch to less efficient algorithms that scale better. We might even be willing to waste cycles in order to speculatively execute at the algorithm rather than the hardware level. State is the big headache in this parallel world. At the hardware level, main memory doesn't necessarily contain the definitive value corresponding to a particular address. An update to a location might still be held in a CPU's local cache and it might be some time before the value gets propagated. To get the latest value, and the notion of "latest" takes a lot of defining in this world of rapidly mutating state, the CPUs may well need to communicate to decide who has the definitive value of a particular address in order to avoid lost updates. At the user program level, this means programmers will need to lock objects before modifying them, or attempt to avoid the overhead of locking by understanding the memory models at a very deep level. I think it's this need to avoid statefulness that has led to the recent resurgence of interest in functional languages. In the 1980s, functional languages started getting traction when research was carried out into how programs in such languages could be auto-parallelised. Sadly, the impracticality of some of the languages, the overheads of communication during this parallel execution, and rapid improvements in compiler technology on stock hardware meant that the functional languages fell by the wayside. The one thing that these languages were good at was getting rid of implicit state, and this single idea seems like a solution to the problems we are going to face in the coming years. Whether these languages will catch on is hard to predict. The mindset for writing a program in a functional language is really very different from the way that object-oriented problem decomposition happens - one has to focus on the verbs instead of the nouns, which takes some getting used to. There are a number of hybrid functional/object languages that have been becoming more popular in recent times. These half-way houses make it easy to use functional ideas for some parts of the program while still allowing access to the underlying object-focused platform without a great deal of impedance mismatch. One example is F# running on the CLR which, in Visual Studio 2010, has because a first class member of the pack. Inside Visual Studio 2010, the tooling for F# has improved to the point where it is easy to set breakpoints and watch values change while debugging at the source level. In my opinion, it is the tooling support that will enable the widespread adoption of functional languages - without this support, people will put off any transition into the functional world for as long as they possibly can. Without tool support it will make it hard to learn these languages. One tool that doesn't currently support F# is Reflector. 
The idea of decompiling IL to a functional language is daunting, but F# is potentially so important I couldn't dismiss the idea. As I'm currently developing Reflector 6.5, I thought it wise to take four days just to see how far I could get in doing so, even if it achieved little more than to be clearer on how much was possible, and how long it might take. You can read what happened here, and of the insights it gave us on ways to improve the tool.

    Read the article

  • Single Item Recovery

    One of the more obscure parts of Exchange Server is the Dumpster. This allows the recovery of one or more deleted emails, in much the same way as the Recycle Bin in Windows, even if the user has purged them. Although it is seen as an alternative to backup, or as a means of document retention, it really provides a separate function: undoing an accidental deletion. Elie explains how to configure deleted item retention and how to recover a purged item.

    Read the article

  • Working with Reporting Services Filters–Part 1

    - by smisner
    There are two ways that you can filter data in Reporting Services. The first way, which usually provides faster performance, is to use query parameters to apply a filter using the WHERE clause in a SQL statement. In that case, the structure of the filter depends upon the syntax recognized by the source database. Another way to filter data in Reporting Services is to apply a filter to a dataset, data region, or a group. Using this latter method, you can even apply multiple filters. However, the use of filter operators or the setup of multiple filters is not always obvious, so in this series of posts, I'll provide some more information about the configuration of filters.

    First, why not use query parameters exclusively for filtering? Here are a few reasons:
    - You might want to apply a filter to part of the report, but not all of the report.
    - Your dataset might retrieve data from a stored procedure, and doesn't allow you to pass a query parameter for filtering purposes.
    - Your report might be set up as a snapshot on the report server and, in that case, cannot be dynamically filtered based on a query parameter.

    Next, let's look at how to set up a report filter in general. The process is the same whether you are applying the filter to a dataset, data region, or a group. When you go to the Filters page in the Properties dialog box for whichever of these items you selected (dataset, data region, group), you click the Add button to create a new filter. The Expression field is usually a field in the dataset, so to make it easier for you to make a selection, the drop-down list displays all of the current dataset fields. But notice the expression button to the right, which means that you can set up any type of expression - not just a dataset field. To the right of the expression button, you'll find a data type drop-down list. It's important to specify the correct data type for the field or expression you're using.

    Now for the operators. Here's a list of the options that you have:

        This Operator                   Performs This Action
        =, <>, >, >=, <, <=, Like       Compares expression to value
        Top N, Bottom N                 Compares expression to Top (Bottom) set of N values (N = integer)
        Top %, Bottom %                 Compares expression to Top (Bottom) N percent of values (N = integer or float)
        Between                         Determines whether expression is between two values, inclusive
        In                              Determines whether expression is found in list of values

    Last, the Value is what you're comparing to the expression using the operator. The construction of a filter using some operators (=, <>, >, etc.) is fairly simple. If my dataset (for AdventureWorks data) has a Category field, and I have a parameter that prompts the user for a single category, I can set up a filter like this:

        Expression     Data Type    Operator    Value
        [Category]     Text         =           [@Category]

    But if I set the parameter to accept multiple values, I need to change the operator from = to In, just as I would have to do if I were using a query parameter. The parameter expression, [@Category], which translates to =Parameters!Category.Value, doesn't need to change because it represents an array as soon as I change the parameter to allow multiple values. The "In" operator requires an array.

    With that in mind, let's consider a variation on Value. Let's say that I have a parameter that prompts the user for a particular year - and for simplicity's sake, this parameter only allows a single value - and I have an expression that evaluates the previous year based on the user's selection. Then I want to use these two values in two separate filters with an OR condition. That is, I want to filter either by the year selected OR by the year that was computed. If I create two filters, one for each year (as shown below), then the report will only display results if BOTH filter conditions are met - which would never be true.

        Expression        Data Type    Operator    Value
        [CalendarYear]    Integer      =           [@Year]
        [CalendarYear]    Integer      =           =Parameters!Year.Value-1

    To handle this scenario, we need to create a single filter that uses the "In" operator, and then set up the Value expression as an array. To create an array, we use the Split function after creating a string that concatenates the two values, as shown below.

        Expression                          Data Type    Operator    Value
        =Cstr(Fields!CalendarYear.Value)    Text         In          =Split(CStr(Parameters!Year.Value) + "," + CStr(Parameters!Year.Value-1), ",")

    Note that in this case, I had to apply a string conversion on the year integer so that I could concatenate the parameter selection with the calculated year. Pay attention to the second argument of the Split function - you must use a comma delimiter for the result to work correctly with the In operator. I also had to change the Expression value from [CalendarYear] (or =Fields!CalendarYear.Value) so that the expression would return a string that I could compare with the values in the string array. More fun with filter expressions in future posts!

    Read the article

  • .NET Security Part 3

    - by Simon Cooper
    You write a security-related application that allows addins to be used. These addins (as dlls) can be downloaded from anywhere, and, if allowed to run full-trust, could open a security hole in your application. So you want to restrict what the addin dlls can do, using a sandboxed appdomain, as explained in my previous posts. But there needs to be an interaction between the code running in the sandbox and the code that created the sandbox, so the sandboxed code can control or react to things that happen in the controlling application. Sandboxed code needs to be able to call code outside the sandbox. Now, there are various methods of allowing cross-appdomain calls, the two main ones being .NET Remoting with MarshalByRefObject, and WCF named pipes. I’m not going to cover the details of setting up such mechanisms here, or which you should choose for your specific situation; there are plenty of blogs and tutorials covering such issues elsewhere. What I’m going to concentrate on here is the more general problem of running fully-trusted code within a sandbox, which is required in most methods of app-domain communication and control. Defining assemblies as fully-trusted In my last post, I mentioned that when you create a sandboxed appdomain, you can pass in a list of assembly strongnames that run as full-trust within the appdomain: // get the Assembly object for the assembly Assembly assemblyWithApi = ... // get the StrongName from the assembly's collection of evidence StrongName apiStrongName = assemblyWithApi.Evidence.GetHostEvidence<StrongName>(); // create the sandbox AppDomain sandbox = AppDomain.CreateDomain( "Sandbox", null, appDomainSetup, restrictedPerms, apiStrongName); Any assembly that is loaded into the sandbox with a strong name the same as one in the list of full-trust strong names is unconditionally given full-trust permissions within the sandbox, irregardless of permissions and sandbox setup. This is very powerful! You should only use this for assemblies that you trust as much as the code creating the sandbox. So now you have a class that you want the sandboxed code to call: // within assemblyWithApi public class MyApi { public static void MethodToDoThings() { ... } } // within the sandboxed dll public class UntrustedSandboxedClass { public void DodgyMethod() { ... MyApi.MethodToDoThings(); ... } } However, if you try to do this, you get quite an ugly exception: MethodAccessException: Attempt by security transparent method ‘UntrustedSandboxedClass.DodgyMethod()’ to access security critical method ‘MyApi.MethodToDoThings()’ failed. Security transparency, which I covered in my first post in the series, has entered the picture. Partially-trusted code runs at the Transparent security level, fully-trusted code runs at the Critical security level, and Transparent code cannot under any circumstances call Critical code. Security transparency and AllowPartiallyTrustedCallersAttribute So the solution is easy, right? Make MethodToDoThings SafeCritical, then the transparent code running in the sandbox can call the api: [SecuritySafeCritical] public static void MethodToDoThings() { ... } However, this doesn’t solve the problem. When you try again, exactly the same exception is thrown; MethodToDoThings is still running as Critical code. What’s going on? By default, a fully-trusted assembly always runs Critical code, irregardless of any security attributes on its types and methods. 
This is because it may not have been designed in a secure way when called from transparent code – as we’ll see in the next post, it is easy to open a security hole despite all the security protections .NET 4 offers. When exposing an assembly to be called from partially-trusted code, the entire assembly needs a security audit to decide what should be transparent, safe critical, or critical, and close any potential security holes. This is where AllowPartiallyTrustedCallersAttribute (APTCA) comes in. Without this attribute, fully-trusted assemblies run Critical code, and partially-trusted assemblies run Transparent code. When this attribute is applied to an assembly, it confirms that the assembly has had a full security audit, and it is safe to be called from untrusted code. All code in that assembly runs as Transparent, but SecurityCriticalAttribute and SecuritySafeCriticalAttribute can be applied to individual types and methods to make those run at the Critical or SafeCritical levels, with all the restrictions that entails. So, to allow the sandboxed assembly to call the full-trust API assembly, simply add APCTA to the API assembly: [assembly: AllowPartiallyTrustedCallers] and everything works as you expect. The sandboxed dll can call your API dll, and from there communicate with the rest of the application. Conclusion That’s the basics of running a full-trust assembly in a sandboxed appdomain, and allowing a sandboxed assembly to access it. The key is AllowPartiallyTrustedCallersAttribute, which is what lets partially-trusted code call a fully-trusted assembly. However, an assembly with APTCA applied to it means that you have run a full security audit of every type and member in the assembly. If you don’t, then you could inadvertently open a security hole. I’ll be looking at ways this can happen in my next post.
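
    For reference, here is a minimal sketch of how the hosting side might fit together using .NET Remoting with MarshalByRefObject (one of the two cross-appdomain mechanisms mentioned above). The path, the permission set, and the "UntrustedAddin"/"IAddin" names are assumptions for the example, not details from the post:

        using System;
        using System.Security;
        using System.Security.Permissions;
        using System.Security.Policy;

        // Assumed shared contract, defined in a trusted assembly referenced by both sides.
        public interface IAddin
        {
            void DodgyMethod();
        }

        static class SandboxHost
        {
            public static void RunAddin()
            {
                var setup = new AppDomainSetup { ApplicationBase = @"C:\Addins" };

                // Minimal grant set for the sandboxed code: execution only.
                var permissions = new PermissionSet(PermissionState.None);
                permissions.AddPermission(
                    new SecurityPermission(SecurityPermissionFlag.Execution));

                // The full-trust API assembly, identified by its strong name (as above).
                StrongName apiStrongName = typeof(MyApi).Assembly
                    .Evidence.GetHostEvidence<StrongName>();

                AppDomain sandbox = AppDomain.CreateDomain(
                    "Sandbox", null, setup, permissions, apiStrongName);

                // The addin class is assumed to derive from MarshalByRefObject and to
                // implement IAddin, so unwrapping yields a proxy into the sandbox.
                var addin = (IAddin)sandbox.CreateInstanceAndUnwrap(
                    "UntrustedAddin", "UntrustedAddin.UntrustedSandboxedClass");

                addin.DodgyMethod(); // runs partially trusted, but can call the APTCA'd MyApi
            }
        }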

    Read the article

  • UTF8 encoding not working when using ajax.

    - by Ronedog
    I recently changed some of my pages to be displayed via ajax and I am having some confusion as to why the utf8 encoding is now displaying a question mark inside of a box, whereas before it wasn't. For example, the original page was index.php. The charset was explicitly set to utf8 in the <head>, and I then used php to query the database. Here is the original index.php page:

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
        <html lang="en">
        <head>
        <meta http-equiv="content-type" content="text/html; charset=UTF-8">
        <title>Title here</title>
        </head>
        <body class='body_bgcolor' >
        <div id="main_container">
        <?php
            // Data displayed via php was simply a select statement that output the HTML.
        ?>
        </div>

    However, when I made the change to add a menu that populated the "main_container" via ajax, all the utf8 encoding stopped working. Here's the new code:

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
        <html lang="en">
        <head>
        <meta http-equiv="content-type" content="text/html; charset=UTF-8">
        <title>Title here</title>
        </head>
        <body class='body_bgcolor' >
        <a href="#" onclick="display_html('about_us');"> About Us </a>
        <div id="main_container"></div>

    The "display_html()" function calls the javascript page which uses a jquery ajax call to retrieve the html stored inside a php page, then places the html inside the div with an id of "main_container". I'm setting the charset in jquery to be utf8 like:

        $.ajax({
            async: false,
            type: "GET",
            url: url,
            contentType: "charset=utf-8",
            success: function(data) {
                $("#main_container").html(data);
            }
        });

    What am I doing wrong?

    Read the article

  • Problem with dropdown and codeigniter

    - by JEagle
    Hi, I'm using 2 dropdowns where the second gets populated from the first choice. My problem is that I'm not getting the value from the first dropdown. What I get is [object Object]. Here's the javascript and php code. Thanks.

    Javascript:

        function getState(){
            $("#selectestate").bind("change",function(){
                $("#selectcity").load("results/ajaxcity", {stat: $(this).val()} );
                //This is where the problem is
                alert({stat: $(this).val()});//Shows [object Object]
            });
            return false;
        }

    PHP:

        $curstat = $this -> input -> post('state'); //current selected state in first dropdown

        <tr>
        <?php $js = 'id="selectstate" onChange="getState();"';?>
        <td><h3> State: </h3></td>
        <td id="selectestate"><?php echo form_dropdown('state', $stat, $curstat, $js);?></td>
        </tr>
        <tr>
        <td><h3> City: </h3></td>
        <td id="selectcity"><?php echo form_dropdown('city', $cit);?></td>
        </tr>

    Read the article

  • PASS 13 Dispatches: moving to the cloud

    - by Tony Davis
    PASS Summit 13, Day 1 keynote by Quentin Clarke and we're hearing about “redefining mission critical in the cloud”. With a move to the Windows Azure cloud comes the promise of capacity on demand, automatic HA, backups, patching and so on, as well as passing responsibility to MS for managing hardware, upgrades and so on. However, for many databases and applications the best route to the cloud is not necessarily obvious. For most, the path of least resistance is IaaS – SQL Server in an Azure VM. It removes the hardware burden, but you still have to manage your databases, and implementing HA for SQL Server is your responsibility. Also, scaling up comes at quite a cost – the biggest VM (8 CPU cores, 56 GB RAM, 16 1TB drives with 500 IOPS each) weighs in at over $4500 per month. With PaaS, in the form of Windows SQL Database, you get a “3-copies replica set” so HA comes out of the box, and it removes the majority of the administration burden, but you are moving your database into a very different environment. For a start, it's a shared environment, with other customers using the same compute nodes in the cluster, and potentially even sharing the same database (multi-tenancy). Unless you pay for SQL DB Premium edition, the resources available for your workload will depend on how nicely others “play” in the shared environment. You'll potentially need to do a lot of tuning and application rewriting to avoid throttling issues, optimising application-database communication to deal with increased latency between the two, and so on. You'll need aggressive application caching. You'll also need retry logic to deal with (expected) node failure and the need to reconnect. In Tuesday's PASS Summit pre-con from the SQLCAT team, they spent a lot of time covering some of the telemetric techniques (collecting the necessary monitoring data into Azure storage) to perform capacity planning and work out the hotspots and bottlenecks in your cloud applications. Tools like WAD (Windows Azure Diagnostics), performance counters, SQL Database DMVs, and others, will be essential. Of course, to truly exploit the vast horizontal scaling that is available from the existence of thousands of compute nodes, you'll also need to consider how to “shard” your data so Azure can move it between nodes at will. Finding the right path to the Cloud isn't easy, but it's coming. I spoke to people one year ago who saw no real benefit in trying to move their infrastructure and databases to the cloud, but now at their company, it's the conversation that won't go away. Tony.

    Read the article

  • SQLBeat Episode 11 – Ted the Fred Krueger Halloween SQL

    - by SQLBeat
    In this episode of the SQLBeat Podcast I speak conversationally (Ok I will just say I converse) with Ted Krueger about Elm Street, where he works as a DBA who stores nightmares in SQL Server database tables. The joke about it being BLOB storage is only one of several that may scare you away from this Halloween Special. If you like listening to two SQL guys talking about the bands they used to be in, rainbow trout and video games, come on in. Bwaaaa Haaaa Haaa…..Ok I will stop. Download the MP3

    Read the article

  • OCS Disaster Recovery – Part 1

    Office Communications Server stores its data in a variety of locations, which can make backing it up and disaster recovery a slightly tricky business. Thankfully, Johan Velduis is here to tell us which databases and fileshares we need to worry about, and give us peace of mind.

    Read the article

  • Anatomy of a .NET Assembly - PE Headers

    - by Simon Cooper
    Today, I'll be starting a look at what exactly is inside a .NET assembly - how the metadata and IL is stored, how Windows knows how to load it, and what all those bytes are actually doing. First of all, we need to understand the PE file format. PE files .NET assemblies are built on top of the PE (Portable Executable) file format that is used for all Windows executables and dlls, which itself is built on top of the MSDOS executable file format. The reason for this is that when .NET 1 was released, it wasn't a built-in part of the operating system like it is nowadays. Prior to Windows XP, .NET executables had to load like any other executable, had to execute native code to start the CLR to read & execute the rest of the file. However, starting with Windows XP, the operating system loader knows natively how to deal with .NET assemblies, rendering most of this legacy code & structure unnecessary. It still is part of the spec, and so is part of every .NET assembly. The result of this is that there are a lot of structure values in the assembly that simply aren't meaningful in a .NET assembly, as they refer to features that aren't needed. These are either set to zero or to certain pre-defined values, specified in the CLR spec. There are also several fields that specify the size of other datastructures in the file, which I will generally be glossing over in this initial post. Structure of a PE file Most of a PE file is split up into separate sections; each section stores different types of data. For instance, the .text section stores all the executable code; .rsrc stores unmanaged resources, .debug contains debugging information, and so on. Each section has a section header associated with it; this specifies whether the section is executable, read-only or read/write, whether it can be cached... When an exe or dll is loaded, each section can be mapped into a different location in memory as the OS loader sees fit. In order to reliably address a particular location within a file, most file offsets are specified using a Relative Virtual Address (RVA). This specifies the offset from the start of each section, rather than the offset within the executable file on disk, so the various sections can be moved around in memory without breaking anything. The mapping from RVA to file offset is done using the section headers, which specify the range of RVAs which are valid within that section. For example, if the .rsrc section header specifies that the base RVA is 0x4000, and the section starts at file offset 0xa00, then an RVA of 0x401d (offset 0x1d within the .rsrc section) corresponds to a file offset of 0xa1d. Because each section has its own base RVA, each valid RVA has a one-to-one mapping with a particular file offset. PE headers As I said above, most of the header information isn't relevant to .NET assemblies. To help show what's going on, I've created a diagram identifying all the various parts of the first 512 bytes of a .NET executable assembly. I've highlighted the relevant bytes that I will refer to in this post: Bear in mind that all numbers are stored in the assembly in little-endian format; the hex number 0x0123 will appear as 23 01 in the diagram. The first 64 bytes of every file is the DOS header. This starts with the magic number 'MZ' (0x4D, 0x5A in hex), identifying this file as an executable file of some sort (an .exe or .dll). Most of the rest of this header is zeroed out. The important part of this header is at offset 0x3C - this contains the file offset of the PE signature (0x80). 
Between the DOS header & PE signature is the DOS stub - this is a stub program that simply prints out 'This program cannot be run in DOS mode.\r\n' to the console. I will be having a closer look at this stub later on. The PE signature starts at offset 0x80, with the magic number 'PE\0\0' (0x50, 0x45, 0x00, 0x00), identifying this file as a PE executable, followed by the PE file header (also known as the COFF header). The relevant field in this header is in the last two bytes, and it specifies whether the file is an executable or a dll; bit 0x2000 is set for a dll. Next up is the PE standard fields, which start with a magic number of 0x010b for x86 and AnyCPU assemblies, and 0x20b for x64 assemblies. Most of the rest of the fields are to do with the CLR loader stub, which I will be covering in a later post. After the PE standard fields comes the NT-specific fields; again, most of these are not relevant for .NET assemblies. The one that is is the highlighted Subsystem field, and specifies if this is a GUI or console app - 0x20 for a GUI app, 0x30 for a console app. Data directories & section headers After the PE and COFF headers come the data directories; each directory specifies the RVA (first 4 bytes) and size (next 4 bytes) of various important parts of the executable. The only relevant ones are the 2nd (Import table), 13th (Import Address table), and 15th (CLI header). The Import and Import Address table are only used by the startup stub, so we will look at those later on. The 15th points to the CLI header, where the CLR-specific metadata begins. After the data directories comes the section headers; one for each section in the file. Each header starts with the section's ASCII name, null-padded to 8 bytes. Again, most of each header is irrelevant, but I've highlighted the base RVA and file offset in each header. In the diagram, you can see the following sections: .text: base RVA 0x2000, file offset 0x200 .rsrc: base RVA 0x4000, file offset 0xa00 .reloc: base RVA 0x6000, file offset 0x1000 The .text section contains all the CLR metadata and code, and so is by far the largest in .NET assemblies. The .rsrc section contains the data you see in the Details page in the right-click file properties page, but is otherwise unused. The .reloc section contains address relocations, which we will look at when we study the CLR startup stub. What about the CLR? As you can see, most of the first 512 bytes of an assembly are largely irrelevant to the CLR, and only a few bytes specify needed things like the bitness (AnyCPU/x86 or x64), whether this is an exe or dll, and the type of app this is. There are some bytes that I haven't covered that affect the layout of the file (eg. the file alignment, which determines where in a file each section can start). These values are pretty much constant in most .NET assemblies, and don't affect the CLR data directly. Conclusion To summarize, the important data in the first 512 bytes of a file is: DOS header. This contains a pointer to the PE signature. DOS stub, which we'll be looking at in a later post. PE signature PE file header (aka COFF header). This specifies whether the file is an exe or a dll. PE standard fields. This specifies whether the file is AnyCPU/32bit or 64bit. PE NT-specific fields. This specifies what type of app this is, if it is an app. Data directories. The 15th entry (at offset 0x168) contains the RVA and size of the CLI header inside the .text section. Section headers. These are used to map between RVA and file offset. 
The important one is .text, which is where all the CLR data is stored. In my next post, we'll start looking at the metadata used by the CLR directly, which is all inside the .text section.
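
    To make the layout described above concrete, here is a small sketch, not taken from the article, that reads a few of the fields just discussed (the MZ magic, the PE signature pointer at offset 0x3C, the dll flag in the COFF header, and the PE magic) directly from a file:

        // Reads the header fields discussed above from an assembly on disk.
        using System;
        using System.IO;

        static class PeHeaderReader
        {
            public static void Dump(string path)
            {
                using (var reader = new BinaryReader(File.OpenRead(path)))
                {
                    // DOS header: magic 'MZ', then the PE signature offset at 0x3C.
                    if (reader.ReadUInt16() != 0x5A4D)
                        throw new BadImageFormatException("Missing MZ magic");
                    reader.BaseStream.Position = 0x3C;
                    uint peOffset = reader.ReadUInt32();

                    // PE signature: 'PE\0\0'.
                    reader.BaseStream.Position = peOffset;
                    if (reader.ReadUInt32() != 0x00004550)
                        throw new BadImageFormatException("Missing PE signature");

                    // COFF header: Characteristics is the last 2 of its 20 bytes;
                    // bit 0x2000 marks a dll.
                    reader.BaseStream.Position = peOffset + 4 + 18;
                    ushort characteristics = reader.ReadUInt16();
                    bool isDll = (characteristics & 0x2000) != 0;

                    // PE standard fields start right after the COFF header;
                    // the magic is 0x10b for x86/AnyCPU and 0x20b for x64.
                    ushort magic = reader.ReadUInt16();

                    Console.WriteLine("{0}: {1}, magic 0x{2:x}",
                        Path.GetFileName(path), isDll ? "dll" : "exe", magic);
                }
            }
        }

    Running this over a typical AnyCPU class library should report a dll with magic 0x10b, matching the values highlighted in the diagram.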

    Read the article

  • Using Solr and Zend's Lucene port together...

    - by thebluefox
    Afternoon chaps, After my adventures with Zend-Lucene-Search, and discovering it isn't all it's cracked up to be when indexing large datasets, I've turned to Solr (thanks to Bill Karwin for that :) ) I've got Solr indexing the db far far quicker now, taking just over 8 minutes to index a table of just over 1.7 million rows - which I'm very pleased with. However, when I come to try and search the index with the Zend port, I run into the following error:

        Fatal error: Uncaught exception 'Zend_Search_Lucene_Exception' with message 'Unsupported segments file format'
        in /var/www/Zend/Search/Lucene.php:407
        Stack trace:
        #0 /var/www/Zend/Search/Lucene.php(555): Zend_Search_Lucene->_readSegmentsFile()
        #1 /var/www/z_search.php(12): Zend_Search_Lucene->__construct('tmp/feeds_index')
        #2 {main}
        thrown in /var/www/Zend/Search/Lucene.php on line 407

    I've tried to have a search around but can't seem to find anything about this problem; everyone else just seems to be able to get theirs to work? Any help as always much appreciated :) Thanks, Tom

    Read the article

  • SortedDictionary and SortedList

    - by Simon Cooper
    Apart from Dictionary<TKey, TValue>, there's two other dictionaries in the BCL - SortedDictionary<TKey, TValue> and SortedList<TKey, TValue>. On the face of it, these two classes do the same thing - provide an IDictionary<TKey, TValue> interface where the iterator returns the items sorted by the key. So what's the difference between them, and when should you use one rather than the other? (as in my previous post, I'll assume you have some basic algorithm & datastructure knowledge) SortedDictionary We'll first cover SortedDictionary. This is implemented as a special sort of binary tree called a red-black tree. Essentially, it's a binary tree that uses various constraints on how the nodes of the tree can be arranged to ensure the tree is always roughly balanced (for more gory algorithmical details, see the wikipedia link above). What I'm concerned about in this post is how the .NET SortedDictionary is actually implemented. In .NET 4, behind the scenes, the actual implementation of the tree is delegated to a SortedSet<KeyValuePair<TKey, TValue>>. One example tree might look like this: Each node in the above tree is stored as a separate SortedSet<T>.Node object (remember, in a SortedDictionary, T is instantiated to KeyValuePair<TKey, TValue>): class Node { public bool IsRed; public T Item; public SortedSet<T>.Node Left; public SortedSet<T>.Node Right; } The SortedSet only stores a reference to the root node; all the data in the tree is accessed by traversing the Left and Right node references until you reach the node you're looking for. Each individual node can be physically stored anywhere in memory; what's important is the relationship between the nodes. This is also why there is no constructor to SortedDictionary or SortedSet that takes an integer representing the capacity; there are no internal arrays that need to be created and resized. This may seen trivial, but it's an important distinction between SortedDictionary and SortedList that I'll cover later on. And that's pretty much it; it's a standard red-black tree. Plenty of webpages and datastructure books cover the algorithms behind the tree itself far better than I could. What's interesting is the comparions between SortedDictionary and SortedList, which I'll cover at the end. As a side point, SortedDictionary has existed in the BCL ever since .NET 2. That means that, all through .NET 2, 3, and 3.5, there has been a bona-fide sorted set class in the BCL (called TreeSet). However, it was internal, so it couldn't be used outside System.dll. Only in .NET 4 was this class exposed as SortedSet. SortedList Whereas SortedDictionary didn't use any backing arrays, SortedList does. It is implemented just as the name suggests; two arrays, one containing the keys, and one the values (I've just used random letters for the values): The items in the keys array are always guarenteed to be stored in sorted order, and the value corresponding to each key is stored in the same index as the key in the values array. In this example, the value for key item 5 is 'z', and for key item 8 is 'm'. Whenever an item is inserted or removed from the SortedList, a binary search is run on the keys array to find the correct index, then all the items in the arrays are shifted to accomodate the new or removed item. 
For example, if the key 3 was removed, a binary search would be run to find the array index the item was at, then everything above that index would be moved down by one: and then if the key/value pair {7, 'f'} was added, a binary search would be run on the keys to find the index to insert the new item, and everything above that index would be moved up to accomodate the new item: If another item was then added, both arrays would be resized (to a length of 10) before the new item was added to the arrays. As you can see, any insertions or removals in the middle of the list require a proportion of the array contents to be moved; an O(n) operation. However, if the insertion or removal is at the end of the array (ie the largest key), then it's only O(log n); the cost of the binary search to determine it does actually need to be added to the end (excluding the occasional O(n) cost of resizing the arrays to fit more items). As a side effect of using backing arrays, SortedList offers IList Keys and Values views that simply use the backing keys or values arrays, as well as various methods utilising the array index of stored items, which SortedDictionary does not (and cannot) offer. The Comparison So, when should you use one and not the other? Well, here's the important differences: Memory usage SortedDictionary and SortedList have got very different memory profiles. SortedDictionary... has a memory overhead of one object instance, a bool, and two references per item. On 64-bit systems, this adds up to ~40 bytes, not including the stored item and the reference to it from the Node object. stores the items in separate objects that can be spread all over the heap. This helps to keep memory fragmentation low, as the individual node objects can be allocated wherever there's a spare 60 bytes. In contrast, SortedList... has no additional overhead per item (only the reference to it in the array entries), however the backing arrays can be significantly larger than you need; every time the arrays are resized they double in size. That means that if you add 513 items to a SortedList, the backing arrays will each have a length of 1024. To conteract this, the TrimExcess method resizes the arrays back down to the actual size needed, or you can simply assign list.Capacity = list.Count. stores its items in a continuous block in memory. If the list stores thousands of items, this can cause significant problems with Large Object Heap memory fragmentation as the array resizes, which SortedDictionary doesn't have. Performance Operations on a SortedDictionary always have O(log n) performance, regardless of where in the collection you're adding or removing items. In contrast, SortedList has O(n) performance when you're altering the middle of the collection. If you're adding or removing from the end (ie the largest item), then performance is O(log n), same as SortedDictionary (in practice, it will likely be slightly faster, due to the array items all being in the same area in memory, also called locality of reference). So, when should you use one and not the other? As always with these sort of things, there are no hard-and-fast rules. But generally, if you: need to access items using their index within the collection are populating the dictionary all at once from sorted data aren't adding or removing keys once it's populated then use a SortedList. 
But if you: don't know how many items are going to be in the dictionary are populating the dictionary from random, unsorted data are adding & removing items randomly then use a SortedDictionary. The default (again, there's no definite rules on these sort of things!) should be to use SortedDictionary, unless there's a good reason to use SortedList, due to the bad performance of SortedList when altering the middle of the collection.
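
    As a quick illustration of those guidelines, here is a short sketch (not from the original post) that uses each collection in the situation it suits best:

        using System;
        using System.Collections.Generic;

        static class SortedCollectionsDemo
        {
            static void Main()
            {
                // SortedList: populated from already-sorted data, so every Add lands at
                // the end of the backing arrays (O(log n)), then trimmed to size.
                var list = new SortedList<int, string>();
                for (int i = 0; i < 513; i++)
                    list.Add(i, "value" + i);
                list.TrimExcess();                 // shrink the doubled (1024-entry) backing arrays
                Console.WriteLine(list.Keys[100]); // index-based access that only SortedList offers

                // SortedDictionary: random, unsorted inserts stay O(log n) wherever the
                // key falls, with no array shifting or resizing.
                var dict = new SortedDictionary<int, string>();
                var random = new Random(1);
                for (int i = 0; i < 513; i++)
                    dict[random.Next(10000)] = "value" + i;

                foreach (var pair in dict)         // iteration still returns keys in sorted order
                    Console.WriteLine(pair.Key + " = " + pair.Value);
            }
        }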

    Read the article

  • Red Gate's on the road in 2012 - Will you catch us?

    - by RedAndTheCommunity
    Annabel Bradford, our Communities and Events Manager, tells all about her experience of our 1st SQL Saturday of the year. The first stop this year was SQL Saturday #104 Colorado Springs, back in early January. I made the trip across from the UK just for this SQL Saturday event, and I'm so glad I did. I picked up Max from Red Gate's Pasadena office and we flew into Colorado Springs airport late on Friday evening to be greeted by freezing temperatures, which was quite a shock after the California sunshine. Rising before the sun, we arrived at Mr Biggs, the venue for the event, in the darkness. It was great to see so many smiling attendees so bright and early on a Saturday morning. Everyone was eager to learn more about SQL Server, and hundreds of people came and chatted with us at the table, saw demos and learnt more about Red Gate tools. The event highlights for the attendees were definitely the unlimited lazer quest, bowling and pool available during the break times. For Max, Grant Fritchey and I on the Red Gate table, the highlights have to be meeting customers and getting the opportunity to meet attendees who'd heard of, but wanted to know more about, Red Gate. We were delighted to hear lots of valuable feedback that we took back to share with the team. As a thank you for sharing insights about their work lives and how they use SQL Server and Red Gate tools, attendees are able to take away Red Gate SQL Server books. We aim to have a range of titles available when we exhibit, so that attendees can choose a book that's going to be most interesting to them, and that they can use as a reference back at the office. Every time I meet a Red Gate user or a member of the SQL community, I'm always overwhelmed by the enthusiasm they have for their industry. Everyone who gives up their time to learn more about their job should be rewarded, and at Red Gate we like to do just that. Red Gate has long supported the SQL community through sponsorship to facilitate user group meetings and community events, but it's only though face-to-face contact that we really get a chance to see the impact of our support. I hope we'll have the chance to see you on the road at some point this year. We'll be at a range of events, including free SQL Saturdays, one day free events 'the Red Gate way', two-day Rallys, and full-week conferences. Next stop is SQL Saturday #109 Silicon Valley on March 3rd where you'll meet Jeff and Arneh, two of our US-based SQL team members. Be sure to ask them any questions you've got about the Red Gate tools, as these guys will be delighted to hear your questions, show you the options, and will make a note of your feedback to send through to the development team. Until the next time. Happy learning! Annabel                         Grant, Max and Annabel at SQL Saturday #104 Colorado Springs

    Read the article

  • Red Gate and the Community

    - by RedAndTheCommunity
    I was lucky enough to join the Communities team in April 2011, having worked in the equally awesome (but more number-crunchy), Finance team at Red Gate for about four years before that. Being totally passionate about Red Gate, and easily excitable, it seems like the perfect place to be. Not only do I get to talk to people who love Red Gate every day, I get to think up new ways to make them love us even more. Red Gate sponsored 178 SQL Server and .NET events and user group meetings in 2011. They ranged from SQL Saturdays and Code Camps to 10 person user group meetings, from California to Krakow. We've given away cash, software, Kindles, and of course swag. The Marketing Cupboard is like a wonderland of Red Gate goodies; it is guarded day and night to make sure the greedy Red Gaters don't pilfer the treasure inside. There are Red Gate yo-yos, books, pens, ice scrapers and, over the Holidays, there were some special bears. We had to double the patrols guarding the cupboard to protect them. You can see why: Over the Holidays, we gave funding and special Holiday swag (including the adorable bears), to 10 lucky user groups, who held Christmas parties - doing everything from theatre trips to going to shooting ranges. What next? So, what about this year? In 2012 our main aim is to be out there meeting more of you. So get ready to see an army of geeks in red t-shirts at your next event! We also want to do more fun things like our Christmas party giveaway. What cool ideas do you have for sponsorship in 2012? An Easter Egg hunt with SQL server clues? A coding competition? A duelling contest with a license of SQL Toolbelt for the winner? Let me know.

    Read the article
