Search Results

Search found 62701 results on 2509 pages for 'sql function'.


  • How do I recreate a trigger in SQL Server?

    - by acidzombie24
    I use the statement DROP TRIGGER IF EXISTS trigger_name in SQLite, but SQL Server doesn't like the IF clause (I guess EXISTS is the offending word). I do this right next to my CREATE TRIGGER statement because I want to drop any older trigger with the same name and replace it with the new one. How do I do this in SQL Server?
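
    A common workaround (a sketch, not from the original post; the trigger and table names below are hypothetical) is to check the system catalog before dropping, since SQL Server of that era has no DROP TRIGGER IF EXISTS:

        -- Drop the trigger only if it already exists, then recreate it.
        -- OBJECT_ID(..., 'TR') returns NULL when no such trigger is present.
        IF OBJECT_ID('dbo.trAudit', 'TR') IS NOT NULL
            DROP TRIGGER dbo.trAudit;
        GO

        CREATE TRIGGER dbo.trAudit
        ON dbo.Orders
        AFTER INSERT
        AS
        BEGIN
            SET NOCOUNT ON;
            -- trigger body goes here
        END
        GO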

  • Looking for a webhost to support SSRS Hosting with SQL Azure

    - by Adrian Grigore
    Since SQL Azure does not currently support SSRS, the only possible workaround is to host my own SSRS server and have it point to my SQL Azure instance for data retrieval. Now, for me it would be total overkill to rent a dedicated server with SQL Server on it just for hosting SSRS. Are there any (shared) web hosts that offer SSRS hosting with third-party SQL servers? I've already asked discountasp.net, but they don't allow this. Thanks, Adrian

  • Licensing for SQL Server Merge Replication via Web Synchronization

    - by user43330
    I am planning to implement sales force automation software where 50 PDA devices running SQL Server CE 3.5 will connect via web synchronization merge replication to a central SQL Server 2005 database through an IIS server:

    1) IIS on Windows Server 2003
    2) SQL Server 2005 Standard
    3) SQL Server CE 3.5 on the 50 PDA devices

    How many licenses are required for each server? What is the licensing model?

  • SQL Server doesn't exist or access denied

    - by kareemsaad
    I have Windows 7 on my PC and I installed two VMware virtual machines. One of them runs Windows XP, on which I installed SQL Server 2000 and Visual Studio 2008; the other also runs Windows XP, with SQL Server 2005 and Visual Studio 2008. When I run SQL Server 2000, this error appears: "SQL Server doesn't exist or access denied. Please verify SQL Server is running ..."

  • Running SSIS packages from C#

    - by Piotr Rodak
    Most developers and DBAs know about two ways of deploying packages: you can deploy them to the database server and run them using a SQL Server Agent job, or you can deploy the packages to the file system and run them using the dtexec.exe utility. Both approaches have their pros and cons. However, I would like to show you that there is a third way (sort of) that is often overlooked, and it can give you capabilities the 'traditional' approaches can't. I have been working for a few years with applications that run packages from host applications implemented in .NET. As you know, SSIS provides a programming model that you can use to implement more flexible solutions. SSIS applications are usually thought to be batch oriented, with a fairly rigid architecture and processing model, with fixed timeframes when the packages are executed to process data. It doesn't have to be the case; you don't have to limit yourself to a batch oriented architecture. I have very good experiences with service oriented architectures processing large amounts of data. These applications are more complex than what I would like to show here, but the principle stays the same: you can execute packages as a service, on an ad-hoc basis. You can also implement and schedule various signals (HTTP calls, file drops, time schedules, Tibco messages and others) to run the packages. You can implement an event handler that will trigger execution of SSIS when a certain event occurs in a StreamInsight stream. This post is just a small example of how you can use the API and other features to create a service that can run SSIS packages on demand.

    I thought it might be a good idea to implement a restful service that would listen to requests and execute appropriate actions. As it turns out, it is trivial in C#. The application is implemented as a console application for the ease of debugging and running. In reality, you might want to implement the application as a Windows service. To begin, you have to reference the namespace System.ServiceModel.Web and then add a few lines of code:

        Uri baseAddress = new Uri("http://localhost:8011/");

        WebServiceHost svcHost = new WebServiceHost(typeof(PackRunner), baseAddress);

        try
        {
            svcHost.Open();

            Console.WriteLine("Service is running");
            Console.WriteLine("Press enter to stop the service.");
            Console.ReadLine();

            svcHost.Close();
        }
        catch (CommunicationException cex)
        {
            Console.WriteLine("An exception occurred: {0}", cex.Message);
            svcHost.Abort();
        }

    The interesting lines are 3, 7 and 13. In line 3 you create a WebServiceHost object. In line 7 you start listening on the defined URL and then in line 13 you shut down the service. As you have noticed, the WebServiceHost constructor accepts the type of an object (here: PackRunner) that will be instantiated as a singleton and subsequently used to process the requests. This is the class where you put your logic, but to tell WebServiceHost how to use it, the class must implement an interface which declares methods to be used by the host. The interface itself must be ornamented with the attribute ServiceContract.
        [ServiceContract]
        public interface IPackRunner
        {
            [OperationContract]
            [WebGet(UriTemplate = "runpack?package={name}")]
            string RunPackage1(string name);

            [OperationContract]
            [WebGet(UriTemplate = "runpackwithparams?package={name}&rows={rows}")]
            string RunPackage2(string name, int rows);
        }

    Each method that is going to be used by WebServiceHost has to have the attribute OperationContract, as well as a WebGet or WebInvoke attribute. The detailed discussion of the available options is outside the scope of this post. I also recommend using more descriptive method names. Then, you have to provide the implementation of the interface:

        public class PackRunner : IPackRunner
        {
            ...

    There are two methods defined in this class. Since the full code is attached to the post, I will show only the more interesting method, RunPackage2.

        /// <summary>
        /// Runs package and sets some of its variables.
        /// </summary>
        /// <param name="name">Name of the package</param>
        /// <param name="rows">Number of rows to export</param>
        /// <returns></returns>
        public string RunPackage2(string name, int rows)
        {
            try
            {
                string pkgLocation = ConfigurationManager.AppSettings["PackagePath"];

                pkgLocation = Path.Combine(pkgLocation, name.Replace("\"", ""));

                Console.WriteLine();
                Console.WriteLine("Calling package {0} with parameter {1}.", name, rows);

                Application app = new Application();
                Package pkg = app.LoadPackage(pkgLocation, null);

                pkg.Variables["User::ExportRows"].Value = rows;
                DTSExecResult pkgResults = pkg.Execute();
                Console.WriteLine();
                Console.WriteLine(pkgResults.ToString());
                if (pkgResults == DTSExecResult.Failure)
                {
                    Console.WriteLine();
                    Console.WriteLine("Errors occurred during execution of the package:");
                    foreach (DtsError er in pkg.Errors)
                        Console.WriteLine("{0}: {1}", er.ErrorCode, er.Description);
                    Console.WriteLine();
                    return "Errors occurred during execution. Contact your support.";
                }

                Console.WriteLine();
                Console.WriteLine();
                return "OK";
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex);
                return ex.ToString();
            }
        }

    The method accepts the package name and the number of rows to export. The packages are deployed to the file system; the path to the packages is configured in the application configuration file. This way, you can implement multiple services on the same machine, provided you also configure the URL for each instance appropriately. To run a package, you have to reference the Microsoft.SqlServer.Dts.Runtime namespace. This namespace is implemented in Microsoft.SQLServer.ManagedDTS.dll, which in my case was installed in the folder "C:\Program Files (x86)\Microsoft SQL Server\100\SDK\Assemblies". Once you have done that, you can create an instance of Microsoft.SqlServer.Dts.Runtime.Application as in line 18 in the above snippet. It may be a good idea to create the Application object in the constructor of the PackRunner class, to avoid the necessity of recreating it each time the service is invoked. Then, in line 19 you see that an instance of Microsoft.SqlServer.Dts.Runtime.Package is created. The method LoadPackage in its simplest form just takes the package file name as the first parameter. Before you run the package, you can set its variables to certain values.
    This is a great way of configuring your packages without all the hassle with dtsConfig files. In the above code sample, the variable "User::ExportRows" is set to the value of the parameter "rows" of the method. Eventually, you execute the package. The method doesn't throw exceptions; you have to test the result of execution yourself. If the execution wasn't successful, you can examine the collection of errors exposed by the package. These are the familiar errors you often see during development and debugging of the package. If you run the package from code, you have the opportunity to persist them or log them using your favourite logging framework. The package itself is very simple; it connects to my AdventureWorks database and saves the number of rows specified in the variable "User::ExportRows" to a file. You should know that before you run the package, you can change its connection strings, logging, events and many more. I attach a solution with the test service, as well as a project with two test packages. To test the service, you have to run it and wait for the message saying that the host is started. Then, just type (or copy and paste) the below command into your browser.

        http://localhost:8011/runpackwithparams?package=%22ExportEmployees.dtsx%22&rows=12

    When everything works fine, and you have modified the package to point to your AdventureWorks database, you should see "OK" wrapped in XML. I stopped the database service to simulate an invalid connection string situation; the output of the request is different then, and the service console window shows more information (the original post illustrates these responses with screenshots). As you see, implementing a service oriented ETL framework is not a very difficult task. You have the ability to configure the packages before you run them, and you can implement logging that is consistent with the rest of your system. In an application I have worked with, we also have resource monitoring and execution control: we don't allow more than a certain number of packages to run simultaneously. This ensures we don't strain the server and we use memory and CPUs efficiently. The attached zip file contains two projects. One is the package runner; it has to be executed with administrative privileges as it registers an HTTP namespace. The other project contains two simple packages. This is really a cool thing, you should check it out!

  • How does the compiler choose which template function to call?

    - by aCuria
    Regarding the below code, how does the compiler choose which template function to call? If the const T& function is omitted, the T& function is always called. If the T& function is omitted, the const T& function is always called. If both are included, the results are as below.

        #include <iostream>
        #include <typeinfo>

        template <typename T>
        void function(const T &t)
        {
            std::cout << "function<" << typeid(T).name()
                      << ">(const T&) called with t = " << t << std::endl;
        }

        template <typename T>
        void function(T &t)
        {
            std::cout << "function<" << typeid(T).name()
                      << ">(T&) called with t = " << t << std::endl;
        }

        int main()
        {
            int i1 = 57;
            const int i2 = -6;

            int *pi1 = &i1;
            int *const pi3 = &i1;
            const int *pi2 = &i2;
            const int *const pi4 = &i2;

            function(pi1); /// just a normal pointer -> T&
            function(pi2); /// cannot change what we point to -> T&
            function(pi3); /// cannot change where we point -> const T&
            function(pi4); /// cannot change everything -> const T&

            return 0;
        }

        /* g++ output:
        function<Pi>(T&) called with t = 0x22cd24
        function<PKi>(T&) called with t = 0x22cd20
        function<Pi>(const T&) called with t = 0x22cd24
        function<PKi>(const T&) called with t = 0x22cd20
        */

        /* bcc32 output:
        function<int *>(T&) called with t = 0012FF50
        function<const int *>(T&) called with t = 0012FF4C
        function<int *>(const T&) called with t = 0012FF50
        function<const int *>(const T&) called with t = 0012FF4C
        */

        /* cl output:
        function<int *>(T&) called with t = 0012FF34
        function<int const *>(T&) called with t = 0012FF28
        function<int *>(const T&) called with t = 0012FF34
        function<int const *>(const T&) called with t = 0012FF28
        */

  • Load an external script dynamically, and access it, so I can trigger its domready function

    - by Didier
    I am trying to load the Zopim chat client. Normally it is included with a document.write action. This causes it to be loaded before the domReady function is triggered, as it needs this to start itself. I want to load it later, and this works by using Prototype (framework determined by Magento) to create a new script element and attach it to the head. The script is loaded perfectly, but domReady doesn't fire, so the script is never started. The script is a nameless class; by this I mean that all its functions are encapsulated in {}.

    UPDATE: Sorry, I got it wrong; it is a self-invoking function, the same syntax as the first answer suggests:

        (function C(){ })();

    This function call, when run, sets up listening events for the domReady event under various browsers and then waits. When domReady fires, it calls a function (that is within the self-invoked function) that starts everything. What I need is a way to access this function somehow. END UPDATE

    Within that is a function named C. How can I call this function directly? Or, put another way, how can I start an external javascript file that depends on domReady going off, when that event doesn't happen? Can I load an external javascript file into a variable, so I can name the class? Can I access the nameless class {} somehow, maybe via the script tag? Is there a way to alter the external file/javascript so I can have it look for another event, one that I can trigger? About the only solution I can think of at the moment is to create an iframe and load the script in that.
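
    One approach worth sketching (an assumption on my part, not from the original post): after the dynamically added script has loaded, re-dispatch a synthetic DOMContentLoaded event so the listeners the script just registered still fire. Scripts that check document.readyState before listening will not be fooled by this, so it has to be tested against the actual Zopim loader.

        // Load an external script after DOM ready, then replay the
        // DOMContentLoaded event for listeners the new script registered.
        function loadScriptAndReplayReady(url) {
            var script = document.createElement('script');
            script.src = url;
            script.onload = function () {
                var evt = document.createEvent('Event');
                evt.initEvent('DOMContentLoaded', true, true);
                document.dispatchEvent(evt);
            };
            document.getElementsByTagName('head')[0].appendChild(script);
        }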

  • How to call an Ajax function recursively

    - by MAS1
    Hi guys, I want to know how to call an Ajax function recursively. My Ajax code is like this:

        <html>
        <head>
        <script type="text/javascript" src="jquery.js"></script>
        <script type="text/javascript">
        var step = '1';
        $.ajax({
            url: 'test1.php',
            data: { step: step },
            success: function(output) {
                step = eval(output);
                alert(step);
            }
        });
        </script>
        </head>
        </html>

    And the PHP code is like this:

        <?php
        function writeName()
        {
            echo "step=2";
        }

        function ReadName()
        {
            echo "In Read Name Function";
        }

        if (isset($_REQUEST['step']) && !empty($_REQUEST['step'])) {
            $step = $_REQUEST['step'];
            switch ($step) {
                case '1':
                    writeName();
                    break;
                case '2':
                    ReadName();
                    break;
                default:
                    echo "Empty String";
            }
        }
        ?>

    The first time, this function gets called with the value of the step variable equal to 1, and the function writeName changes it to 2. Now I want to call this Ajax function again with the step variable equal to 2, so that the second PHP function gets called.
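
    A minimal sketch of one way to do this (my assumption, not from the original post; it also assumes test1.php is changed to echo just the next step number, so eval() can be avoided): wrap the request in a named function and call it again from the success handler, with a stop condition so the recursion ends.

        function callStep(step) {
            $.ajax({
                url: 'test1.php',
                data: { step: step },
                success: function (output) {
                    var next = parseInt(output, 10);
                    alert(next);
                    // Recurse only while the server returned a valid next step.
                    if (!isNaN(next) && next <= 2) {
                        callStep(next);
                    }
                }
            });
        }

        callStep(1);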

  • jQuery aspx error function always called, even with apparently valid return data

    - by M Katz
    I am making an Ajax call using jQuery (jquery-1.5.2.min.js). The server receives the call, and Fiddler shows the response text coming back apparently correct. But back in javascript my error: function is called instead of my success: function, and the parameters passed to the error function don't give much clue as to the problem. Here's the function where I make the initial call:

        function SelectCBGAtClickedPoint() {
            $.ajax({
                type: "GET",
                dataType: "text",
                url: "http://localhost/ajax/SelectCBGAtPoint/1/2",
                success: function (msg) {
                    alert("success: " + msg);
                },
                error: function (jqXHR, textStatus, errorThrown) {
                    alert("error: " + jqXHR + textStatus + errorThrown);
                }
            });
        }

    Here's the function where I handle the call in my CherryPy server code:

        def ajax(ajax, *args):
            with lock:
                print "ajax request received: " + str(args)
                cherrypy.response.headers["Content-Type"] = "application/text"
                return "{ x: 0 }"

    CherryPy is an odd beast, and I was thinking the problem must lie there. But as I say, I see both the query go out and the response come back in Fiddler. Here is what Fiddler shows as the raw view of the response:

        HTTP/1.1 200 OK
        Date: Mon, 11 Apr 2011 17:49:25 GMT
        Content-Length: 8
        Content-Type: application/text
        Server: CherryPy/3.2.0

        { x: 0 }

    Looks good, but then back in javascript I get into the error: function, with the following values for the parameters (as shown in Firebug):

        errorThrown = ""
        jqXHR = Object { readyState=0, status=0, statusText="error" }
        textStatus = "error"

    I don't know where that word "error" is coming from; that string does not appear anywhere in my CherryPy server code. Note that even though I'm returning a JSON string, I've set the send and receive types to "text" for now, just to simplify in order to isolate the problem. Any ideas why I'm getting this "error" reply, even when errorThrown is empty? Could it be that I haven't properly "initialized" either jQuery or jQuery.ajax?

  • return value (not a reference) from the function, bound to a const reference in the calling function

    - by brainydexter
    "If you return a value (not a reference) from the function, then bind it to a const reference in the calling function, its lifetime would be extended to the scope of the calling function." So: const BoundingBox Player::GetBoundingBox(void) { return BoundingBox( &GetBoundingSphere() ); } Returns a value of type const BoundingBox from function GetBoundingBox() Called function: (From within function Update() the following is called:) variant I: (Bind it to a const reference) const BoundingBox& l_Bbox = l_pPlayer->GetBoundingBox(); variant II: (Bind it to a const copy) const BoundingBox l_Bbox = l_pPlayer->GetBoundingBox(); Both work fine and I don't see the l_Bbox object going out of scope. (Though, I understand in variant one, the copy constructor is not called and thus is slightly better than variant II). Also, for comparison, I made the following changes. BoundingBox Player::GetBoundingBox(void) { return BoundingBox( &GetBoundingSphere() ); } with Variants: I BoundingBox& l_Bbox = l_pPlayer->GetBoundingBox(); and II: BoundingBox l_Bbox = l_pPlayer->GetBoundingBox(); The objet l_Bbox still does not out scope. So, I don't see how "bind it to a const reference in the calling function, its lifetime would be extended to the scope of the calling function", really extends the lifetime of the object to the scope of the calling function ? Am I missing something trivial here..please explain .. Thanks a lot

  • Best practice - When to evaluate conditionals of function execution

    - by Tesserex
    If I have a function called from a few places, and it requires some condition to be met for anything it does to execute, where should that condition be checked? In my case, it's for drawing - if the mouse button is held down, then execute the drawing logic (this is being done in the mouse movement handler for when you drag).

    Option one says put it in the function so that it's guaranteed to be checked. Abstracted, if you will.

        public function Foo() {
            DoThing();
        }

        private function DoThing() {
            if (!condition) return;
            // do stuff
        }

    The problem I have with this is that when reading the code of Foo, which may be far away from DoThing, it looks like a bug. The first thought is that the condition isn't being checked. Option two, then, is to check before calling.

        public function Foo() {
            if (condition) DoThing();
        }

    This reads better, but now you have to worry about checking from everywhere you call it. Option three is to rename the function to be more descriptive.

        public function Foo() {
            DoThingOnlyIfCondition();
        }

        private function DoThingOnlyIfCondition() {
            if (!condition) return;
            // do stuff
        }

    Is this the "correct" solution? Or is this going a bit too far? I feel like if everything were like this, function names would start to duplicate their code. About this being subjective: of course it is, and there may not be a right answer, but I think it's still perfectly at home here. Getting advice from better programmers than I is the second best way to learn. Subjective questions are exactly the kind of thing Google can't answer.

  • Can't add object to Array in jQuery's getJSON data function (scope issue)

    - by seo20
    I have a person object and want to store instances in a global ArrayCollection I have made. It works great in normal scope:

        var s = new ArrayCollection();
        s.add(new person("Knud", "Mikkelsen", 35));

    The problem is when I want to add people inside my jQuery function "mainFunction". I can't seem to get it right. I know it's something to do with scope and I have to wrap something in functions like in my ArrayCollection. Please help me - thanks a lot.

        function ArrayCollection() {
            var myArray = new Array;
            return {
                empty: function () {
                    myArray.splice(0, myArray.length);
                },
                add: function (myElement) {
                    myArray.push(myElement);
                },
                getAll: function () {
                    return myArray;
                }
            }
        }

        function person(firstName, lastName, age) {
            this.firstName = firstName;
            this.lastName = lastName;
            this.age = parseInt(parseFloat(age));
        }

        function mainFunction() {
            //....
            var s = new ArrayCollection();
            s.add(new person("Knud", "Mikkelsen", 35));

            $.getJSON(url, function (data) {
                for (var x = 0; x < data.length; x++) {
                    var myPerson = new person(
                        data[x].FirstName.toString(),
                        data[x].LastName.toString(),
                        data[x].Age.toString()
                    );
                    s.add(myPerson);
                }
            });

            alert(drawArray(s.getAll()));
        }

        function drawArray(myArray) {
            var v = "";
            for (var i = 0; i < myArray.length; i++) {
                v += myArray[i].firstName + " " + myArray[i].lastName + " (" + myArray[i].age + ")\n";
            }
            return v;
        }
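
    For what it's worth, the usual cause of this symptom (my reading, not part of the original post) is timing rather than scope: $.getJSON is asynchronous, so the alert runs before the callback has added anyone. A minimal sketch of the fix is to consume the collection inside the callback:

        $.getJSON(url, function (data) {
            for (var x = 0; x < data.length; x++) {
                s.add(new person(data[x].FirstName, data[x].LastName, data[x].Age));
            }
            // Only here is the data guaranteed to have arrived.
            alert(drawArray(s.getAll()));
        });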

  • Setting javascript prototype function within object class declaration

    - by Tauren
    Normally, I've seen prototype functions declared outside the class definition, like this:

        function Container(param) {
            this.member = param;
        }

        Container.prototype.stamp = function (string) {
            return this.member + string;
        }

        var container1 = new Container('A');
        alert(container1.member);
        alert(container1.stamp('X'));

    This code produces two alerts with the values "A" and "AX". I'd like to define the prototype function INSIDE of the class definition. Is there anything wrong with doing something like this?

        function Container(param) {
            this.member = param;
            if (!Container.prototype.stamp) {
                Container.prototype.stamp = function() {
                    return this.member + string;
                }
            }
        }

    I was trying this so that I could access a private variable in the class. But I've discovered that if my prototype function references a private var, the value of the private var is always the value that was used when the prototype function was INITIALLY created, not the value in the object instance:

        Container = function(param) {
            this.member = param;
            var privateVar = param;
            if (!Container.prototype.stamp) {
                Container.prototype.stamp = function(string) {
                    return privateVar + this.member + string;
                }
            }
        }

        var container1 = new Container('A');
        var container2 = new Container('B');
        alert(container1.stamp('X'));
        alert(container2.stamp('X'));

    This code produces two alerts with the values "AAX" and "ABX". I was hoping the output would be "AAX" and "BBX". I'm curious why this doesn't work, and if there is some other pattern that I could use instead.
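
    A hedged sketch of the usual alternative (not from the original post): give each instance its own privileged method, so every closure captures its own private variable instead of the first instance's.

        function Container(param) {
            this.member = param;
            var privateVar = param;            // unique to each instance
            this.stamp = function (string) {   // per-instance, not on the prototype
                return privateVar + this.member + string;
            };
        }

        var a = new Container('A');
        var b = new Container('B');
        // a.stamp('X') yields "AAX", b.stamp('X') yields "BBX"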

  • jQuery: how to call the function again?

    - by bakazero
    I'm making a script for easily uploading many files. I've tried the fyneworks plugin, but the file name does not appear in a form that uses tiny_mce. This is the code:

        $("document").ready(function () {
            var i = 0;      // iteration for inputted file
            var max = 5;    // max files uploaded
            createStatus(); // generate status container

            $('.imageNow').change(function() {
                if ($(this).val()) {
                    var ext = $(this).val().split('.').pop().toLowerCase();
                    var allow = [ 'gif', 'png', 'jpg', 'jpeg' ];
                    if (jQuery.inArray(ext, allow) != -1) {
                        // validation true
                        addImgInput($(this).clone());
                        $(this).addClass('imgOK');
                        $(this).removeClass('imageNow');
                        addImgList($(this).val());
                        removeImgStatus();
                        i++;
                    } else {
                        addImgStatus();
                        $(this).remove();
                        addImgInput();
                    }
                }
                return false;
            });
        });

    These are the other functions that I use:

        function addImgInput(inputClone) {
            // div container for generated InputFileImg
            $(inputClone).prependTo( $('#upload_images') );
        };

        function addImgList(listingImg) {
            // list all images that have been inputted
            $('#list_image_file').append('<li>' + listingImg + '</li>' );
        }

        function createStatus() {
            // error display container
            $('#upload_images').append('<small class="status"></small>');
        }

        function addImgStatus() {
            // error display text
            $('.status').append('* Wrong File Extension');
        }

        function removeImgStatus() {
            $('.status').empty();
        }

    Mmm... yeah, it's not finished yet, because when I try to generate another input file with the imageNow class, the $('.imageNow').change(function() is not going to work anymore. Can anyone help me? Thanks.
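
    A hedged sketch of the usual fix for jQuery of this era (my assumption, not from the original post): bind with .live() instead of .change(), so the handler also applies to .imageNow inputs created after page load.

        // Delegated binding: fires for current and future .imageNow elements.
        $('.imageNow').live('change', function () {
            // ... same validation logic as in the handler above ...
        });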

  • Reactivating or rebinding a hover function in jQuery?

    - by mathiregister
    Hi guys, with the following lines:

        $( ".thumb" ).bind( "mousedown", function() {
            $('.thumb').not(this).unbind('mouseenter mouseleave');
        });

    I'm unbinding this hover function:

        $(".thumb").hover(
            function () {
                $(this).not('.text, .file, .video, .audio').stop().animate({"height": full}, "fast");
                $(this).css('z-index', z);
                z++;
            },
            function () {
                $(this).stop().animate({"height": small}, "fast");
            }
        );

    I wonder how I can re-bind the exact same hover function again on mouseup? The following lines aren't working:

        $( ".thumb" ).bind( "mouseup", function() {
            $('.thumb').bind('mouseenter mouseleave');
        });

    To explain what I want to do here: I want to deactivate the hover function for all .thumb elements (but not this) while I'm clicking on one. When I release the mouse again, the hover function should work like before. Is that even possible to do? Thank you for your help!
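
    One way to sketch this (my assumption, not from the original post): .bind('mouseenter mouseleave') with no handler attaches nothing, so keep the two hover handlers in named functions and pass the same references when rebinding.

        function thumbEnter() {
            $(this).not('.text, .file, .video, .audio').stop().animate({ "height": full }, "fast");
            $(this).css('z-index', z);
            z++;
        }

        function thumbLeave() {
            $(this).stop().animate({ "height": small }, "fast");
        }

        $('.thumb').hover(thumbEnter, thumbLeave);

        $('.thumb').bind('mousedown', function () {
            $('.thumb').not(this).unbind('mouseenter mouseleave');
        });

        $('.thumb').bind('mouseup', function () {
            // Re-attach the same handlers; unbind first to avoid doubling up.
            $('.thumb').unbind('mouseenter mouseleave')
                       .bind('mouseenter', thumbEnter)
                       .bind('mouseleave', thumbLeave);
        });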

  • Function behaviour in a shell (ksh) script

    - by footy
    Here are two different versions of a program. This program:

        #!/usr/bin/ksh
        printmsg() {
            i=1
            print "hello function :)";
        }
        i=0;
        echo I printed `printmsg`;
        printmsg
        echo $i

    Output:

        # ksh e
        I printed hello function :)
        hello function :)
        1

    and this program:

        #!/usr/bin/ksh
        printmsg() {
            i=1
            print "hello function :)";
        }
        i=0;
        echo I printed `printmsg`;
        echo $i

    Output:

        # ksh e
        I printed hello function :)
        0

    The only difference between the two programs is that printmsg is called twice in the first program, while it is called once in the second. My doubt arises here. To quote:

        Be warned: Functions act almost just like external scripts... except that by default, all variables are SHARED between the same ksh process! If you change a variable name inside a function.... that variable's value will still be changed after you have left the function!!

    But we can clearly see in the second program's output that the value of i remains unchanged, even though we are sure the function is called, since the echo statement prints the function's output. So why is the output different in the two?
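
    A minimal illustration of what is probably going on (my reading, not from the original post): backticks run the function in a command-substitution subshell, so the assignment to i never reaches the parent shell; only a direct call in the current shell changes it.

        #!/usr/bin/ksh
        setflag() { i=1; }

        i=0
        echo "subshell: `setflag`"   # setflag runs in a subshell here
        echo $i                      # still 0
        setflag                      # runs in the current shell process
        echo $i                      # now 1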

  • JavaScript check function for an HTML form

    - by Reteras Remus
    I'm having a problem with an HTML form (name, e-mail, subject, message): I can't validate whether the form was completed correctly. I want to call a javascript function when the form is submitted and alert if a field wasn't completed correctly. My problem is with the submit button, because my boss said that I must use his own PHP function to insert a submit button. The PHP function looks like:

        function normal_button($text, $width, $param = '', $class = 'button-blue')
        {
            $content = '<div class="'.$class.'" style="width:'.$width.'px;" '.$param.'>'.$text.'</div>';
            return $content;
        }

    and this is how I call the function:

        echo normal_button('Send it', '100', 'onclick="document.getElementById(\'emailForm\').submit();" ');

    I tried to call the js function by declaring onSubmit="return checkForm(this)" in the <form action='email.php' method='post'> HTML form, but I can't "catch" the submit. Also I tried to catch it with jQuery:

        $(document).ready(function() {
            $('.normal_button').click(function() {
            });
        });

    but I can't return false if a field wasn't completed correctly. I must use his PHP function to insert the submit button, but how can I "catch" the fields, and how can I validate them? Any suggestions? Thank you!
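
    A hedged sketch of one way out (my assumption, not from the original post; it assumes the inline onclick is dropped from the normal_button() call so the handler below is the only submitter): since calling form.submit() from script never fires onSubmit, run the validation in the click handler itself. 'button-blue' is the default class the PHP function emits, and checkForm is the poster's validator.

        $(document).ready(function () {
            $('.button-blue').click(function () {
                var form = document.getElementById('emailForm');
                if (checkForm(form)) {   // submit only when validation passes
                    form.submit();
                }
                return false;
            });
        });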

  • IE7 & IE8 error executing function with ajax

    - by Yahreen
    I am loading an Ajax page which executes an HTML5 video player script. The function for the Flash fallback is html5media():

        //Load 1st Case Study
        $("#splash").live('click', function (e) {
            $(this).fadeOut('slow', function () {
                $('#case-studies').load('case-study-1.html', function() {
                    html5media(); //initiate Flash fallback
                }).fadeIn();
            });
            e.preventDefault();
        });

    This initial page load works fine in IE7 & IE8. The problem is that once this page is loaded, there are links to 4 more videos which are loaded in again using Ajax. I use this function:

        //Switcher
        function csClients(url, client) {
            $("#case-studies").fadeOut('slow', function() {
                $('#case-studies').load(url, function () {
                    html5media(); //initiate Flash fallback
                }).fadeIn();
            });
        }

        //Page Loader
        $("#cs-client-list li.client1 a").live('click', function(e) {
            csClients('case-study-1.html', 'client1');
            e.preventDefault();
        });

    Originally I was using return false; but none of the sub-page Flash videos would load in IE7. When I switched to preventDefault, the videos loaded in IE7 but still not in IE8. I also get a weird error in both IE7 & IE8 with no helpful feedback:

        Error on Page: Unspecified error. / (Line 49)
        Code: 0 (Char 5)
        URI: http://www.mysite.com

    This is line 49 in my index page:

        <section id="case-studies" class="main-section">

    I have a feeling it has to do with calling html5media() too many times? At a loss...

  • JavaScript: When does JavaScript evaluate a function, onload or when the function is called?

    - by Benj
    When does JavaScript evaluate a function? Is it on page load or when the function is called? The reason why I ask is because I have the following code:

        function scriptLoaded() {
            // one of our scripts finished loading, detect which scripts are available:
            var jQuery = window.jQuery;
            var maps = window.google && google.maps;

            if (maps && !requiresGmaps.called) {
                requiresGmaps.called = true;
                requiresGmaps();
            }
            if (jQuery && !requiresJQuery.called) {
                requiresJQuery.called = true;
                requiresJQuery();
            }
            if (maps && jQuery && !requiresBothJQueryGmaps.called) {
                requiresBothJQueryGmaps.called = true;
                requiresBothJQueryGmaps();
            }
        }

        // asynch download of script
        function addScript(url) {
            var script = document.createElement('script');
            script.src = url;
            // older IE...
            script.onreadystatechange = function () {
                if (this.readyState == 'complete') scriptLoaded.call(this);
            }
            script.onload = scriptLoaded;
            document.getElementsByTagName('head')[0].appendChild(script);
        }

        addScript('http://google.com/gmaps.js');
        addScript('http://jquery.com/jquery.js');

        // define some function dependencies
        function requiresJQuery() {
            // create JQuery objects
        }

        function requiresGmaps() {
            // create Google Maps object, etc
        }

        function requiresBothJQueryGmaps() { ... }

    What I want to do is perform an asynchronous download of my JavaScript and start executing those scripts at the earliest possible time, but my code has dependencies on when the scripts have been downloaded and loaded. When I try the code above, it appears that my browser is still attempting to evaluate code within my requires* functions even before those functions have been called. Is this correct? Or am I misunderstanding what's wrong with my code?

  • Databases in Source Control

    - by Grant Fritchey
    I've been working as a database professional for quite a long time. But originally, I was a developer. And I loved being a developer. There was this constant feedback loop of a job well done: your code compiled and it ran. Every time this happened successfully, you'd check it into source control. These days you have to add another step; the code passed all the tests (unit, line, regression, QA, whatever), then into source control it goes. As a matter of fact, when I first made the jump from developer to DBA/database developer/database professional, source control was the one thing I couldn't believe was missing from the DBA toolbox. Come to find out, source control was only the beginning of what was missing from your standard DBA's set of skills.

    Don't get me wrong. I'm not disrespecting the DBA. They're focused where they should be, on your production data. But there has to be a method for developing applications that include databases, and the database side of that development and deployment process has long been lacking. This lack of development and deployment methodologies is a part of what has given rise to some of the wackier implementations of Object Relational Mapping tools, the NoSQL movement, and some of the other foul cursing that is directed towards databases, DBAs, and database development by application developers. Some of that is well earned. A lot isn't. But it is a fact that database professionals, in general, do not have as sophisticated a model for managing development and deployment as application developers do.

    We could charge out and start trying to come up with our own standards and methods. I'm sure people have done exactly that. However, I'm lazy, and not terribly bright. Rather than try to invent a whole new process, I'm going to look to my developer roots and choose instead to emulate the developers. They're sitting over there across the hall from me working with SCRUM/Agile/Waterfall/Object Driven/Feature Driven/Test Driven development processes that they've been polishing for years. What if I just started working on database development the same way they work on code development? Win!

    Ah, but now I have to have a mechanism for treating my database like application code. First, I need a method for getting it into source control. That's where Red Gate's SQL Source Control comes into the picture. SQL Source Control works within SQL Server Management Studio to connect your database objects up to the source control system of your choice. Right out of the box SQL Source Control can link to TFS, SVN or Vault. With a little work you can connect it to Git or just about any other source control system. With the ability to get my database into source control, a lot of possibilities for more direct integration with the application development teams open up.

  • Windows Azure Use Case: Infrastructure Limits

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

    Description: Physical hardware components take up room, use electricity, create heat and therefore need cooling, and require wiring and special storage units. All of these requirements cost money, whether you rent space at a data center or build out a local facility. In some cases, this can be a catalyst for evaluating options to remove this infrastructure requirement entirely by moving to a distributed computing environment.

    Implementation: There are three main options for moving to a distributed computing environment.

    Infrastructure as a Service (IaaS): The first option is simply to virtualize the current hardware and move the VMs to a provider. You can do this with Microsoft's Hyper-V product or other software, build the systems and host them locally on fewer physical machines. This is a good option for canned applications (where you have to type setup.exe) but not as useful for custom applications, as you still have to license and patch those servers, and there are hard limits on the VM sizes.

    Software as a Service (SaaS): If there is already software available that does what you need, it may make sense to simply purchase not only the software license but the use of it on the vendor's servers. Microsoft's Exchange Online is an example of simply using an offering from a vendor on their servers. If you do not need a great deal of customization, have no interest in owning or extending the source code, and need to implement a solution quickly, this is a good choice.

    Platform as a Service (PaaS): If you do need to write software for your environment, your next choice is a Platform as a Service such as Windows Azure. In this case you no longer manage physical or even virtual servers. You start at the code and data level of control and responsibility, and your focus is more on the design and maintenance of the application itself. In this case you own the source code and can extend or change it as you see fit. An interesting side benefit of using Windows Azure as a PaaS is that the Application Fabric component allows a hybrid approach, which gives you a basis to allow on-premise applications to leverage distributed computing paradigms.

    No one solution fits every situation. It's common to see organizations pick a mixture of on-premise, IaaS, SaaS and PaaS components. In fact, that's a great advantage to this form of computing: choice.

    References:
    5 Enterprise steps for adopting a Platform as a Service: http://blogs.msdn.com/b/davidmcg/archive/2010/12/02/5-enterprise-steps-for-adopting-a-platform-as-a-service.aspx
    Application Patterns for the Cloud: http://blogs.msdn.com/b/kashif/archive/2010/08/07/application-patterns-for-the-cloud.aspx

  • Open Your Windows - 4/May/10

    - by Claudia Costa
    This FREE technical briefing is designed to show ISVs/SIs how to leverage the Oracle 11g technology, especially in the small to medium business. The briefing focuses on Oracle's 11g platform on Windows & Linux and gives a very comprehensive technical competitive overview of the products offered by Microsoft. The technical part covers integration and migration aspects of various Microsoft products such as SQL Server, .NET and Active Directory. Register today!

    With Oracle 11g, Oracle introduced various products (Application Express, Oracle Express Edition, ADF, BPEL) and licenses (Oracle Database Standard Edition One, Application Server Java Edition) specifically targeting the small to medium business market, and to show that Oracle Database and Application Server are as easy to use and cost less than Microsoft products in terms of purchase price and ongoing support & maintenance, and even much less when considering the Linux platform. For those ISVs that have already adopted the Microsoft .NET framework and use SQL Server as their database layer, we will demonstrate that Oracle 11g Database is as easy as SQL Server to install, configure, and manage. In addition, their .NET application development platform does not require dramatic changes to enable it to run on the Oracle database. Besides the standard functionality, Oracle has enhanced some of the advanced features (such as interMedia, security, REF CURSOR, etc.) tightly integrated with the .NET framework, so that .NET developers can take full advantage of the Oracle technology without worrying about or programming the complex components.

    Objectives
    - Understand Oracle's strategy and commitment on Windows & Linux
    - Learn how to migrate from SQL Server to Oracle on Windows AND Linux
    - Understand that Oracle 11g is easy to manage and to install on Windows & Linux
    - Learn how to integrate Windows products with the Oracle 11g platform
    - Learn how Oracle products interoperate & integrate with Microsoft .NET
    - Learn how an Oracle database on Windows can easily be ported to a lower cost Linux database platform and interoperate with a .NET application

    Prerequisites
    General operating system expertise, including MS Windows and Linux.

    Agenda
    - Welcome and intro
    - Oracle at a glance
    - Strategy: small to medium business, Microsoft and Linux
    - Oracle 11g architecture on Linux & Windows
    - Managing Oracle 11g on Linux & Windows
    - Application development
    - Migration
    - Value propositions for ISVs & wrap-up

    For more information or to register, contact: [email protected].

  • PowerShell PowerPack Download

    - by BuckWoody
    I read Jeffery Hicks' article in this month's Redmond Magazine on a new add-in for Windows PowerShell 2.0. It's called the PowerShell Pack and it has some great new features that I plan to put into place on my production systems as soon as I've finished learning and testing them. You can download the pack here if you have PowerShell 2.0. I'm having a lot of fun with it, and I'll blog about what I'm learning here in the near future, but you should check it out. The only issue I have with it right now is that you have to load a module and then use get-help to find out what it does, because I haven't found a lot of other documentation so far.

    The most interesting modules for me are the ones that can run a command elevated (in PSUserTools), the task scheduling commands (in TaskScheduler) and the file system checks and tools (in FileSystem). There's also a way to create simple Graphical User Interface panels (in ). I plan to string all these together to install a management set of tools on my SQL Server Express instances, giving the user "task buttons" to back up or restore a database, add or delete users, and so on. Yes, I'll be careful, and yes, I'll make sure the user is allowed to do that. For now, I'm testing the download, but I thought I would share what I'm up to. If you have PowerShell 2.0 and you download the pack, let me know how you use it.

    Script Disclaimer, for people who need to be told this sort of thing: Never trust any script, including those that you find here, until you understand exactly what it does and how it will act on your systems. Always check the script on a test system or Virtual Machine, not a production system. Yes, there are always multiple ways to do things, and this script may not work in every situation, for everything. It's just a script, people. All scripts on this site are performed by a professional stunt driver on a closed course. Your mileage may vary. Void where prohibited. Offer good for a limited time only. Keep out of reach of small children. Do not operate heavy machinery while using this script. If you experience blurry vision, indigestion or diarrhea during the operation of this script, see a physician immediately.
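
    Since the post says discovery is mostly via get-help, here is a minimal sketch of that workflow (the module name is one the post mentions; the discovery itself uses only core cmdlets):

        # List the commands a PowerShell Pack module adds, then read help on each.
        Import-Module TaskScheduler
        Get-Command -Module TaskScheduler
        Get-Command -Module TaskScheduler | ForEach-Object { Get-Help $_.Name }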
