Search Results

Search found 23613 results on 945 pages for 'query parameters'.


  • wget not completely processing the http call

    - by user578458
    Here is a wget command that runs a report from an HTML/PHP report suite hosted by a third party - we don't have control over the PHP or the HTML page:

    wget --no-check-certificate --http-user=/myacc --http-password=mypass -O /tmp/myoutput.csv "https://myserver.mydomain.com/mymodule.php?myrepcode=9999&action=exportcsv&admin=myappuserid&password=myappuserpass&startdate=2011-01-16&enddate=2011-01-16&reportby=mypreferredview"

    All the elements are working perfectly:
    - --http-user / --http-password: the credentials a browser would normally prompt for
    - -O /tmp/myoutput.csv: the output file of interest
    - the URL: the file is generated on the fly from the query parameters
    - myrepcode=9999: a reference to the report in question
    - action=exportcsv: handled internally by the function
    - admin=myappuserid and password=myappuserpass: the third party uses SSL to reach the site, then an internal username and password stored in a database to access the site's functions
    - startdate=2011-01-16 and enddate=2011-01-16: parameters specific to report 9999
    - reportby=mypreferredview: an option in the report that selects different levels of detail or aggregation

    The problem is that the reportby parameter is a radio-button selection with 5 choices (sure enough, the default is the highest level of aggregation; I want the last one, which is the most detailed). The HTML for the reportby options lists: View by - The Default, My Least Preferred, My Second Least Preferred, My Third Least Preferred, My Preferred. No matter which of the reportby items I put in the wget statement, the default is always executed.

    Questions: 1) Has anyone come across this notation in HTML (id=inputname[inputelement])? I spoke to a senior web developer and he has never seen this notation for inputs, and w3schools does not appear familiar with it either, based on an extensive search. 2) Can a wget command select a non-default radio item when executing the command? This will probably be met with a "use cURL" response, but the wget approach works very well in the limited environment I am operating in, particularly as I need to download around 10,000 such items. Thanks ahead of any response.
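    A radio group is submitted like any other form field: the selected input's name attribute becomes the query-string key and its value attribute becomes the value. The bracketed notation is simply a PHP-style array field name (name="reportby[something]"), so the request needs that exact name with the brackets percent-encoded. The sketch below is an assumption-laden illustration: "reportby[view]" and "detail" are hypothetical placeholders for whatever name/value pair the desired radio button actually has in the page source.

    ```
    # Sketch only: substitute the real name="..." and value="..." of the preferred
    # radio button, percent-encoding [ and ] as %5B and %5D.
    wget --no-check-certificate --http-user=/myacc --http-password=mypass \
      -O /tmp/myoutput.csv \
      "https://myserver.mydomain.com/mymodule.php?myrepcode=9999&action=exportcsv&admin=myappuserid&password=myappuserpass&startdate=2011-01-16&enddate=2011-01-16&reportby%5Bview%5D=detail"
    ```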

    Read the article

  • C++, inject additional data in a method

    - by justik
    I am adding a new module to a large library. All methods here are implemented as static. Let me briefly describe the simplified model:

    typedef std::vector<double> TData;

    double test ( const TData &arg ) { return arg ( 0 ) * sin ( arg ( 1 ) + ...; }

    double ( * p_test ) ( const TData &arg ) = &test;

    class A
    {
        public:
            static T f1 (TData &input)
            {
                .... //some computations
                B::f2 (p_test);
            }
    };

    Inside f1() some computations are performed and a static method B::f2 is called. The f2 method is implemented by another author and represents some simulation algorithm (the example here is simplified).

    class B
    {
        public:
            static double f2 (double ( * p_test ) ( const TData &arg ) )
            {
                //difficult algorithm calling p_test many times
                double res = p_test(arg);
            }
    };

    The f2 method takes a pointer to some weight function (here p_test). But in my case some additional parameters computed in f1 are required by the test() method:

    double test ( const TData &arg, const TData &arg2, char *arg3.... ) { }

    How can I inject these parameters into test() (and so into f2) while avoiding changes to the source code of the f2 method (that is not trivial), a redesign of the library, and dirty hacks :-) ? The simplest step is to overload f2:

    static double f2 (double ( * p_test ) ( const TData &arg ), const TData &arg2, char *arg3.... )

    But what to do after that? Consider that the methods are static, so there will be problems with objects. Thanks for your help.
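    Because a plain function pointer cannot carry extra state, one common workaround, sketched below under the assumption that B::f2 really cannot change, is to stash the additional arguments in static (or thread-local) context and hand f2 a small adapter whose signature matches the expected pointer type. All names besides TData and the general shape of test/B::f2 are invented for the illustration, and the approach is not thread-safe as written.

    ```cpp
    #include <cmath>
    #include <string>
    #include <vector>

    typedef std::vector<double> TData;

    namespace test_context {
        TData arg2;            // extra parameters computed in f1
        std::string arg3;      // another extra parameter
    }

    // Full-signature weight function with the additional parameters.
    double test_full(const TData &arg, const TData &arg2, const std::string &arg3) {
        return arg[0] * std::sin(arg[1]) + arg2[0];   // placeholder computation
    }

    // Adapter with exactly the signature B::f2 expects.
    double test_adapter(const TData &arg) {
        return test_full(arg, test_context::arg2, test_context::arg3);
    }

    // Inside A::f1, before calling the simulation:
    //     test_context::arg2 = computed_values;
    //     test_context::arg3 = "label";
    //     B::f2(&test_adapter);
    ```

    If the library could be touched at all, changing f2 to accept std::function<double(const TData&)> (or a template parameter) would allow binding the extra arguments with a lambda instead of global state.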

    Read the article

  • SQL Server find and replace in TEXT field

    - by incubushead
    I have a database in SQL Server 2005 that was brought up from SQL Server 2000 and is still using TEXT fields instead of varchar(max). I need to find and replace a string of characters in the text field, but none of the examples I have found look like they would work for me. The UPDATETEXT command requires that the two parameters insert_offset and delete_length be set explicitly, but the string I am searching for could show up at any point in the text, or even at several points in the same cell. My understanding of those two parameters is that they assume the string is always in the same place, so that insert_offset is the number of characters into the text at which UPDATETEXT starts replacing. Example: I need to find &lt;u&gt; and replace it with <u>. Text field example: *Everyone in the room was <b>&lt;u&gt;tired&lt;/u&gt;.</b><br>Then they woke <b>&lt;u&gt;up&lt;/u&gt;. Can anyone help me out with this? Thanks!
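    Since SQL Server 2005 can cast TEXT to VARCHAR(MAX) and back, one alternative to offset arithmetic with UPDATETEXT is a set-based REPLACE. The sketch below is illustrative only; MyTable and MyTextColumn are placeholder names, and it assumes the values fit in VARCHAR(MAX) (2 GB), which TEXT values do.

    ```sql
    -- Sketch: replace every occurrence of the literal '&lt;u&gt;' with '<u>'.
    UPDATE MyTable
    SET    MyTextColumn = CAST(
               REPLACE(CAST(MyTextColumn AS VARCHAR(MAX)), '&lt;u&gt;', '<u>')
               AS TEXT)
    WHERE  CAST(MyTextColumn AS VARCHAR(MAX)) LIKE '%&lt;u&gt;%';
    ```

    The same pattern handles the closing tag ('&lt;/u&gt;' to '</u>') in a second statement or a nested REPLACE.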

    Read the article

  • MVC route doesn't work

    - by bill
    Hi all, I have a bunch of views in a Default folder that represent single "static" pages. Everything works as advertised, except that I tried adding a new page yesterday, using the exact same routing syntax, and cannot for the life of me get it to work. Here is an example of a working route:

    routes.MapRoute(
        "OurProgram",  // Route name
        "Our_Program", // URL with parameters
        new { controller = "Default", action = "OurProgram" }
    );

    The file name is OurProgram, and hitting http://localhost/Our_Program/ opens the correct view, which resides in the Views/Default folder. So I added another view into this folder, Views/Default/BuyNow.aspx, and added the route:

    routes.MapRoute(
        "BuyNow",  // Route name
        "Buy_Now", // URL with parameters
        new { controller = "Default", action = "BuyNow" }
    );

    And this one doesn't open. I have tried the route debugger (http://haacked.com/archive/2008/03/13/url-routing-debugger.aspx) and it correctly identifies the route. I am at a loss. I have tried re-creating the view. I am using ASP.NET MVC 2.0 and VS 2010. Any help is appreciated!
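    Routes are matched top to bottom, and the post doesn't show where the new route sits in RegisterRoutes, so one possibility (an assumption, not confirmed by the post) is that "Buy_Now" is registered after a catch-all route, or that DefaultController lacks a BuyNow action returning the view. A sketch of the ordering that works:

    ```csharp
    // Specific, literal routes first; the generic catch-all last.
    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

        routes.MapRoute("BuyNow", "Buy_Now",
            new { controller = "Default", action = "BuyNow" });

        routes.MapRoute("OurProgram", "Our_Program",
            new { controller = "Default", action = "OurProgram" });

        routes.MapRoute("Default", "{controller}/{action}/{id}",
            new { controller = "Home", action = "Index", id = UrlParameter.Optional });
    }

    // DefaultController must also expose the matching action:
    //     public ActionResult BuyNow() { return View(); }
    ```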

    Read the article

  • What is a good "Error Checking" Pattern (Java)?

    - by Jack
    I'll explain what I mean by input error checking. Say you have a function doSomething(x). If the function completes successfully, doSomething does its work and returns nothing. However, if there are errors I'd like to be notified. That is what I mean by error checking. I'm looking for, in general, the best way to check for errors. I've thought of the following solutions, each with a potential problem.
    - Flag error checking: if doSomething(x) completes successfully, return null; otherwise return a boolean or an error string. Problem: side effects.
    - Throwing an exception: throw an exception if doSomething(x) encounters an error. Problem: if you are performing error checking for parameters only, throwing an IllegalArgumentException seems inappropriate.
    - Validating input prior to the function call: if the error checking is only meant for the parameters of the function, you can call a validator function before calling doSomething(x). Problem: what if a client of the class forgets to call the validator function before calling doSomething(x)?
    I often encounter this problem, and any help or a point in the right direction would be much appreciated.
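    One widely used pattern, sketched here rather than taken from the post, is to combine the last two options: doSomething validates its own parameters (so a forgetful caller still gets a clear failure) and throws an unchecked exception, while an optional public check lets careful callers test beforehand. Class and method names below are invented for the example.

    ```java
    public final class Worker {

        /** Performs the work; always validates, so callers cannot skip the check. */
        public void doSomething(int x) {
            if (!isValidInput(x)) {
                throw new IllegalArgumentException("x must be non-negative, was " + x);
            }
            // ... actual work ...
        }

        /** Optional pre-check for callers that prefer to avoid the exception. */
        public static boolean isValidInput(int x) {
            return x >= 0;
        }
    }
    ```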

    Read the article

  • Is it possible in SQLAlchemy to filter by a database function or stored procedure?

    - by Rico Suave
    We're using SQLAlchemy in a project with a legacy database. The database has functions/stored procedures. In the past we used raw SQL and could use these functions as filters in our queries; I would like to do the same in SQLAlchemy queries if possible. I have read about @hybrid_property, but some of these functions need one or more parameters. For example, I have a User model joined to a bunch of historical records. These historical records have a date and a debit and credit field, so we can look up the balance of a user at a specific point in time by doing SUM(credit) - SUM(debit) up until the given date. We have a database function for that called dbo.Balance(user_id, date_time), which I can use to check the balance of a user at a given point in time. I would like to use this as a criterion in a query, to select only users that have a negative balance at a specific date/time:

    selection = users.filter(coalesce(Users.status, 0) == 1,
                             coalesce(Users.no_reminders, 0) == 0,
                             dbo.pplBalance(Users.user_id, datetime.datetime.now()) < -0.01).all()

    This is of course a non-working example, just so you get the gist of what I'd like to do. The solution looks to be hybrid properties, but as I mentioned above, those only work without parameters (as they are properties, not methods). Any suggestions on how to implement something like this (if it's even possible) are welcome. Thanks.
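    SQLAlchemy's generic func construct can emit arbitrary, even schema-qualified, SQL functions, so a database function with parameters can be used directly as a filter criterion without a hybrid property. The sketch below is not verified against the original schema; it assumes a session and the Users model from the existing code, and that the function is reachable as dbo.Balance.

    ```python
    import datetime

    from sqlalchemy import func

    selection = (
        session.query(Users)
        .filter(
            func.coalesce(Users.status, 0) == 1,
            func.coalesce(Users.no_reminders, 0) == 0,
            # chained attribute access renders as: dbo.Balance(users.user_id, :param)
            func.dbo.Balance(Users.user_id, datetime.datetime.now()) < -0.01,
        )
        .all()
    )
    ```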

    Read the article

  • How to pass a value to a controller?

    - by rajesh
    When I try to pass a URL value to a controller action, the action is not getting the required value. I'm sending the value like this:

    function value(url,id)
    {
        alert(url);
        document.getElementById('rating').innerHTML=id;
        var params = 'artist='+id;
        alert(params);
        // var newurl='http://localhost/songs_full/public/eslresult/ratesong/userid/1/id/27';
        var myAjax = new Ajax.Request(newurl,{method: 'post',parameters:params,onComplete: loadResponse});
        //var myAjax = new Ajax.Request(url,{method:'POST',parameters:params,onComplete: load});
        //alert(myAjax);
    }
    function load(http)
    {
        alert('success');
    }

    and in the controller I have:

    public function ratesongAction()
    {
        $user=$_POST['rating'];
        echo $user;
        $post= $this->getRequest()->getPost();
        //echo $post;
        $ratesongid= $this->_getParam('id');
    }

    But I'm still not getting the result. I am using Zend Framework.
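    One thing to note, shown as a sketch rather than a confirmed fix: the Ajax request above posts a parameter named artist, while the action reads $_POST['rating'], so the two sides never line up. In Zend Framework 1 the posted value can be read through the request object under the same key the JavaScript sends (parameter names below follow the code in the question).

    ```php
    // Inside the controller class:
    public function ratesongAction()
    {
        $request    = $this->getRequest();
        $artist     = $request->getPost('artist');   // key actually sent by Ajax.Request
        $ratesongid = $this->_getParam('id');        // from the /id/27 URL segment

        echo $artist . ' / ' . $ratesongid;
    }
    ```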

    Read the article

  • Java servlet's request parameter's name set to entire json object

    - by Geren White
    I'm sending a JSON object through Ajax to a Java servlet. The JSON object is key-value, with three keys that point to arrays and one key that points to a single string. I build it in JavaScript like this:

    var jsonObject = {"arrayOne": arrayOne, "arrayTwo": arrayTwo, "arrayThree": arrThree, "string": stringVar};

    I then send it to the servlet using Ajax as follows:

    httpRequest.open('POST', url, true);
    httpRequest.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
    httpRequest.setRequestHeader("Connection", "close");
    var jsonString = jsonObject.toJSONString();
    httpRequest.send(jsonString);

    This sends the string to my servlet, but it isn't arriving the way I expect. The whole JSON string gets set as the name of one of the request's parameters: in the servlet, request.getParameterNames() returns an enumeration in which one entry's key is the entire object's contents. I may be mistaken, but my thought was that each key should become a different parameter name, so I should have 4 parameters: arrayOne, arrayTwo, arrayThree, and string. Am I doing something wrong, or is my thinking off here? Any help is appreciated. Thanks
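    Because the request body is raw JSON but the Content-Type claims application/x-www-form-urlencoded, the container parses the body as name=value pairs and the whole string ends up as a single parameter name. Two usual fixes: send the JSON as a named field (httpRequest.send("payload=" + encodeURIComponent(jsonString)) and read request.getParameter("payload")), or read the raw body on the server as sketched below and hand it to whatever JSON parser you prefer. The class name here is illustrative, not from the original servlet.

    ```java
    import java.io.BufferedReader;
    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class JsonUploadServlet extends HttpServlet {
        @Override
        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            StringBuilder body = new StringBuilder();
            BufferedReader reader = req.getReader();
            String line;
            while ((line = reader.readLine()) != null) {
                body.append(line);
            }
            String json = body.toString();   // hand this to your JSON library of choice
        }
    }
    ```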

    Read the article

  • Magic Method __set() on an Instantiated Object

    - by streetparade
    OK, I have a problem; sorry if I can't explain it clearly, but the code speaks for itself. I have a class which generates objects from a given class name; say the class is Modules:

    public function name($name)
    {
        $this->includeModule($name);
        try {
            $module = new ReflectionClass($name);
            $instance = $module->isInstantiable() ? $module->newInstance() : "Err";
            $this->addDelegate($instance);
        } catch(Exception $e) {
            Modules::Name("Logger")->log($e->getMessage());
        }
        return $this;
    }

    The addDelegate method:

    protected function addDelegate($delegate)
    {
        $this->aDelegates[] = $delegate;
    }

    The __call method:

    public function __call($methodName, $parameters)
    {
        $delegated = false;
        foreach ($this->aDelegates as $delegate) {
            if(class_exists(get_class($delegate))) {
                if(method_exists($delegate,$methodName)) {
                    $method = new ReflectionMethod(get_class($delegate), $methodName);
                    $function = array($delegate, $methodName);
                    return call_user_func_array($function, $parameters);
                }
            }
        }
    }

    The __get method:

    public function __get($property)
    {
        foreach($this->aDelegates as $delegate) {
            if ($delegate->$property !== false) {
                return $delegate->$property;
            }
        }
    }

    All this works fine except the __set function:

    public function __set($property,$value)
    {
        //print_r($this->aDelegates);
        foreach($this->aDelegates as $k=>$delegate) {
            //print_r($k);
            //print_r($delegate);
            if (property_exists($delegate, $property)) {
                $delegate->$property = $value;
            }
        }
        //$this->addDelegate($delegate);
        print_r($this->aDelegates);
    }

    class tester
    {
        public function __set($name,$value)
        {
            self::$module->name(self::$name)->__set($name,$value);
        }
    }

    Module::test("logger")->log("test"); // this logs, it works
    echo Module::test("logger")->path; // prints /home/bla/test/ which is also correct

    But I can't set any value on the logger class like this:

    Module::tester("logger")->path = "/home/bla/test/log/";

    The path property of the logger class is public, so it's not a problem of protected or private property access. How can I solve this issue? I hope I could explain my problem clearly.

    Read the article

  • Is there a maximum number of input controls that can be used on an HTML form?

    - by Rich
    I have an ambitious requirement for an asp.net 2.0 web page that contains a table (gridview), and each row in the grid contains 6 select (dropdown) controls for data entry. The number of rows that will be displayed is dependent upon the user's search parameters, which are specified in another area of the page. Unfortunately, with the default (and even basic) search parameters specified, the grid could contain several hundred rows. I've noticed that the browser, in this case IE8, starts behaving rather erratically once I reach a large number of rows -- no documented evidence for the number of rows where this begins to be a problem. For example, trying to view the source of the page results in a message from IE stating that there was a problem with the page that forced the browser to reload it, and I never get the source. Obviously the page loads and renders rather slowly also. I know that my solution is probably going to involve paging the gridview such that it only displays 20 or so rows per page, and I'll have to write code to handle the saving of changes in the dropdown values when the user changes pages. I can probably turn off viewstate on the gridview also. However, the question I really want to pose is this -- has anyone seen a documented rule indicating the maximum number of input controls that an HTML browser form is supposed to be able to contain? I could not find anything on the Internet after doing a search, and I suspect the answer may be whatever the browser can handle based on the machine configuration it is running on. Any rules of thumb you use? Thanks for any suggestions. Rich

    Read the article

  • JQuery/AJAX: Loading external DIVs using dynamic content

    - by ticallian
    I need to create a page that will load divs from an external page using jQuery and Ajax. I have come across a few good tutorials, but they are all based on static content; my links and content are generated by PHP. The main tutorial I am basing my code on is http://yensdesign.com/2008/12/how-to-load-content-via-ajax-in-jquery/. The exact behaviour I need is as follows: the main page contains a permanent div listing some links, each carrying a parameter. On click, the link passes its parameter to the external page; the external page filters a recordset against the parameter and populates a div with the results. The new div contains a new set of links with new parameters, and is loaded underneath the main page's first div. The process can then be repeated, creating a chain of divs under each other. The last div in the chain then directs to a new page collating all the previously used query strings. I can handle all of the PHP work of populating the divs on the main and external pages; it's the jQuery and Ajax part I'm struggling with.

    $(document).ready(function(){
        var sections = $('a[id^=link_]'); // Link that passes parameter to external page
        var content = $('div[id^=content_]'); // Where external div is loaded to
        sections.click(function(){
            //load selected section
            switch(this.id){
                case "div01":
                    content.load("external.php?param=1 #section_div01");
                    break;
                case "div02":
                    content.load("external.php?param=2 #section_div02");
                    break;
            }
        });
    });

    The problem I am having is getting jQuery to pass the dynamically generated parameters to the external page and then retrieve the new div. I can currently only do this with static links, as above.
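    A sketch of one way to make the parameter dynamic rather than hard-coded, not taken from the tutorial: derive the parameter from the clicked link (here from its id suffix, though a data attribute works equally well) and delegate the click handler so links inside freshly loaded divs stay clickable. The delegated form below needs jQuery 1.7+; older versions would use .live().

    ```javascript
    $(document).ready(function () {
        $(document).on('click', 'a[id^=link_]', function (e) {
            e.preventDefault();
            var param = this.id.replace('link_', '');   // e.g. "link_7" -> "7"
            $('#content_' + param).load(
                'external.php?param=' + encodeURIComponent(param) + ' #section_div' + param
            );
        });
    });
    ```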

    Read the article

  • Code producing System.NullReferenceException error for Membership.GetUser(). This is VB.Net (ASP.Net 4)

    - by Derrek
    I have a Default.aspx page that is not static; I have added functionality with a DataList and SqlDataSources. When a user logs in, he or she sees items like saved workouts, saved equipment, total replies, etc. This is based on getting the UserID of the currently logged-in user. This works great when the user is logged in. However, I do not want to force a user to log in just to view the Default page, because it has functionality that does not require login. When a user is not logged in, I of course receive a System.NullReferenceException. I understand the error, but I do not know how to code around it, and that is where I need help. I will admit I am more designer than developer, but I know the exception is caused by not setting a value in my code when a user is not logged in. For a week I have made unsuccessful attempts at writing that code. Both sets of code below compile for VB.NET / ASP.NET 4 / Visual Studio 2010 without errors, yet I still get the NullReferenceException when not logged in. If you can help, please insert your code into mine or write it out. Please don't just tell me where to go to find an answer; I have done that for seven straight days. I appreciate your help.

    Partial Class _Default
        Inherits System.Web.UI.Page

        Protected Sub SqlDataSource4_Selecting(ByVal sender As Object, ByVal e As SqlDataSourceCommandEventArgs) Handles SqlDataSource4.Selecting
            Dim MemUser As MembershipUser
            MemUser = Membership.GetUser()
            If Not MemUser Is DBNull.Value Then
                UserID.Text = MemUser.ProviderUserKey.ToString()
                e.Command.Parameters("@UserId").Value = MemUser.ProviderUserKey.ToString()
            End If
        End Sub

    -------------------------------------ORIGINAL CODE-------------------------------

    Partial Class _Default
        Inherits System.Web.UI.Page

        Protected Sub SqlDataSource4_Selecting(ByVal sender As Object, ByVal e As SqlDataSourceCommandEventArgs) Handles SqlDataSource4.Selecting
            Dim MemUser As MembershipUser
            MemUser = Membership.GetUser()
            UserID.Text = MemUser.ProviderUserKey.ToString()
            e.Command.Parameters("@UserId").Value = MemUser.ProviderUserKey.ToString()
        End Sub
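    Membership.GetUser() returns Nothing (not DBNull.Value) when no user is logged in, so the guard above never triggers. A sketch of the usual fix, keeping the handler signature from the post and cancelling the user-specific query for anonymous visitors; whether cancelling or substituting a default value is better depends on how the rest of the page uses the data source:

    ```vb
    Protected Sub SqlDataSource4_Selecting(ByVal sender As Object, ByVal e As SqlDataSourceCommandEventArgs) Handles SqlDataSource4.Selecting
        Dim MemUser As MembershipUser = Membership.GetUser()

        If MemUser IsNot Nothing Then
            UserID.Text = MemUser.ProviderUserKey.ToString()
            e.Command.Parameters("@UserId").Value = MemUser.ProviderUserKey.ToString()
        Else
            e.Cancel = True   ' anonymous visitor: skip the user-specific query
        End If
    End Sub
    ```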

    Read the article

  • Sinatra: rendering snippets (partials)

    - by Michael
    I'm following along with an O'Reilly book that's building a twitter clone with Sinatra. As Sinatra doesn't have 'partials' (in the way that Rails does), the author creates his own 'snippets' that work like partials. I understand that this is fairly common in Sinatra. Anyways, inside one of his snippets (see the first one below) he calls another snippet text_limiter_js (which is copied below). Text_limiter_js is basically a javascript function. If you look at the javascript function in text_limiter_js, you'll notice that it takes two parameters. I don't understand where these parameters are coming from because they're not getting passed in when text_limiter_js is rendered inside the other snippet. I'm not sure if I've given enough information/code for someone to help me understand this, but if you can, please explain. =snippet :'/snippets/text_limiter_js' %h2.comic What are you doing? %form{:method => 'post', :action => '/update'} %textarea.update.span-15#update{:name => 'status', :rows => 2, :onKeyDown => "text_limiter($('#update'), $('#counter'))"} .span-6 %span#counter 140 characters left .prepend-12 %input#button{:type => 'submit', :value => 'update'} text_limiter_js.haml :javascript function text_limiter(field,counter_field) { limit = 139; if (field.val().length > limit) field.val(field.val().substring(0, limit)); else counter_field.text(limit - field.val().length); }

    Read the article

  • Camera crashes in Android 4.1 (API level 16)

    - by Lincy
    My application has camera functionality. It works fine on every other Android version, but when I tested it on a Galaxy S3 it crashes. The error points to this line:

    Parameters parameters = mCamera.getParameters();

    Could someone provide a solution for this? The log is below:

    ?:??: W/?(?): java.lang.NullPointerException
    ?:??: W/?(?): at com.stpl.snapshun.camera.CameraActivity.surfaceChanged(CameraActivity.java:313)
    ?:??: W/?(?): at android.view.SurfaceView.updateWindow(SurfaceView.java:554)
    ?:??: W/?(?): at android.view.SurfaceView.access$000(SurfaceView.java:81)
    ?:??: W/?(?): at android.view.SurfaceView$3.onPreDraw(SurfaceView.java:169)
    ?:??: W/?(?): at android.view.ViewTreeObserver.dispatchOnPreDraw(ViewTreeObserver.java:671)
    ?:??: W/?(?): at android.view.ViewRootImpl.performTraversals(ViewRootImpl.java:1818)
    ?:??: W/?(?): at android.view.ViewRootImpl.doTraversal(ViewRootImpl.java:998)
    ?:??: W/?(?): at android.view.ViewRootImpl$TraversalRunnable.run(ViewRootImpl.java:4212)
    ?:??: W/?(?): at android.view.Choreographer$CallbackRecord.run(Choreographer.java:725)
    ?:??: W/?(?): at android.view.Choreographer.doCallbacks(Choreographer.java:555)
    ?:??: W/?(?): at android.view.Choreographer.doFrame(Choreographer.java:525)
    ?:??: W/?(?): at android.view.Choreographer$FrameDisplayEventReceiver.run(Choreographer.java:711)
    ?:??: W/?(?): at android.os.Handler.handleCallback(Handler.java:615)
    ?:??: W/?(?): at android.os.Handler.dispatchMessage(Handler.java:92)
    ?:??: W/?(?): at android.os.Looper.loop(Looper.java:137)
    ?:??: W/?(?): at android.app.ActivityThread.main(ActivityThread.java:4745)
    ?:??: W/?(?): at java.lang.reflect.Method.invokeNative(Native Method)
    ?:??: W/?(?): at java.lang.reflect.Method.invoke(Method.java:511)
    ?:??: W/?(?): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:786)
    ?:??: W/?(?): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:553)
    ?:??: W/?(?): at dalvik.system.NativeStart.main(Native Method)

    Thanks in advance
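    The NullPointerException in surfaceChanged suggests mCamera itself is null at that point (Camera.open() failed earlier or the camera was already released), rather than getParameters() misbehaving on 4.1. A defensive sketch, with field and log-tag names assumed since the original activity isn't shown:

    ```java
    // Inside surfaceChanged(), before touching parameters:
    if (mCamera == null) {
        try {
            mCamera = android.hardware.Camera.open();   // may throw if the camera is in use
        } catch (RuntimeException e) {
            android.util.Log.e("CameraActivity", "Camera not available", e);
            return;                                     // bail out instead of crashing
        }
    }
    if (mCamera != null) {
        android.hardware.Camera.Parameters parameters = mCamera.getParameters();
        // ... adjust preview size, etc. ...
        mCamera.setParameters(parameters);
    }
    ```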

    Read the article

  • (Java) Is there a type of object that can handle anything from primitives to arrays?

    - by Michael
    I'm pretty new to Java, so I'm hoping one of you knows how to do this. I'm having the user specify both the type and value of arguments, in an XML-like way, to be passed to methods that are external to my application. Example: java myAppsName externalJavaClass methodOfExternalClass [parameters]. Of course, to find the proper method we have to have the proper parameter types, since the method may be overloaded and that's the only way to tell the different versions apart. Parameters are currently formatted in this manner: (type)value(/type), e.g. (int)71(/int) or (string)This is my string that I'm passing as a parameter!(/string). I parse them, get the constructor for whatever type is indicated, then invoke that constructor via newInstance(<String value>), loading the new instance into an Object. This works fine and dandy, but as we all know, some methods take arrays, or even multi-dimensional arrays. I could handle the argument formatting like so: (array)(array)(int)0(/int)(int)1(/int)(/array)(array)(int)2(/int)(int)3(/int)(/array)(/array)... or perhaps even better... {{(int)0(/int)(int)1(/int)}{(int)2(/int)(int)3(/int)}}. The question is, how can this be implemented? Do I have to start wrapping everything in an Object[] array so I can handle primitives, etc. as argObj[0], but load an array as I normally would? (Unfortunately, I would have to make it an Object[][] array if I wanted to support two-dimensional arrays. This implementation wouldn't be very pretty.)
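    To the title question: java.lang.Object can already reference anything except a bare primitive, because primitives auto-box (int becomes Integer) and arrays of any dimension are objects themselves, so a single Object[] of parsed arguments is enough to feed Method.invoke(target, args). A small illustrative sketch (names invented, not from the original parser):

    ```java
    public class ArgumentDemo {
        public static void main(String[] args) {
            Object[] parsedArgs = {
                71,                                  // auto-boxed to Integer
                "This is my string parameter!",      // String
                new int[][] { {0, 1}, {2, 3} }       // two-dimensional array
            };

            for (Object arg : parsedArgs) {
                Class<?> type = arg.getClass();
                System.out.println(type.getName() + (type.isArray() ? "  (array)" : ""));
            }
        }
    }
    ```

    For overload resolution, the declared parameter classes (int.class vs Integer.class vs int[][].class) still have to be collected separately to pass to getMethod.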

    Read the article

  • Trying to overload + operator

    - by FrostyStraw
    I cannot for the life of me understand why this is not working. I am so confused. I have a class Person which has a data member age, and I just want to add two people so that it adds the ages. I don't know why this is so hard, but as I look for examples I feel like everyone does something different, and for some reason none of them work. Sometimes the examples have two parameters, sometimes only one; sometimes the parameters are references to the object, sometimes not; sometimes they return an int, sometimes a Person object. What is the most normal way to do it?

    class Person {
        public:
            int age;
            //std::string haircolor = "brown";
            //std::string ID = "23432598";
            Person(): age(19) {}
            Person operator+(Person&) { }
    };

    Person operator+(Person &obj1, Person &obj2){
        Person sum = obj1;
        sum += obj2;
        return sum;
    }

    I really feel like overloading the + operator should be the easiest thing in the world, except I don't know what I am doing. I don't know if I'm supposed to define the overload inside the class or outside, whether it makes a difference, or why, if I do it inside, it only allows one parameter. I just honestly don't get it.
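    A minimal sketch of the conventional pattern (not the original class as posted): define a member operator+= that mutates the left operand, then a non-member operator+ that takes its left argument by value and reuses +=. A member operator+ takes only one explicit parameter because the left operand is implicitly *this, which is why the in-class and free-function examples look different.

    ```cpp
    class Person {
    public:
        int age;

        Person() : age(19) {}

        Person& operator+=(const Person& other) {
            age += other.age;
            return *this;
        }
    };

    // Non-member: left operand copied by value so += can work on the copy.
    inline Person operator+(Person lhs, const Person& rhs) {
        lhs += rhs;
        return lhs;
    }

    // Usage: Person a, b;  Person c = a + b;  // c.age == 38
    ```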

    Read the article

  • Adding captions above images in WordPress

    - by jacob
    So I added some code to my CSS, and there are boxes that appear over every image that is attached to a post. I want to number the images and show the image number in the box (1...n). I have this in my functions.php:

    function count_images(){
        global $post;
        $thePostID = $post->ID;
        $parameters = array(
            'post_type' => 'attachment',
            'post_parent' => $thePostID,
            'post_mime_type' => 'image');
        $attachments = get_children($parameters);
        $content = count($attachments);
        return $content;
    }
    add_filter('the_content','count_images');

    function caption_image_callback($matches) {
        $c = count_images();
        for ($i=1; $i <= $c; $i++) {
            if (is_single()) {
                return ''.$i.' ';
            } else {
                return '';
            }
        }
    }

    function caption_image($post_body_content) {
        $post_body_content = preg_replace_callback("||","caption_image_callback",$post_body_content);
        return $post_body_content;
    }

    if ( current_user_can('edit_plugins') ) {
        add_filter('the_content', 'caption_image');
    }

    If I run only count_images, it shows the correct number of images attached to a post (let's say 15). But for some reason the number shown in the boxes over the images is always 1. I've seen this done on several blogs with just PHP, so there has to be a way (even if I have to change my whole code). PS: I had to leave spaces in some places, so some of the markup and the regex above appear empty.
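    The callback always returns during its first loop iteration, so every image gets "1". One way to get an incrementing number, shown purely as a sketch (the real regex and output markup were stripped from the post, so the ones below are placeholders): keep a static counter inside the callback instead of looping.

    ```php
    function caption_image_callback($matches) {
        static $image_number = 0;      // persists across matches within one content pass
        $image_number++;

        if (is_single()) {
            return '<span class="image-number">' . $image_number . '</span> ' . $matches[0];
        }
        return $matches[0];
    }

    function caption_image($post_body_content) {
        // placeholder pattern: match each <img> tag in the post body
        return preg_replace_callback('/<img[^>]+>/', 'caption_image_callback', $post_body_content);
    }
    add_filter('the_content', 'caption_image');
    ```

    On archive pages the static counter keeps counting across posts, so it would need resetting per post if that matters.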

    Read the article

  • Dynamic Image Caching with Java

    - by zteater
    I have a servlet with an API that delivers images from GET requests. The servlet creates a data file of CAD commands based on the parameters of the GET request. This data file is then delivered to an image parser, which creates an image on the file system. The servlet reads the image and returns the bytes on the response. All of the IO and the calling of the image parser program can be very taxing and images of around 80kb are rendering in 3-4000ms on a local system. There are roughly 20 parameters that make up the GET request. Each correlates to a different portion of the image. So, the combinations of possible images is extremely large. To alleviate the loading time, I plan to store BLOBs of rendered images in a database. If a GET request matches one previously executed, I will pull from cache. Else, I will render a new one. This does not fix "first-time" run, but will help "n+1 runs". Any other ideas on how I can improve performance?

    Read the article

  • How can I call these URLs in jQuery to display content on one page?

    - by Thorbis Website Design
    OK, I figured out the jQuery part but not the parameters. Can anyone help me work out the parameters for each URL string? This is the jQuery I figured out; also, would this work better than the answer below?

    $.get('adminajax.php', {'action':'getUsers'}, function(data){
        $('#users .users').html(data);
    });

    He sent me this in an email:

    You can specify a page by adding: p=[page #]
    You can specify a file and it will add a checkbox next to the user which will be checked if the user has permission to download: file=[file location]
    adminajax.php?action=createDirectory&directory=[new directory location]
    adminajax.php?action=setAvailability&user=[username]&file=[filelocation]&available=[true or false]

    I'm trying to get it to display in these HTML tags:

    <div id="files">
        <b>Files:</b>
        <ul class="files"></ul>
    </div>
    <div id="file_options">
        <b>Options:</b>
    </div>
    <div id="users">
        <b>Users:</b>
        <ul class="users"></ul>
    </div>
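    Each query-string fragment in the email maps directly onto the data object of a jQuery Ajax call. A sketch built only from the parameters quoted above (the server-side behaviour is assumed, not verified):

    ```javascript
    // List users, optionally for a given page and file.
    function loadUsers(page, file) {
        $.get('adminajax.php', { action: 'getUsers', p: page, file: file }, function (data) {
            $('#users .users').html(data);
        });
    }

    // Create a directory.
    function createDirectory(path) {
        $.get('adminajax.php', { action: 'createDirectory', directory: path });
    }

    // Grant or revoke a user's permission to download a file.
    function setAvailability(user, file, available) {
        $.get('adminajax.php', { action: 'setAvailability', user: user, file: file, available: available });
    }
    ```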

    Read the article

  • SQL Server Editions and Integration Services

    The SQL Server 2005 and SQL Server 2008 product family has quite a few editions now, so what does this mean for SQL Server Integration Services? Starting from the bottom we have the free edition known as Express, the entry-level Workgroup edition, and the new Web edition. None of these three include the full SSIS product, but they all include the SQL Server Import and Export Wizard, with access to basic data sources but nothing more, so for simple loading and extraction of data this should suffice. You will not be able to build packages though; this is just a one-shot deal aimed at using the wizard on an ad-hoc basis.

    To get the full power of Integration Services you need to start with Standard edition. This includes the BI Development Studio for building your own packages, a fully functional IDE integrated into Visual Studio (you get the full VS 2005/2008 IDE with the product). All core functions are available, but with a restricted set of transformations and tasks. The SQL Server 2005 Features Comparison and Features Supported by the Editions of SQL Server 2008 describe Standard edition as having basic transforms, compared to Enterprise which includes the advanced transforms. I think "basic" is a little harsh considering the power you get with Standard, but "advanced" covers the truly ground-breaking capabilities of data mining, text mining, and cleansing or fuzzy transforms. The power of performing these operations within your ETL pipeline should not be underestimated, but not all processes will require these capabilities, so it seems like a reasonable delineation. Thankfully there are no feature limitations or artificial governors within Standard compared to Enterprise. The same control flow and data flow engines underpin both editions, with the same configuration and deployment options, allowing you to work seamlessly between environments and editions if using the common components. In fact there are no governors at all in SSIS, so whilst the SQL database engine is limited to 4 CPUs in Standard edition, SSIS is limited only by the base operating system.

    The advanced transforms only available with Enterprise edition:
    - Data Mining Training Destination
    - Data Mining Query Component
    - Fuzzy Grouping
    - Fuzzy Lookup
    - Term Extraction
    - Term Lookup
    - Dimension Processing Destination
    - Partition Processing Destination

    The advanced tasks only available with Enterprise edition:
    - Data Mining Query Task

    So in summary, if you want SQL Server Integration Services, you need SQL Server Standard edition, and for the more advanced tasks and transforms you need SQL Server Enterprise edition. To recap, the answer to the often-asked question is no, SQL Server Integration Services is not available in the SQL Server Express or Workgroup editions.

    Read the article

  • Top 10 Reasons SQL Developer is Perfect for Oracle Beginners

    - by thatjeffsmith
    Learning new technologies can be daunting. If you've never used a Mac before, you'll probably be a bit baffled at first. But you're probably at least coming from a desktop computing background (Windows), so you have a common frame of reference. What if you're just now learning to use a relational database, though? Yes, you've played with Access a bit, but now your employer or college instructor has charged you with becoming proficient with Oracle Database. Here are 10 reasons why I think Oracle SQL Developer is the perfect vehicle to help get you started.

    1. It's free. No need to break into one of these… No start-up costs, no need to wrangle budget dollars from your company. Students don't have any money after books and lab fees anyway, and most employees don't like having to ask for 'special' software. So avoid all of that and make sure the free stuff doesn't suit your needs first. Upgrades are available on a regular basis, also at no cost, and support is freely available via our public forums.

    2. It will run pretty much anywhere. Windows – check. OS X (Apple) – check. Unix – check. Linux – check. No need to start up a Windows VM to run your Windows-only software on your lab machine.

    3. Anyone can install it. There's no installer, no registry to be updated, no admin privileges to be obtained. If you can download and extract files to your machine or USB storage device, you can run it. You can be up and running with SQL Developer in under 5 minutes. Here's a video tutorial to see how to get started.

    4. It's ubiquitous. I admit it, I learned a new word yesterday and I wanted an excuse to use it. SQL Developer is everywhere. It has had over 2,500,000 downloads in the past year, and is one of the most downloaded items from OTN. This means if you need help, there's someone sitting nearby who can assist, and since they're in the same tool as you, they'll be speaking the same language.

    5. Simple user interface. Up-up-down-down-left-right-left-right-A-B-A-B-START will get you 30 lives, but you already knew that, right? You connect, you see your objects, you click on your objects. Or you can use the worksheet to write your queries and programs in. There's only one toolbar, and just a few buttons. If you're like me, video games became less fun when each button had 6 actions mapped to it; I just want the good ole 'A', 'B', 'SELECT', and 'START' controls. If you're new to Oracle, you shouldn't have the double workload of learning a new, complicated tool as well.

    6. It's not a 'black box'. Click through your objects, but also get the SQL that drives the GUI. As you use the wizards to accomplish tasks for you, you can view the SQL statement being generated on your behalf. Just because you have a GUI doesn't mean you're ceding your responsibility to learn the underlying code that makes the database work.

    7. It's four tools in one. It's not just a query tool. Maybe you need to design a data model first? Or maybe you need to migrate your Sybase ASE database to Oracle for a new project? Or maybe you need to create some reports? SQL Developer does all of that. So once you get comfortable with one part of the tool, the others will be much easier to pick up as your needs change.

    8. Great learning resources available. Videos, blogs, hands-on learning labs – you name it, we've got it. Why wait for someone to train you when you can train yourself at your own pace?

    9. You can use it to teach yourself SQL. Instead of being faced with the white-screen-of-panic, you can visually build your queries by dragging and dropping tables and views into the Query Builder. Yes, 'just like Access' – only better. And as you build your query, toggle to the Worksheet panel and see the SQL statement. Again, SQL Developer is not a black box. If you prefer to learn by trial and error, the worksheet will attempt to suggest the next bit of your SQL statement with its completion insight feature. And if you have syntax errors, those will be highlighted – just like misspelled words in your favorite word processor.

    10. It scales to match your experience level. You won't be a n00b forever. In 6-8 months, when you're ready to tackle something a bit more complicated, like XML DB or Oracle Spatial, the tool is already there waiting on you. No need to go out and find the 'advanced' tool.

    11. Wait, you said this was a 'Top 10' list? Yes. Yes, I did. I'm using this 'trick' to get you to continue reading because I'm going to say something you might not want to hear. Are you ready? Tools won't replace experience, failure, hard work, and training. Just because you have the keys to the car doesn't mean you're ready to head out on the race track. While SQL Developer reduces the barriers to entry, it does not completely remove them. Many experienced folks simply do not like tools. Rather, they don't like the people who pick up tools without the know-how to properly use them. If you don't understand what 'TRUNCATE' means, don't try it out. Try picking up a book first. Of course, it's very nice to have your own sandbox to play in, so you don't upset the other children. That's why I really like our Dev Days Database Virtual Box image. It's your own database to learn and experiment with.

    Read the article

  • Using linked servers, OPENROWSET and OPENQUERY

    - by BuckWoody
    SQL Server has a few mechanisms to reach out to another server (even another server type) and query data from within a Transact-SQL statement. Among them are a set of stored credentials and connection information (called a Linked Server), a statement that uses a linked server called OPENQUERY, another called OPENROWSET, and one called OPENDATASOURCE. This post isn't about those particular functions or statements – hit the links for more if you're new to those topics. I'm actually more concerned about where I see these used than about the particular method.

    In many cases, a linked server isn't another Relational Database Management System (RDBMS) like Oracle or DB2 (which is possible with a linked server), but another SQL Server. My concern is that linked servers are the new Data Transformation Services (DTS) from SQL Server 2000 – something that was designed for one purpose but which is being morphed into something much more. In the case of DTS, most of us turned that feature into a full-fledged job system. What was designed as a simple data import and export system was pressed into service doing logic, routing and timing. And of course we all know how painful it was to move off of a complex DTS system onto SQL Server Integration Services.

    In the case of linked servers, what should be used as a method of running a simple query or two on another server, where you have an occasional connection or need a quick import of a small data set, is morphing into a full federation strategy. In some cases I've seen a complex web of linked servers, and when credentials, names or anything else change, there are huge problems. Now don't get me wrong – linked servers and other forms of distributed queries are a fantastic set of tools that we have to move data around. I'm just saying that when you start having lots of workarounds and when things get really complicated, you might want to step back a little and ask if there's a better way. Are you able to tolerate some latency? Perhaps you're able to use Service Broker. Would you like to be platform-independent on the data source? Perhaps a middle tier might make more sense, abstracting the queries there and sending them to the proper server. Designed properly, I've seen these systems scale further and be more resilient than loading up on linked servers.
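    For readers new to the statements mentioned above, here is a minimal sketch of defining a linked server and querying it with OPENQUERY; the server, database, and table names are placeholders, not taken from the post.

    ```sql
    -- Register a linked server pointing at another SQL Server instance.
    EXEC sp_addlinkedserver
         @server = N'REMOTESRV',
         @srvproduct = N'',
         @provider = N'SQLNCLI',
         @datasrc = N'remotehost\SQL2008';

    -- Pass a query through to the remote server; note the doubled quotes
    -- needed for string literals inside the pass-through text.
    SELECT *
    FROM OPENQUERY(REMOTESRV,
         'SELECT CustomerID, OrderDate
            FROM Sales.dbo.Orders
           WHERE OrderDate >= ''2010-01-01''');
    ```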

    Read the article

  • Parallelism in .NET – Part 6, Declarative Data Parallelism

    - by Reed
    When working with a problem that can be decomposed by data, we have a collection, and some operation being performed upon the collection. I've demonstrated how this can be parallelized using the Task Parallel Library and imperative programming using imperative data parallelism via the Parallel class. While this provides a huge step forward in terms of power and capabilities, in many cases special care must still be given for relatively common scenarios.

    C# 3.0 and Visual Basic 9.0 introduced a new, declarative programming model to .NET via the LINQ Project. When working with collections, we can now write software that describes what we want to occur without having to explicitly state how the program should accomplish the task. By taking advantage of LINQ, many operations become much shorter, more elegant, and easier to understand and maintain. Version 4.0 of the .NET Framework extends this concept into the parallel computation space by introducing Parallel LINQ.

    Before we delve into PLINQ, let's begin with a short discussion of LINQ. LINQ, the extensions to the .NET Framework which implement language-integrated query, set, and transform operations, is implemented in many flavors. For our purposes, we are interested in LINQ to Objects. When parallelizing a routine, we typically are dealing with in-memory data storage. More data-access oriented LINQ variants, such as LINQ to SQL and LINQ to Entities in the Entity Framework, fall outside of our concern, since the parallelism there is the concern of the database engine processing the query itself.

    LINQ (LINQ to Objects in particular) works by implementing a series of extension methods, most of which work on IEnumerable<T>. The language enhancements use these extension methods to create a very concise, readable alternative to the traditional foreach statement. For example, let's revisit our minimum aggregation routine we wrote in Part 4:

    double min = double.MaxValue;
    foreach(var item in collection)
    {
        double value = item.PerformComputation();
        min = System.Math.Min(min, value);
    }

    Here, we're doing a very simple computation, but writing this in an imperative style. This can be loosely translated to English as:
    - Create a very large number, and save it in min
    - Loop through each item in the collection. For every item: perform some computation, and save the result; if the computation is less than min, set min to the computation
    Although this is fairly easy to follow, it's quite a few lines of code, and it requires us to read through the code, step by step, line by line, in order to understand the intention of the developer. We can rework this same statement using LINQ:

    double min = collection.Min(item => item.PerformComputation());

    Here, we're after the same information. However, this is written using a declarative programming style. When we see this code, we'd naturally translate this to English as:
    - Save the Min value of collection, determined via calling item.PerformComputation()
    That's it – instead of multiple logical steps, we have one single, declarative request. This makes the developer's intentions very clear, and very easy to follow. The system is free to implement this using whatever method required.

    Parallel LINQ (PLINQ) extends LINQ to Objects to support parallel operations. This is a perfect fit in many cases when you have a problem that can be decomposed by data. To show this, let's again refer to our minimum aggregation routine from Part 4, but this time, let's review our final, parallelized version:

    // Safe, and fast!
    double min = double.MaxValue;
    // Make a "lock" object
    object syncObject = new object();
    Parallel.ForEach(
        collection,
        // First, we provide a local state initialization delegate.
        () => double.MaxValue,
        // Next, we supply the body, which takes the original item, loop state,
        // and local state, and returns a new local state
        (item, loopState, localState) =>
        {
            double value = item.PerformComputation();
            return System.Math.Min(localState, value);
        },
        // Finally, we provide an Action<TLocal>, to "merge" results together
        localState =>
        {
            // This requires locking, but it's only once per used thread
            lock(syncObject)
                min = System.Math.Min(min, localState);
        }
    );

    Here, we're doing the same computation as above, but fully parallelized. Describing this in English becomes quite a feat:
    - Create a very large number, and save it in min
    - Create a temporary object we can use for locking
    - Call Parallel.ForEach, specifying three delegates
    - For the first delegate: initialize a local variable to hold the local state to a very large number
    - For the second delegate: for each item in the collection, perform some computation, save the result; if the result is less than our local state, save the result in local state
    - For the final delegate: take a lock on our temporary object to protect our min variable, and save the min of our min and local state variables
    Although this solves our problem, and does it in a very efficient way, we've created a set of code that is quite a bit more difficult to understand and maintain.

    PLINQ provides us with a very nice alternative. In order to use PLINQ, we need to learn one new extension method that works on IEnumerable<T> – ParallelEnumerable.AsParallel(). That's all we need to learn in order to use PLINQ: one single method. We can write our minimum aggregation in PLINQ very simply:

    double min = collection.AsParallel().Min(item => item.PerformComputation());

    By simply adding ".AsParallel()" to our LINQ to Objects query, we converted this to using PLINQ and running this computation in parallel! This can be loosely translated into English easily, as well:
    - Process the collection in parallel
    - Get the Minimum value, determined by calling PerformComputation on each item
    Here, our intention is very clear and easy to understand. We just want to perform the same operation we did in serial, but run it "as parallel". PLINQ completely extends LINQ to Objects: the entire functionality of LINQ to Objects is available. By simply adding a call to AsParallel(), we can specify that a collection should be processed in parallel. This is simple, safe, and incredibly useful.

    Read the article

  • HTTP Push from SQL Server — Comet SQL

    This article provides an example solution for presenting data in "real time" from Microsoft SQL Server in an HTML browser. It shows how to implement Comet functionality in ASP.NET and how to connect Comet with Query Notifications from SQL Server.

    Read the article

  • JavaScript filter PHP results

    - by Nick Maddren
    Hey guys, for a while now I have been trying to come up with a method that can filter PHP results for listing items using JS. Look at these examples: http://www.autotrader.co.uk/search/used/cars/ and http://www.vcars.co.uk/used-cars/. You will notice that the actual filter uses JavaScript, but my question is: how does it query the database to bring back the results? It obviously uses PHP, but how does the JS control what the PHP pulls from the DB? Thanks

    Read the article
