Search Results

Search found 21224 results on 849 pages for 'dynamic array'.

Page 655/849 | < Previous Page | 651 652 653 654 655 656 657 658 659 660 661 662  | Next Page >

  • Posting XML via curl (command-line) without using key/value pairs

    - by Mathias Bynens
    Consider a PHP script that outputs all POST data, as follows: <?php var_dump($_POST); ?> The script is located at http://example.com/foo.php. Now, I want to post XML data to it (without using key/value pairs) using command line curl. I have tried many variations of the following: curl -H "Content-type: text/xml; charset=utf-8" --data-urlencode "<foo><bar>bazinga</foo></bar>" http://example.com/foo.php Yet none of them seem to actually post anything — according to the PHP script, $_POST is just an empty array. What am I doing wrong here?
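    For what it's worth, a sketch of the usual workaround: PHP only fills $_POST for form-encoded or multipart bodies, so raw XML is normally sent with --data-binary and read from php://input on the server side (foo.php is the script name from the question).

        # send the XML as the raw request body, not as key/value pairs
        curl -H "Content-Type: text/xml; charset=utf-8" \
             --data-binary "<foo><bar>bazinga</bar></foo>" \
             http://example.com/foo.php

        <?php
        // read the raw POST body instead of $_POST
        $xml = file_get_contents('php://input');
        var_dump($xml);
        ?>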

    Read the article

  • Deserialization of JSON object by using DataContractJsonSerializer in C#

    - by user2539667
    I'm sure this question has been asked over and over again, but for some reason, I still can't manage to get this to work. I want to deserialize a JSON array whose single element is an object with one string member: [{"idTercero":"cod_Tercero"}] This is the class that I'm trying to deserialize into: [DataContract] public class rptaOk { [DataMember] public string idTercero { get; set; } public rptaOk() { } public rptaOk(string idTercero) { this.idTercero = idTercero; } } This is the method that I use to deserialize: public T Deserialise<T>(string json) { DataContractJsonSerializer deserializer = new DataContractJsonSerializer(typeof(T)); using (MemoryStream stream = new MemoryStream(Encoding.Unicode.GetBytes(json))) { T result = (T)deserializer.ReadObject(stream); return result; } } And then I try to fill the object: rptaOk deserializedRpta = deserializarOk(rpta); But for some reason, this returns "" MessageBox.Show(deserializedRpta.idTercero);
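    One thing worth noting: the sample JSON is an array ([...]), so a sketch that reuses the question's Deserialise<T> helper but targets rptaOk[] would look like this (variable names are illustrative only).

        // the JSON is an array of objects, so deserialize into an array of rptaOk
        string json = "[{\"idTercero\":\"cod_Tercero\"}]";
        rptaOk[] result = Deserialise<rptaOk[]>(json);
        if (result.Length > 0)
        {
            MessageBox.Show(result[0].idTercero);   // "cod_Tercero"
        }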

    Read the article

  • Create a Generic IEnumerable<T> given a IEnumerable and the member datatypes

    - by ilias
    Hi, I get an IEnumerable which I know is an object array. I also know the datatype of the elements. Now I need to cast this to an IEnumerable<T>, where T is a supplied type. For instance IEnumerable results = GetUsers(); IEnumerable<T> users = ConvertToTypedIEnumerable(results, typeof(User)); I now want to cast/convert this to IEnumerable<User>. Also, I want to be able to do this for any type. I cannot use IEnumerable.Cast<T>(), because for that I have to know the type to cast to at compile time, which I don't have. I get the type and the IEnumerable at runtime. - Thanks
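    A minimal sketch of one way to do this when the element type is only known at runtime: close Enumerable.Cast<T> over the runtime type via reflection, mirroring the ConvertToTypedIEnumerable call from the question. Note the caller still receives the result typed as object (or non-generic IEnumerable), since a compile-time IEnumerable<User> would require knowing User at compile time.

        using System;
        using System.Collections;
        using System.Linq;
        using System.Reflection;

        public static class EnumerableConverter
        {
            // Returns an IEnumerable<T> (boxed as object) where T is elementType.
            public static object ConvertToTypedIEnumerable(IEnumerable source, Type elementType)
            {
                MethodInfo castMethod = typeof(Enumerable)
                    .GetMethod("Cast")
                    .MakeGenericMethod(elementType);
                return castMethod.Invoke(null, new object[] { source });
            }
        }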

    Read the article

  • C# - Can FileHelper FieldConverter routines refer to other fields in the record?

    - by Pete
    I am using the excellent FileHelpers library to process a fixed-length airline schedule file. I have a date field, then a few fields later on in the record, a time field. I want to combine both of these in the FileHelpers record class, and know there is a custom FieldConverter attribute. With this attribute, you provide a custom class to handle your field data and implement StringToField and FieldToString. My question is: can I pass other fields (already read) to this custom FieldConverter too, so I can combine the date and time together? FieldConverter has an overload that allows you to refer to both a custom processing class AND 'other strings' or even an array of objects. But, given this is done in the attribute definition, I am struggling to access this earlier-field reference. [FieldFixedLength(4)] [FieldConverter(typeof(MyTimeConverter),"eg. ScheduledDepartureDate")] public DateTime scheduledDepartureTime;

    Read the article

  • Graphing new users by date in a Rails app using Seer

    - by Danger Angell
    I'd like to implement a rolling graph showing new users by day over the last 7 days using Seer. I've got Seer installed: http://www.idolhands.com/ruby-on-rails/gems-plugins-and-engines/graphing-for-ruby-on-rails-with-seer I'm struggling to get my brain around how to implement it. I've got an array of the Users I want to plot: @users = User.all(:conditions => {:created_at => 7.days.ago..Time.zone.now}) Can't see the right way to implement the :data_method to roll them up by created_at date. Anyone done this or similar with Seer? Anyone smarter than me able to explain this after looking at the Seer sample page (linked above)?
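    Seer specifics aside, the roll-up itself is usually just a group-by on the created_at date; a rough sketch of the data preparation (Seer's :data_method wiring is not shown and method names are illustrative):

        # count new users per day over the last 7 days
        users  = User.all(:conditions => { :created_at => 7.days.ago..Time.zone.now })
        counts = users.group_by { |u| u.created_at.to_date }

        data = (6.days.ago.to_date..Date.today).map do |day|
          [day, (counts[day] || []).size]
        end
        # => an array of [date, count] pairs that a :data_method could return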

    Read the article

  • Python bindings for C++ code using OpenCV giving segmentation fault

    - by lightalchemist
    I'm trying to write a python wrapper for some C++ code that make use of OpenCV but I'm having difficulties returning the result, which is a OpenCV C++ Mat object, to the python interpreter. I've looked at OpenCV's source and found the file cv2.cpp which has conversions functions to perform conversions to and fro between PyObject* and OpenCV's Mat. I made use of those conversions functions but got a segmentation fault when I tried to use them. I basically need some suggestions/sample code/online references on how to interface python and C++ code that make use of OpenCV, specifically with the ability to return OpenCV's C++ Mat to the python interpreter or perhaps suggestions on how/where to start investigating the cause of the segmentation fault. Currently I'm using Boost Python to wrap the code. Thanks in advance to any replies. The relevant code: // This is the function that is giving the segmentation fault. PyObject* ABC::doSomething(PyObject* image) { Mat m; pyopencv_to(image, m); // This line gives segmentation fault. // Some code to create cppObj from CPP library that uses OpenCV cv::Mat processedImage = cppObj->align(m); return pyopencv_from(processedImage); } The conversion functions taken from OpenCV's source follows. The conversion code gives segmentation fault at the commented line with "if (!PyArray_Check(o)) ...". static int pyopencv_to(const PyObject* o, Mat& m, const char* name = "<unknown>", bool allowND=true) { if(!o || o == Py_None) { if( !m.data ) m.allocator = &g_numpyAllocator; return true; } if( !PyArray_Check(o) ) // Segmentation fault inside PyArray_Check(o) { failmsg("%s is not a numpy array", name); return false; } int typenum = PyArray_TYPE(o); int type = typenum == NPY_UBYTE ? CV_8U : typenum == NPY_BYTE ? CV_8S : typenum == NPY_USHORT ? CV_16U : typenum == NPY_SHORT ? CV_16S : typenum == NPY_INT || typenum == NPY_LONG ? CV_32S : typenum == NPY_FLOAT ? CV_32F : typenum == NPY_DOUBLE ? 
CV_64F : -1; if( type < 0 ) { failmsg("%s data type = %d is not supported", name, typenum); return false; } int ndims = PyArray_NDIM(o); if(ndims >= CV_MAX_DIM) { failmsg("%s dimensionality (=%d) is too high", name, ndims); return false; } int size[CV_MAX_DIM+1]; size_t step[CV_MAX_DIM+1], elemsize = CV_ELEM_SIZE1(type); const npy_intp* _sizes = PyArray_DIMS(o); const npy_intp* _strides = PyArray_STRIDES(o); bool transposed = false; for(int i = 0; i < ndims; i++) { size[i] = (int)_sizes[i]; step[i] = (size_t)_strides[i]; } if( ndims == 0 || step[ndims-1] > elemsize ) { size[ndims] = 1; step[ndims] = elemsize; ndims++; } if( ndims >= 2 && step[0] < step[1] ) { std::swap(size[0], size[1]); std::swap(step[0], step[1]); transposed = true; } if( ndims == 3 && size[2] <= CV_CN_MAX && step[1] == elemsize*size[2] ) { ndims--; type |= CV_MAKETYPE(0, size[2]); } if( ndims > 2 && !allowND ) { failmsg("%s has more than 2 dimensions", name); return false; } m = Mat(ndims, size, type, PyArray_DATA(o), step); if( m.data ) { m.refcount = refcountFromPyObject(o); m.addref(); // protect the original numpy array from deallocation // (since Mat destructor will decrement the reference counter) }; m.allocator = &g_numpyAllocator; if( transposed ) { Mat tmp; tmp.allocator = &g_numpyAllocator; transpose(m, tmp); m = tmp; } return true; } static PyObject* pyopencv_from(const Mat& m) { if( !m.data ) Py_RETURN_NONE; Mat temp, *p = (Mat*)&m; if(!p->refcount || p->allocator != &g_numpyAllocator) { temp.allocator = &g_numpyAllocator; m.copyTo(temp); p = &temp; } p->addref(); return pyObjectFromRefcount(p->refcount); } My python test program: import pysomemodule # My python wrapped library. import cv2 def main(): myobj = pysomemodule.ABC("faces.train") # Create python object. This works. image = cv2.imread('61.jpg') processedImage = myobj.doSomething(image) cv2.imshow("test", processedImage) cv2.waitKey() if __name__ == "__main__": main()
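    For what it's worth, a very common cause of a crash inside PyArray_Check() is that the numpy C API was never initialised in the extension module: every module that calls PyArray_* functions must run import_array() once, otherwise those entry points are null pointers. A minimal sketch of what the Boost.Python module definition might look like (pysomemodule and ABC are taken from the question; the constructor is assumed to take a std::string):

        #include <boost/python.hpp>
        #include <numpy/arrayobject.h>
        // plus whatever header declares ABC

        namespace {
            // _import_array() is the function behind the import_array() macro;
            // without this call every PyArray_* entry point is uninitialised,
            // which typically shows up as a segmentation fault on first use.
            bool init_numpy()
            {
                return _import_array() >= 0;
            }
        }

        BOOST_PYTHON_MODULE(pysomemodule)
        {
            init_numpy();
            boost::python::class_<ABC>("ABC", boost::python::init<std::string>())
                .def("doSomething", &ABC::doSomething);
        }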

    Read the article

  • Getting Started with Prism (aka Composite Application Guidance for WPF and Silverlight)

    - by dotneteer
    Overview Prism is a framework from the Microsoft Patterns and Practices team that allows you to create WPF and Silverlight applications in a modular way. It is especially valuable for larger projects in which a large number of developers can develop in parallel. Prism achieves its goal by supplying several services: · Dependency Injection (DI) and Inversion of Control (IoC): By using DI, Prism moves the responsibility of instantiating and managing the lifetime of dependency objects from individual components to a container. Prism relies on containers to discover, manage and compose large numbers of objects. By varying the configuration, the container can also inject mock objects for unit testing. Out of the box, Prism supports Unity and MEF as containers, although it is possible to use other containers by subclassing the Bootstrapper class. · Modularity and Regions: Prism supplies the framework to split an application into modules that are loaded by the application shell. Each module is a library project that contains both UI and code and is responsible for initializing itself when loaded by the shell. Each window can be further divided into regions. A region is a user control with an associated model. · Model, view and view-model (MVVM) pattern: Prism promotes the use of MVVM. The use of a DI container makes it much easier to inject the model into the view. WPF already has excellent data binding and commanding mechanisms. To be productive with Prism, it is important to understand WPF data binding and commanding well. · Event aggregation: Prism promotes loosely coupled components. Prism discourages components in different modules from communicating with each other directly, which would lead to dependencies. Instead, Prism supplies an event-aggregation mechanism that allows components to publish and subscribe to events without knowing about each other. Architecture In the following, I will go into a little more detail on the services provided by Prism. Bootstrapper In a typical WPF application, application start-up is controlled by App.xaml and its code-behind. The main window of the application is typically specified in the App.xaml file. In a Prism application, we start a bootstrapper in the App class and delegate the duty of creating the main window to the bootstrapper. The bootstrapper will start a dependency-injection container so that all future object instantiations are managed by the container. Out of the box, Prism provides the UnityBootstrapper and MefBootstrapper abstract classes. An application needs to either provide a concrete implementation of one of these bootstrappers or, alternatively, subclass the Bootstrapper class with another DI container. A concrete bootstrapper class must implement the CreateShell method. Its responsibility is to resolve and create the Shell object through the DI container to serve as the main window for the application. The other important method to override is ConfigureModuleCatalog. The bootstrapper can register modules for the application. In a more advanced scenario, an application does not have to know all its modules at compile time; modules can be discovered at run time. Readers can refer to one of the Open Modularity QuickStarts for more information. Modules Once modules are registered with or discovered by Prism, they are instantiated by the DI container and their Initialize method is called. The DI container can inject into a module a region registry that implements the IRegionViewRegistry interface. The module, in its Initialize method, can then call the RegisterViewWithRegion method of the registry to register its regions.
Regions Regions, once registered, are managed by the RegionManager. The shell can then load regions either through the RegionManager.RegionName attached property or dynamically through code. When a view is created by the region manager, the DI container can inject the view model and other services into the view. The view then has a reference to the view model through which it can interact with backend services. Service locator Although it is possible to inject services into dependent classes through a DI container, an alternative is to use the ServiceLocator to retrieve a service on demand. Prism supplies a service locator implementation, and it is possible to get an instance of a service by calling: ServiceLocator.Current.GetInstance<IServiceType>() Event aggregator Prism supplies an IEventAggregator interface and implementation that can be injected into any classes that need to communicate with each other in a loosely coupled fashion. The event aggregator uses a publisher/subscriber model. A class can publish an event by calling eventAggregator.GetEvent<EventType>().Publish(parameter) to raise the event. Other classes can subscribe to the event by calling eventAggregator.GetEvent<EventType>().Subscribe(EventHandler, other options). Getting started The easiest way to get started with Prism is to go through the Prism Hands-On Labs and look at the Hello World QuickStart. The Hello World QuickStart shows how the bootstrapper, modules and regions work. Next, I would recommend looking at the Stock Trader Reference Implementation. It is a more in-depth example that resembles how we would want to set up an application. Several other QuickStarts cover individual Prism services. Some scenarios, such as dynamic module discovery, are more advanced. Apart from the official Prism documentation, you can get an overview by reading Glenn Block's MSDN Magazine article. I have found that the best free training material is from the Boise Code Camp. To be effective with Prism, it is important to understand key concepts of WPF well first, such as the DependencyProperty system, data binding, resources, themes and ICommand. It is also important to know your DI container of choice well. I will try to explore these subjects in depth in the future. Testimony Recently, I worked on a desktop WPF application using Prism, and I had a wonderful experience with it. Prism is flexible enough even in the presence of third-party controls such as the Telerik WPF controls. We never encountered any significant obstacles.
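    To make the bootstrapper discussion above a little more concrete, here is a minimal sketch of a Unity-based bootstrapper in the Prism 4-era API; Shell and HelloWorldModule are placeholder names for your own shell window and module.

        public class MyBootstrapper : UnityBootstrapper
        {
            protected override DependencyObject CreateShell()
            {
                // resolve the shell through the container so its dependencies are injected
                return Container.Resolve<Shell>();
            }

            protected override void InitializeShell()
            {
                base.InitializeShell();
                Application.Current.MainWindow = (Window)Shell;
                Application.Current.MainWindow.Show();
            }

            protected override void ConfigureModuleCatalog()
            {
                var catalog = (ModuleCatalog)ModuleCatalog;
                catalog.AddModule(typeof(HelloWorldModule));
            }
        }

        // In App.xaml.cs:
        // protected override void OnStartup(StartupEventArgs e)
        // {
        //     base.OnStartup(e);
        //     new MyBootstrapper().Run();
        // }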

    Read the article

  • Silverlight 4 - authentication / authorization against custom WCF service

    - by Calanus
    I have a WCF service in front of an AzMan store that passes roles and operations to clients using the following interface: [OperationContract] bool AuthenticateUser(string password, string appName); [OperationContract] string[] GetRoles(string storelocation, string appName); [OperationContract] string[] GetOperations(string storeLocation, string appName, string selectedRole); Clients connect to this service using Windows authentication (but users must send their password through to reaffirm their identity). Ultimately the service delivers an array of operations that each client can perform based on their selected role. I've opened a new Silverlight Business Application and tried to understand how authentication/authorization works in this template, and scoured the web for examples of how to hook my web service to the login box already created in the template, but I am completely at a loss as to how to do this! Can anyone offer any advice?

    Read the article

  • Call named routes in CakePHP as the same way in Ruby on Rails

    - by Lucas Renan
    How can I call a route (in the view) in CakePHP as the same way in Rails? Ruby on Rails routes.rb map.my_route '/my-route', :controller => 'my_controller', :action => 'index' view link_to 'My Route Name', my_route_path CakePHP routes.php Router::connect('/my-route', array('controller' => 'my_controller', 'action' => 'index')); view $html->link('My Route Name', '/my-route'); But I think the Rails way is better, because I can make changes in the "url" and I don't need changes the code of the views.

    Read the article

  • Create a permalink with Javascript

    - by Jon Romero
    I have a textbox where a user puts a string like this: "hello world! I think that __i__ am awesome (yes I am!)" I need to create a correct URL like this: hello-world-i-think-that-i-am-awesome-yes-i-am How can this be done using regular expressions? Also, is it possible to do it with Greek (for example)? "Γειά σου κόσμε" turns into geia-sou-kosme In other programming languages (Python/Ruby) I am using a translation array. Should I do the same here?
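    A rough sketch of a slug function; for non-Latin scripts such as Greek there is no regex-only answer, so a transliteration map (the same idea as the translation array mentioned for Python/Ruby) is applied first. The map below is only a tiny illustrative excerpt, not a full Greek table.

        var GREEK_MAP = { 'α': 'a', 'ά': 'a', 'β': 'b', 'γ': 'g', 'ε': 'e', 'ι': 'i',
                          'κ': 'k', 'μ': 'm', 'ο': 'o', 'ό': 'o', 'σ': 's', 'ς': 's', 'υ': 'u' };

        function slugify(text) {
            return text
                .toLowerCase()
                .replace(/[^\x00-\x7f]/g, function (ch) {   // transliterate non-ASCII
                    return GREEK_MAP[ch] || '';
                })
                .replace(/[^a-z0-9]+/g, '-')                // everything else becomes '-'
                .replace(/^-+|-+$/g, '');                   // trim leading/trailing '-'
        }

        // slugify("hello world! I think that __i__ am awesome (yes I am!)")
        // => "hello-world-i-think-that-i-am-awesome-yes-i-am"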

    Read the article

  • JNA-Mapping Delphi Function

    - by Florian
    Hi! How do I map this function with JNA? Delphi code: function getData(InData1: PChar; InData2: PChar; Data: TArray16; var OutData1: PChar; var OutData2: PChar): integer; stdcall; with: TArray16 = array[0..15] of char; The int value that is returned can be 0 for an error or 1 for successful execution. My suggestion is: Java code: int getData(String inData1, String inData2, byte[] data, byte[] outData1, byte[] outData2); The problem is that the function of the DLL returns 0. I also tried other data types, but it hasn't worked yet. I think the problem is that the DLL function can't write to the parameters outData1 and outData2. Who can help me? Thanks!

    Read the article

  • UIToolbar UIBarButtonItem Alignment question

    - by Luther Baker
    I need to create a UIToolbar that has two UIBarButtonItems. The 1st button must be centered and the 2nd item must be right aligned. I understand and use Flexible spacing and it works great when I need to balance buttons across the UIToolbar, but with only two buttons, I can't seem to perfectly center the middle button. I've even initialized the view.toolbarItems array with NSArray *items = [[NSArray alloc] initWithObjects:fixed, flex, center_button, flex, right_button, nil]; and set fixed.width = right_button.width ... but still, the center_button is never perfectly centered.

    Read the article

  • Datamining on a mysql database

    - by sliptix
    Hello, I am beginning with text mining. I have two database tables with thousands of rows: a table for "skills" and a table for "skill categories"; every "skill" belongs to a skill category. A "skill" is, physically, a varchar(200) field in the database containing some text describing the skill. Here are some skills extracted from the skills table: "PHP (good level), Java (intermediaite), C++" "PHP5" "project management and quality management" "begining Javascript" "water engineering" "dfsdf zerze rzer" "cibling customers" What I want to do is extract knowledge from those fields, I mean extract only the real skill and ignore the rest of the useless text. For the above example I want to get only an array with: "PHP" "Java" "C++" "PHP5" "project management" "quality management" "Javascript" "water engineering" "cibling customers" What should I do to extract the skills from tons of data, please? Do you know specific algorithms for this, e.g. k-means? Thanks in advance.

    Read the article

  • How Do You Insert Large Blobs Into Oracle 10G Using System.Data.OracleClient?

    - by discwiz
    Trying to insert 315K GIF files into an Oracle 10g database. Every time I run the stored procedure I get this error: "ora-01460: unimplemented or unreasonable conversion requested". It appears that there is a 32K limit if I use a stored procedure. I read online that this does not apply if you are doing a direct insert, but I do not know how to create the insert statement for a byte array. This is a thick client running on the server, so I'm not worried about SQL injection attacks. Any help would be greatly appreciated. FYI, the code is in VB.NET. Thanks, Dave
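    A sketch of the direct-insert route with a bound parameter rather than a literal SQL string, which is one commonly suggested way around the 32K limit mentioned above (System.Data.OracleClient; the IMAGES table and column names are made up):

        Imports System.Data.OracleClient

        ' imageBytes holds the GIF read from disk; IMAGES/IMAGE_DATA are made-up names
        Public Sub InsertGif(ByVal connectionString As String, ByVal id As Integer, ByVal imageBytes As Byte())
            Using conn As New OracleConnection(connectionString)
                Using cmd As OracleCommand = conn.CreateCommand()
                    cmd.CommandText = "INSERT INTO IMAGES (IMAGE_ID, IMAGE_DATA) VALUES (:id, :data)"
                    cmd.Parameters.Add("id", OracleType.Number).Value = id
                    ' binding the byte array as a Blob parameter avoids building a literal
                    cmd.Parameters.Add("data", OracleType.Blob).Value = imageBytes
                    conn.Open()
                    cmd.ExecuteNonQuery()
                End Using
            End Using
        End Sub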

    Read the article

  • Json.NET - How to serialize a class using custom resolver

    - by Mendy
    I want to serialize this class: public class CarDisplay { public string Name { get; set; } public string Brand { get; set; } public string Year { get; set; } public PictureDisplay[] Pictures { get; set; } } public class PictureDisplay { public int Id { get; set; } public string SecretKey { get; set; } public string AltText { get; set; } } To this JSON text: { Name: "Name value", Brand: "Brand value", Year: "Year value", Pictures: ["url1", "url2", "url3"] } Note that each Car has a Pictures array containing only URL strings, instead of all the properties that the Picture class has. I know that Json.NET has the notion of a custom resolver, but I'm not sure exactly how to use it.
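    One way to get that shape is a custom JsonConverter applied to the Pictures property rather than a contract resolver; a sketch follows, where BuildUrl is a hypothetical helper since PictureDisplay itself does not carry a URL.

        using System;
        using Newtonsoft.Json;

        public class PictureUrlArrayConverter : JsonConverter
        {
            public override bool CanConvert(Type objectType)
            {
                return objectType == typeof(PictureDisplay[]);
            }

            public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
            {
                writer.WriteStartArray();
                foreach (PictureDisplay picture in (PictureDisplay[])value)
                {
                    // BuildUrl is a placeholder for however the URL is actually derived
                    writer.WriteValue(BuildUrl(picture.Id, picture.SecretKey));
                }
                writer.WriteEndArray();
            }

            public override bool CanRead { get { return false; } }

            public override object ReadJson(JsonReader reader, Type objectType,
                                            object existingValue, JsonSerializer serializer)
            {
                throw new NotImplementedException();
            }

            private static string BuildUrl(int id, string secretKey)
            {
                return string.Format("/pictures/{0}/{1}.jpg", id, secretKey);
            }
        }

        // usage: decorate the property on CarDisplay
        // [JsonConverter(typeof(PictureUrlArrayConverter))]
        // public PictureDisplay[] Pictures { get; set; }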

    Read the article

  • Creating line graph/chart in vb.net (VS2008)

    - by typoknig
    I am reluctant to ask this question because a lot of similar questions have been asked, but after reading through them I am not getting the info I need. I am trying to follow this tutorial and I think it is going to work ok, but I have a lot of data to put in and the tutorial has the reader create the chart data points manually. I want the data points to be generated from an integer which can change while the program is running (thus the chart size needs to change) and the y coordinate of the data points needs to come from an array. I have attempted to "bind" the data but I am messing it up somehow and I don't even think that is the best way to do what I want. Also, I do not have to use the methods suggested in the tutorial, I am looking for the highest quality most efficient way to generate a line graph in vb.net (VS2008) based on the criteria I previously mentioned.
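    For what it's worth, the Microsoft Chart Controls (a separate download for .NET 3.5, usable in VS2008) let you bind a series straight to an array, so the points do not have to be created by hand and the series simply rebinds when the array changes; a rough sketch, where Chart1 is assumed to be a Chart control dropped on the form:

        Imports System.Windows.Forms.DataVisualization.Charting

        ' yValues comes from wherever your data lives; its length can change at runtime
        Private Sub PlotLine(ByVal yValues() As Double)
            Chart1.Series.Clear()
            Dim s As New Series("Data")
            s.ChartType = SeriesChartType.Line
            s.Points.DataBindY(yValues)   ' rebind each time the array changes
            Chart1.Series.Add(s)
        End Sub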

    Read the article

  • More Animation - Self Dismissing Dialogs

    - by Duncan Mills
    In my earlier articles on animation, I discussed various slide, grow and  flip transitions for items and containers.  In this article I want to discuss a fade animation and specifically the use of fades and auto-dismissal for informational dialogs.  If you use a Mac, you may be familiar with Growl as a notification system, and the nice way that messages that are informational just fade out after a few seconds. So in this blog entry I wanted to discuss how we could make an ADF popup behave in the same way. This can be an effective way of communicating information to the user without "getting in the way" with modal alerts. This of course, has been done before, but everything I've seen previously requires something like JQuery to be in the mix when we don't really need it to be.  The solution I've put together is nice and generic and will work with either <af:panelWindow> or <af:dialog> as a the child of the popup. In terms of usage it's pretty simple to use we  just need to ensure that the popup itself has clientComponent is set to true and includes the animation JavaScript (animateFadingPopup) on a popupOpened event: <af:popup id="pop1" clientComponent="true">   <af:panelWindow title="A Fading Message...">    ...  </af:panelWindow>   <af:clientListener method="animateFadingPopup" type="popupOpened"/> </af:popup>   The popup can be invoked in the normal way using showPopupBehavior or JavaScript, no special code is required there. As a further twist you can include an additional clientAttribute called preFadeDelay to define a delay before the fade itself starts (the default is 5 seconds) . To set the delay to just 2 seconds for example: <af:popup ...>   ...   <af:clientAttribute name="preFadeDelay" value="2"/>   <af:clientListener method="animateFadingPopup" type="popupOpened"/>  </af:popup> The Animation Styles  As before, we have a couple of CSS Styles which define the animation, I've put these into the skin in my case, and, as in the other articles, I've only defined the transitions for WebKit browsers (Chrome, Safari) at the moment. In this case, the fade is timed at 5 seconds in duration. .popupFadeReset {   opacity: 1; } .popupFadeAnimate {   opacity: 0;   -webkit-transition: opacity 5s ease-in-out; } As you can see here, we are achieving the fade by simply setting the CSS opacity property. The JavaScript The final part of the puzzle is, of course, the JavaScript, there are four functions, these are generic (apart from the Style names which, if you've changed above, you'll need to reflect here): The initial function invoked from the popupOpened event,  animateFadingPopup which starts a timer and provides the initial delay before we start to fade the popup. The function that applies the fade animation to the popup - initiatePopupFade. The callback function - closeFadedPopup used to reset the style class and correctly hide the popup so that it can be invoked again and again.   A utility function - findFadeContainer, which is responsible for locating the correct child component of the popup to actually apply the style to. Function - animateFadingPopup This function, as stated is the one hooked up to the popupOpened event via a clientListener. Because of when the code is called it does not actually matter how you launch the popup, or if the popup is re-used from multiple places. All usages will get the fade behavior. 
/**  * Client listener which will kick off the animation to fade the dialog and register  * a callback to correctly reset the popup once the animation is complete  * @param event  */ function animateFadingPopup(event) { var fadePopup = event.getSource();   var fadeCandidate = false;   //Ensure that the popup is initially Opaque   //This handles the situation where the user has dismissed   //the popup whilst it was in the process of fading   var fadeContainer = findFadeContainer(fadePopup);   if (fadeContainer != null) {     fadeCandidate = true;     fadeContainer.setStyleClass("popupFadeReset");   }   //Only continue if we can actually fade this popup   if (fadeCandidate) {   //See if a delay has been specified     var waitTimeSeconds = event.getSource().getProperty('preFadeDelay');     //Default to 5 seconds if not supplied     if (waitTimeSeconds == undefined) {     waitTimeSeconds = 5;     }     // Now call the fade after the specified time     var fadeFunction = function () {     initiatePopupFade(fadePopup);     };     var fadeDelayTimer = setTimeout(fadeFunction, (waitTimeSeconds * 1000));   } } The things to note about this function is the initial check that we have to do to ensure that the container is currently visible and reset it's style to ensure that it is.  This is to handle the situation where the popup has begun the fade, and yet the user has still explicitly dismissed the popup before it's complete and in doing so has prevented the callback function (described later) from executing. In this particular situation the initial display of the dialog will be (apparently) missing it's normal animation but at least it becomes visible to the user (and most users will probably not notice this difference in any case). You'll notice that the style that we apply to reset the  opacity - popupFadeReset, is not applied to the popup component itself but rather the dialog or panelWindow within it. More about that in the description of the next function findFadeContainer(). Finally, assuming that we have a suitable candidate for fading, a JavaScript  timer is started using the specified preFadeDelay wait time (or 5 seconds if that was not supplied). When this timer expires then the main animation styleclass will be applied using the initiatePopupFade() function Function - findFadeContainer As a component, the <af:popup> does not support styleClass attribute, so we can't apply the animation style directly.  Instead we have to look for the container within the popup which defines the window object that can have a style attached.  This is achieved by the following code: /**  * The thing we actually fade will be the only child  * of the popup assuming that this is a dialog or window  * @param popup  * @return the component, or null if this is not valid for fading  */ function findFadeContainer(popup) { var children = popup.getDescendantComponents();   var fadeContainer = children[0];   if (fadeContainer != undefined) {   var compType = fadeContainer.getComponentType();     if (compType == "oracle.adf.RichPanelWindow" || compType == "oracle.adf.RichDialog") {     return fadeContainer;     }   }   return null; }  So what we do here is to grab the first child component of the popup and check its type. Here I decided to limit the fade behaviour to only <af:dialog> and <af:panelWindow>. This was deliberate.  If  we apply the fade to say an <af:noteWindow> you would see the text inside the balloon fade, but the balloon itself would hang around until the fade animation was over and then hide.  
It would of course be possible to make the code smarter to walk up the DOM tree to find the correct <div> to apply the style to in order to hide the whole balloon, however, that means that this JavaScript would then need to have knowledge of the generated DOM structure, something which may change from release to release, and certainly something to avoid. So, all in all, I think that this is an OK restriction and frankly it's windows and dialogs that I wanted to fade anyway, not balloons and menus. You could of course extend this technique and handle the other types should you really want to. One thing to note here is the selection of the first (children[0]) child of the popup. It does not matter if there are non-visible children such as clientListener before the <af:dialog> or <af:panelWindow> within the popup, they are not included in this array, so picking the first element in this way seems to be fine, no matter what the underlying ordering is within the JSF source.  If you wanted a super-robust version of the code you might want to iterate through the children array of the popup to check for the right type, again it's up to you.  Function -  initiatePopupFade  On to the actual fading. This is actually very simple and at it's heart, just the application of the popupFadeAnimate style to the correct component and then registering a callback to execute once the fade is done. /**  * Function which will kick off the animation to fade the dialog and register  * a callback to correctly reset the popup once the animation is complete  * @param popup the popup we are animating  */ function initiatePopupFade(popup) { //Only continue if the popup has not already been dismissed    if (popup.isPopupVisible()) {   //The skin styles that define the animation      var fadeoutAnimationStyle = "popupFadeAnimate";     var fadeAnimationResetStyle = "popupFadeReset";     var fadeContainer = findFadeContainer(popup);     if (fadeContainer != null) {     var fadeContainerReal = AdfAgent.AGENT.getElementById(fadeContainer.getClientId());       //Define the callback this will correctly reset the popup once it's disappeared       var fadeCallbackFunction = function (event) {       closeFadedPopup(popup, fadeContainer, fadeAnimationResetStyle);         event.target.removeEventListener("webkitTransitionEnd", fadeCallbackFunction);       };       //Initiate the fade       fadeContainer.setStyleClass(fadeoutAnimationStyle);       //Register the callback to execute once fade is done       fadeContainerReal.addEventListener("webkitTransitionEnd", fadeCallbackFunction, false);     }   } } I've added some extra checks here though. First of all we only start the whole process if the popup is still visible. It may be that the user has closed the popup before the delay timer has finished so there is no need to start animating in that case. Again we use the findFadeContainer() function to locate the correct component to apply the style to, and additionally we grab the DOM id that represents that container.  This physical ID is required for the registration of the callback function. The closeFadedPopup() call is then registered on the callback so as to correctly close the now transparent (but still there) popup. 
Function -  closeFadedPopup The final function just cleans things up: /**  * Callback function to correctly cancel and reset the style in the popup  * @param popup id of the popup so we can close it properly  * @param container the window / dialog within the popup to actually style  * @param resetStyle the style that sets the opacity back to solid  */ function closeFadedPopup(popup, container, resetStyle) { container.setStyleClass(resetStyle);   popup.cancel(); }  First of all we reset the style to make the popup contents opaque again and then we cancel the popup.  This will ensure that any of your user code that is waiting for a popup cancelled event will actually get the event. Additionally, if you have done this as a modal window / dialog it will ensure that the glasspane is dismissed and you can interact with the UI again.  What's Next? There are several ways in which this technique could be used. I've been working on a popup here, but you could apply the same approach to in-line messages. As this code (in the popup case) is generic it will make a pretty nice declarative component and maybe, if I get time, I'll look at constructing a formal Growl component using a combination of this technique and active data push. Also, I'm sure the above code can be improved a little too.  Specifically, things like registering a popup cancelled listener to handle the style reset so that we don't lose the subtle animation that takes place when the popup is opened in that situation where the user has closed the in-fade dialog.

    Read the article

  • Efficient 4x4 matrix inverse (affine transform)

    - by Budric
    Hi, I was hoping someone can point out an efficient formula for 4x4 affine matrix transform. Currently my code uses cofactor expansion and it allocates a temporary array for each cofactor. It's easy to read, but it's slower than it should be. Note, this isn't homework and I know how to work it out manually using 4x4 co-factor expansion, it's just a pain and not really an interesting problem for me. Also I've googled and came up with a few sites that give you the formula already (http://www.euclideanspace.com/maths/algebra/matrix/functions/inverse/fourD/index.htm). However this one could probably be optimized further by pre-computing some of the products. I'm sure someone came up with the "best" formula for this at one point or another? Thanks.
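    Since the matrix is affine (last row 0 0 0 1), the inverse can be built from the inverse of the upper-left 3x3 block plus the translation column, which is much cheaper than a general 4x4 cofactor expansion; a sketch assuming row-major double[16] storage with the translation in the last column:

        // Invert an affine 4x4 matrix stored row-major in m[16], writing into out[16].
        // Returns false if the upper-left 3x3 block is singular.
        bool invertAffine(const double m[16], double out[16])
        {
            // inverse of the 3x3 linear part via its adjugate
            double a = m[5] * m[10] - m[6] * m[9];
            double b = m[6] * m[8]  - m[4] * m[10];
            double c = m[4] * m[9]  - m[5] * m[8];
            double det = m[0] * a + m[1] * b + m[2] * c;
            if (det == 0.0) return false;
            double inv = 1.0 / det;

            out[0]  = a * inv;
            out[1]  = (m[2] * m[9]  - m[1] * m[10]) * inv;
            out[2]  = (m[1] * m[6]  - m[2] * m[5])  * inv;
            out[4]  = b * inv;
            out[5]  = (m[0] * m[10] - m[2] * m[8])  * inv;
            out[6]  = (m[2] * m[4]  - m[0] * m[6])  * inv;
            out[8]  = c * inv;
            out[9]  = (m[1] * m[8]  - m[0] * m[9])  * inv;
            out[10] = (m[0] * m[5]  - m[1] * m[4])  * inv;

            // new translation is -R_inverse * t
            out[3]  = -(out[0] * m[3] + out[1] * m[7] + out[2]  * m[11]);
            out[7]  = -(out[4] * m[3] + out[5] * m[7] + out[6]  * m[11]);
            out[11] = -(out[8] * m[3] + out[9] * m[7] + out[10] * m[11]);

            // last row stays 0 0 0 1
            out[12] = out[13] = out[14] = 0.0;
            out[15] = 1.0;
            return true;
        }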

    Read the article

  • How do I translate this Matlab bsxfun call to R?

    - by claytontstanley
    I would also (fingers crossed) like the solution to work with R Sparse Matrices in the Matrix package. >> A = [1,2,3,4,5] A = 1 2 3 4 5 >> B = [1;2;3;4;5] B = 1 2 3 4 5 >> bsxfun(@times, A, B) ans = 1 2 3 4 5 2 4 6 8 10 3 6 9 12 15 4 8 12 16 20 5 10 15 20 25 >> EDIT: I would like to do a matrix multiplication of these sparse vectors, and return a sparse array: > class(NRowSums) [1] "dsparseVector" attr(,"package") [1] "Matrix" > class(NColSums) [1] "dsparseVector" attr(,"package") [1] "Matrix" > NRowSums * NColSums (I think) w/o using a non-sparse variable to temporarily store data.
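    For the dense case the bsxfun(@times, A, B) call is just an outer product, which R writes as outer(); the sparse part below is only a sketch, since the best coercion depends on how the dsparseVector objects were built.

        # dense equivalent of bsxfun(@times, A, B): an outer product
        A <- 1:5                 # row vector in the MATLAB example
        B <- 1:5                 # column vector in the MATLAB example
        outer(B, A)              # 5 x 5 matrix, element [i, j] = B[i] * A[j]

        # sparse sketch with the Matrix package: represent the vectors as
        # one-column sparse matrices, then take tcrossprod (x %*% t(y))
        library(Matrix)
        b_sp <- Matrix(B, ncol = 1, sparse = TRUE)
        a_sp <- Matrix(A, ncol = 1, sparse = TRUE)
        tcrossprod(b_sp, a_sp)   # sparse 5 x 5 result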

    Read the article

  • How do I implement a dispatch table in a Perl OO module?

    - by Iain
    I want to put some subs that are within an OO package into an array - also within the package - to use as a dispatch table. Something like this package Blah::Blah; use fields 'tests'; sub new { my($class )= @_; my $self = fields::new($class); $self->{'tests'} = [ $self->_sub1 ,$self->_sub2 ]; return $self; } _sub1 { ... }; _sub2 { ... }; I'm not entirely sure on the syntax for this? $self->{'tests'} = [ $self->_sub1 ,$self->_sub2 ]; or $self->{'tests'} = [ \&{$self->_sub1} ,\&{$self->_sub2} ]; or $self->{'tests'} = [ \&{_sub1} ,\&{_sub2} ]; I don't seem to be able to get this to work within an OO package, whereas it's quite straightforward in a procedural fashion, and I haven't found any examples for OO. Any help is much appreciated, Iain
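    A sketch of one way to express this: store bare code references (no $self-> at build time, since that would call the subs immediately) and pass the object when dispatching. The run_tests method and return values are illustrative only.

        package Blah::Blah;

        use strict;
        use warnings;
        use fields 'tests';

        sub new {
            my ($class) = @_;
            my $self = fields::new($class);

            # references to the subs themselves -- nothing is called here
            $self->{tests} = [ \&_sub1, \&_sub2 ];

            return $self;
        }

        sub _sub1 { my ($self) = @_; return "one"; }
        sub _sub2 { my ($self) = @_; return "two"; }

        sub run_tests {
            my ($self) = @_;
            for my $test (@{ $self->{tests} }) {
                $self->$test();    # dispatch with $self as the invocant
            }
        }

        1;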

    Read the article

  • SQL SERVER – SSIS Parameters in Parent-Child ETL Architectures – Notes from the Field #040

    - by Pinal Dave
    [Notes from Pinal]: SSIS is very well explored subject, however, there are so many interesting elements when we read, we learn something new. A similar concept has been Parent-Child ETL architecture’s relationship in SSIS. Linchpin People are database coaches and wellness experts for a data driven world. In this 40th episode of the Notes from the Fields series database expert Tim Mitchell (partner at Linchpin People) shares very interesting conversation related to how to understand SSIS Parameters in Parent-Child ETL Architectures. In this brief Notes from the Field post, I will review the use of SSIS parameters in parent-child ETL architectures. A very common design pattern used in SQL Server Integration Services is one I call the parent-child pattern.  Simply put, this is a pattern in which packages are executed by other packages.  An ETL infrastructure built using small, single-purpose packages is very often easier to develop, debug, and troubleshoot than large, monolithic packages.  For a more in-depth look at parent-child architectures, check out my earlier blog post on this topic. When using the parent-child design pattern, you will frequently need to pass values from the calling (parent) package to the called (child) package.  In older versions of SSIS, this process was possible but not necessarily simple.  When using SSIS 2005 or 2008, or even when using SSIS 2012 or 2014 in package deployment mode, you would have to create package configurations to pass values from parent to child packages.  Package configurations, while effective, were not the easiest tool to work with.  Fortunately, starting with SSIS in SQL Server 2012, you can now use package parameters for this purpose. In the example I will use for this demonstration, I’ll create two packages: one intended for use as a child package, and the other configured to execute said child package.  In the parent package I’m going to build a for each loop container in SSIS, and use package parameters to pass in a value – specifically, a ClientID – for each iteration of the loop.  The child package will be executed from within the for each loop, and will create one output file for each client, with the source query and filename dependent on the ClientID received from the parent package. Configuring the Child and Parent Packages When you create a new package, you’ll see the Parameters tab at the package level.  Clicking over to that tab allows you to add, edit, or delete package parameters. As shown above, the sample package has two parameters.  Note that I’ve set the name, data type, and default value for each of these.  Also note the column entitled Required: this allows me to specify whether the parameter value is optional (the default behavior) or required for package execution.  In this example, I have one parameter that is required, and the other is not. Let’s shift over to the parent package briefly, and demonstrate how to supply values to these parameters in the child package.  Using the execute package task, you can easily map variable values in the parent package to parameters in the child package. The execute package task in the parent package, shown above, has the variable vThisClient from the parent package mapped to the pClientID parameter shown earlier in the child package.  Note that there is no value mapped to the child package parameter named pOutputFolder.  
Since this parameter has the Required property set to False, we don’t have to specify a value for it, which will cause that parameter to use the default value we supplied when designing the child pacakge. The last step in the parent package is to create the for each loop container I mentioned earlier, and place the execute package task inside it.  I’m using an object variable to store the distinct client ID values, and I use that as the iterator for the loop (I describe how to do this more in depth here).  For each iteration of the loop, a different client ID value will be passed into the child package parameter. The final step is to configure the child package to actually do something meaningful with the parameter values passed into it.  In this case, I’ve modified the OleDB source query to use the pClientID value in the WHERE clause of the query to restrict results for each iteration to a single client’s data.  Additionally, I’ll use both the pClientID and pOutputFolder parameters to dynamically build the output filename. As shown, the pClientID is used in the WHERE clause, so we only get the current client’s invoices for each iteration of the loop. For the flat file connection, I’m setting the Connection String property using an expression that engages both of the parameters for this package, as shown above. Parting Thoughts There are many uses for package parameters beyond a simple parent-child design pattern.  For example, you can create standalone packages (those not intended to be used as a child package) and still use parameters.  Parameter values may be supplied to a package directly at runtime by a SQL Server Agent job, through the command line (via dtexec.exe), or through T-SQL. Also, you can also have project parameters as well as package parameters.  Project parameters work in much the same way as package parameters, but the parameters apply to all packages in a project, not just a single package. Conclusion Of the numerous advantages of using catalog deployment model in SSIS 2012 and beyond, package parameters are near the top of the list.  Parameters allow you to easily share values from parent to child packages, enabling more dynamic behavior and better code encapsulation. If you want me to take a look at your server and its settings, or if your server is facing any issue we can Fix Your SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
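    As a concrete illustration of the T-SQL route mentioned above, supplying a package parameter value through the SSIS catalog looks roughly like this (folder, project and package names are placeholders; pClientID is the parameter from the example):

        DECLARE @execution_id BIGINT;

        EXEC [SSISDB].[catalog].[create_execution]
             @folder_name  = N'MyFolder',
             @project_name = N'MyProject',
             @package_name = N'ChildPackage.dtsx',
             @use32bitruntime = 0,
             @execution_id = @execution_id OUTPUT;

        -- object_type 30 = package parameter (20 would be a project parameter)
        EXEC [SSISDB].[catalog].[set_execution_parameter_value]
             @execution_id = @execution_id,
             @object_type = 30,
             @parameter_name = N'pClientID',
             @parameter_value = 42;

        EXEC [SSISDB].[catalog].[start_execution] @execution_id;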

    Read the article

  • Convert old NuSoap code into PHP core soap functions

    - by Enrique
    Hi, I've been testing NuSOAP with CodeIgniter (PHP framework), but it seems NuSOAP isn't ready to work with the latest PHP 5.3, even if I download a patched NuSOAP version for PHP 5.3. I have the following code: require_once(APPPATH.'libraries/NuSOAP/lib/nusoap'.EXT); //includes nusoap $n_params = array('CityName' => 'San Juan', 'CountryName' => 'Argentina'); $client = new nusoap_client('http://www.webservicex.net/globalweather.asmx?WSDL'); $client->setHTTPProxy("10.2.0.1",6588,"",""); $result = $client->call('GetWeather', $n_params); Can anyone help me convert these calls to PHP's native SOAP functions, including the proxy configuration? Thanks a lot
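    A rough equivalent with PHP's built-in SoapClient; the proxy options mirror the setHTTPProxy() call (credentials left empty as in the question), and the GetWeatherResult property follows the usual ASMX result-naming convention.

        <?php
        $options = array(
            'proxy_host' => '10.2.0.1',
            'proxy_port' => 6588,
            // 'proxy_login' => '...', 'proxy_password' => '...',
            'trace' => true,
        );

        $client = new SoapClient('http://www.webservicex.net/globalweather.asmx?WSDL', $options);

        $params = array('CityName' => 'San Juan', 'CountryName' => 'Argentina');
        $result = $client->GetWeather($params);

        var_dump($result->GetWeatherResult);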

    Read the article

  • HDFC Bank's Journey to Oracle Private Database Cloud

    - by Nilesh Agrawal
    One of the key takeaways from a recent post by Sushil Kumar is the importance of business initiative that drives the transformational journey from legacy IT to enterprise private cloud. The journey that leads to a agile, self-service and efficient infrastructure with reduced complexity and enables IT to deliver services more closely aligned with business requirements. Nilanjay Bhattacharjee, AVP, IT of HDFC Bank presented a real-world case study based on one such initiative in his Oracle OpenWorld session titled "HDFC BANK Journey into Oracle Database Cloud with EM 12c DBaaS". The case study highlighted in this session is from HDFC Bank’s Lending Business Segment, which comprises roughly 50% of Bank’s top line. Bank’s Lending Business is always under pressure to launch “New Schemes” to compete and stay ahead in this segment and IT has to keep up with this challenging business requirement. Lending related applications are highly dynamic and go through constant changes and every single and minor change in each related application is required to be thoroughly UAT tested certified before they are certified for production rollout. This leads to a constant pressure in IT for rapid provisioning of UAT databases on an ongoing basis to enable faster time to market. Nilanjay joined Sushil Kumar, VP, Product Strategy, Oracle, during the Enterprise Manager general session at Oracle OpenWorld 2012. Let's watch what Nilanjay had to say about their recent Database cloud deployment. “Agility” in launching new business schemes became the key business driver for private database cloud adoption in the Bank. Nilanjay spent an hour discussing it during his session. Let's look at why Database-as-a-Service(DBaaS) model was need of the hour in this case  - Average 3 days to provision UAT Database for Loan Management Application Silo’ed UAT environment with Average 30% utilization Compliance requirement consume UAT testing resources DBA activities leads to $$ paid to SI for provisioning databases manually Overhead in managing configuration drift between production and test environments Rollout impact/delay on new business initiatives The private database cloud implementation progressed through 4 fundamental phases - Standardization, Consolidation, Automation, Optimization of UAT infrastructure. Project scoping was carried out and end users and stakeholders were engaged early on right from planning phase and including all phases of implementation. Standardization and Consolidation phase involved multiple iterations of planning to first standardize on infrastructure, db versions, patch levels, configuration, IT processes etc and with database level consolidation project onto Exadata platform. It was also decided to have existing AIX UAT DB landscape covered and EM 12c DBaaS solution being platform agnostic supported this model well. Automation and Optimization phase provided the necessary Agility, Self-Service and efficiency and this was made possible via EM 12c DBaaS. EM 12c DBaaS Self-Service/SSA Portal was setup with required zones, quotas, service templates, charge plan defined. There were 2 zones implemented - Exadata zone  primarily for UAT and benchmark testing for databases running on Exadata platform and second zone was for AIX setup to cover other databases those running on AIX. Metering and Chargeback/Showback capabilities provided business and IT the framework for cloud optimization and also visibility into cloud usage. 
More details on UAT cloud implementation, related building blocks and EM 12c DBaaS solution are covered in Nilanjay's OpenWorld session here. Some of the key Benefits achieved from UAT cloud initiative are - New business initiatives can be easily launched due to rapid provisioning of UAT Databases [ ~3 hours ] Drastically cut down $$ on SI for DBA Activities due to Self-Service Effective usage of infrastructure leading to  better ROI Empowering  consumers to provision database using Self-Service Control on project schedule with DB end date aligned to project plan submitted during provisioning Databases provisioned through Self-Service are monitored in EM and auto configured for Alerts and KPI Regulatory requirement of database does not impact existing project in queue This table below shows typical list of activities and tasks involved when a end user requests for a UAT database. EM 12c DBaaS solution helped reduce UAT database provisioning time from roughly 3 days down to 3 hours and this timing also includes provisioning time for database with production scale data (ranging from 250 G to 2 TB of data) - And it's not just about time to provision,  this initiative has enabled an agile, efficient and transparent UAT environment where end users are empowered with real control of cloud resources and IT's role is shifted as enabler of strategic services instead of being administrator of all user requests. The strong collaboration between IT and business community right from planning to implementation to go-live has played the key role in achieving this common goal of enterprise private cloud. Finally, real cloud is here and this cloud is accompanied with rain (business benefits) as well ! For more information, please go to Oracle Enterprise Manager  web page or  follow us at :  Twitter | Facebook | YouTube | Linkedin | Newsletter

    Read the article

  • How do you give variables reference in javascript?

    - by Eric
    I want to give variables references in JavaScript. For example, I want to do: a=1 b=a a=2 and have b=2, changing along with a. Is this possible in JavaScript? If it isn't, is there a way to do something like a.onchange = function () {b=a}? What I wanted to do was make a function like makeobject which makes an object, puts it in an array and then returns it, like function makeobject() { objects[objects.length] = {blah:'whatever',foo:8}; } so that I could do a=makeobject() b=makeobject() c=makeobject() and later in the code do for (i in objects) { objects[i].blah = 'whatev'; } and also change the values of a, b and c.
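    In short: primitives (numbers, strings, booleans) are copied on assignment, while objects are shared by reference, so the usual trick is to wrap the value in an object; a small sketch of that together with the makeobject idea from the question:

        // primitives are copied
        var a = 1;
        var b = a;
        a = 2;          // b is still 1

        // objects are shared by reference
        var objects = [];

        function makeobject() {
            var obj = { blah: 'whatever', foo: 8 };
            objects.push(obj);
            return obj;             // the same object that now sits in the array
        }

        var x = makeobject();
        var y = makeobject();

        for (var i = 0; i < objects.length; i++) {
            objects[i].blah = 'whatev';
        }
        // x.blah and y.blah are both 'whatev' now -- no copying happened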

    Read the article

  • What is the C# equivalent of this Excel VBA code for Shapes?

    - by code4life
    This is the VBA code for an Excel template, which I'm trying to convert to C# in a VSTO project I'm working on. By the way, it's a VSTO add-in: Dim addedShapes() As Variant ReDim addedShapes(1) addedShapes(1) = aBracket.Name ReDim Preserve addedShapes(UBound(addedShapes) + 1) addedShapes(UBound(addedShapes)) = "unique2" Set tmpShape = Me.Shapes.Range(addedShapes).Group At this point, I'm stumped by the addedShapes(), not sure what this is all about. Update: Matti mentioned that addedShapes() represents a variant array in VBA. So now I'm wondering what the contents of addedShapes() should be. Would this be the correct way to call the Shapes.Range() call in C#? List<string> addedShapes = new List<string>(); ... Shape tmpShape = worksheet.Shapes.get_Range (addedShapes.Cast<object>().ToArray()).Group(); I'd appreciate anyone who's worked with VBA and C# willing to make a comment on my question & problem!
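    To the addedShapes() question: it is just a dynamically resized Variant array of shape names that gets fed to Shapes.Range() to build a ShapeRange for grouping. A hedged C# sketch of the same idea follows (written from memory rather than tested; the interop indexer is assumed to accept an object[] of shape names, and Excel is the usual Microsoft.Office.Interop.Excel alias):

        // using Excel = Microsoft.Office.Interop.Excel;

        // collect the names of the shapes to group
        var addedShapes = new List<string>();
        addedShapes.Add(aBracket.Name);
        addedShapes.Add("unique2");

        // Shapes.Range accepts an array of shape names and returns a ShapeRange
        object[] names = addedShapes.Cast<object>().ToArray();
        Excel.ShapeRange range = worksheet.Shapes.get_Range(names);
        Excel.Shape tmpShape = range.Group();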

    Read the article

< Previous Page | 651 652 653 654 655 656 657 658 659 660 661 662  | Next Page >