Search Results

Search found 17406 results on 697 pages for 'option explicit'.


  • Silverlight? WPF? or Windows Form?

    - by Amit
    After Silverlight 4.0 was released alongside the new WPF, I am confused by these technologies: Silverlight? WPF? Windows Forms?

    The main goals we want to achieve for a BIG business project are: performance, security, and platform independence. If I consider all three points, then only Silverlight is an option, as I don't want people buying an emulator on MacOS for WPF or Windows Forms. Now, how good is Silverlight for business applications? I was completely against it when Silverlight 2.0 was on the market, but now it is Silverlight 4.0 and they have provided many new features (though still basics) that are required in any challenging business application.

    Comparing Silverlight and WPF:

    - Silverlight and WPF are both very new technologies, and if I had to compare the two I would prefer WPF, because it can be considered stable and mature. But it is not the same as Windows Forms.
    - If I go with Silverlight, I am sure we will have to keep updating to the latest version of Silverlight. I remember when we were developing software for version 2.0: we had to create our own framework with dynamic DLL loading, and then a navigation concept, and everything changed once Silverlight 3.0 came out. I don't want this to happen with this new product.
    - If we go with WPF, we don't get platform independence.

    Now, why don't we just focus on building it in WPF and then move to Silverlight? Someone (Tim?) from Microsoft has said that the idea is to make Silverlight as close as possible to WPF. But if that is the case, why is the XAML structure different? I won't be convinced by the argument that the .NET framework for Silverlight is too small... is the difference just coming from the namespaces?

    I was searching on this subject and found "Microsoft WPF-Silverlight Comparison Whitepaper v1.1.pdf". This guide is very good and gives you the ins and outs of how to build common apps that run on both. But again, it compares Silverlight 2, not 4.

    I am sure many architects/developers/project managers must be facing similar questions in their organizations and want to initiate this discussion, if it has not been started already :). We've still got 2 weeks to make this decision, so I'm expecting everyone to participate, gurus?

    Read the article

  • Call phpexcel from joomla

    - by Oscar Calderon
    I have a problem with PHPExcel and Joomla. I'm developing some filter forms to load Excel reports, so I used the PHPExcel library to do this. Right now I have only one report, and it works fine. But then I uploaded it into Joomla using a PHP pages component that allows me to put PHP files inside Joomla and call them. When I put them there, I changed the form that calls the PHP that generates the Excel report a little; I call the PHP using a link like this:

        h**p://www.whiblix.com/index.php?option=com_php&Itemid=24

    That is, calling it from Joomla, not the PHP directly. If I wanted to call the PHP directly I could use this path:

        h**p://www.whiblix.com/components/com_php/files/repImportaciones.php

    What's the problem? When I call the PHP that generates the Excel through Joomla, the Excel file that is downloaded is corrupt and only shows symbols in one cell when I open it. But if I call the PHP directly, the report is generated fine. I could call the PHP directly; the problem is that then I can't use this line of code:

        defined( '_JEXEC' ) or die( 'Restricted access' );

    which is used to deny direct access to the PHP. It doesn't work when called directly, because of the security check. Where's the problem? This is the code of the PHP that generates the report (omitting the code that generates the rows and cells):

        <?php
        //defined( '_JEXEC' ) or die( 'Restricted access' );

        /** Error reporting */
        error_reporting(E_ALL);
        date_default_timezone_set('Europe/London');

        require_once 'Classes/PHPExcel.php';

        // Create new PHPExcel object
        $objPHPExcel = new PHPExcel();

        // Set properties
        $objPHPExcel->getProperties()->setCreator("Maarten Balliauw")
            ->setLastModifiedBy("Maarten Balliauw")
            ->setTitle("Office 2007 XLSX Test Document")
            ->setSubject("Office 2007 XLSX Test Document")
            ->setDescription("Test document for Office 2007 XLSX, generated using PHP classes.")
            ->setKeywords("office 2007 openxml php")
            ->setCategory("Test result file");

        // Rename sheet
        $objPHPExcel->getActiveSheet()->setTitle('Reporte de Importaciones');

        // Set active sheet index to the first sheet, so Excel opens this as the first sheet
        $objPHPExcel->setActiveSheetIndex(0);

        // Redirect output to a client's web browser (Excel5)
        header('Content-Type: application/vnd.ms-excel');
        header('Content-Disposition: attachment;filename="repPrueba.xls"');
        header('Cache-Control: max-age=0');

        $objWriter = PHPExcel_IOFactory::createWriter($objPHPExcel, 'Excel5');
        $objWriter->save('php://output');
        exit;
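
    A common cause of this symptom is the CMS emitting its own template output (or stray whitespace) around the binary stream, which corrupts the .xls file. A minimal sketch of one possible fix, assuming the component lets the script take over the response, is to discard any open output buffers before sending the headers:

        <?php
        // Hedged sketch: flush and discard buffered CMS output so the
        // binary stream starts clean (buffer nesting is an assumption).
        while (ob_get_level() > 0) {
            ob_end_clean();
        }

        header('Content-Type: application/vnd.ms-excel');
        header('Content-Disposition: attachment;filename="repPrueba.xls"');
        header('Cache-Control: max-age=0');

        $objWriter = PHPExcel_IOFactory::createWriter($objPHPExcel, 'Excel5');
        $objWriter->save('php://output');
        exit;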

    Read the article

  • Autocomplete server-side implementation

    - by toluju
    What is a fast and efficient way to implement the server-side component for an autocomplete feature in an HTML input box?

    I am writing a service to autocomplete user queries in our web interface's main search box, and the completions are displayed in an ajax-powered dropdown. The data we are running queries against is simply a large table of concepts our system knows about, which matches roughly with the set of Wikipedia page titles. For this service, speed is obviously of utmost importance, as responsiveness of the web page is important to the user experience.

    The current implementation simply loads all concepts into memory in a sorted set, and performs a simple log(n) lookup on a user keystroke. The tailset is then used to provide additional matches beyond the closest match. The problem with this solution is that it does not scale. It is currently running up against the VM heap space limit (I've set -Xmx2g, which is about the most we can push on our 32-bit machines), and this prevents us from expanding our concept table or adding more functionality. Switching to 64-bit VMs on machines with more memory isn't an immediate option. I've been hesitant to start working on a disk-based solution, as I am concerned that disk seek time will kill performance. Are there possible solutions that will let me scale better, either entirely in memory or with some fast disk-backed implementation?

    Edits:

    @Gandalf: For our use case it is important that the autocompletion is comprehensive and isn't just extra help for the user. As for what we are completing, it is a list of concept-type pairs. For example, possible entries are [("Microsoft", "Software Company"), ("Jeff Atwood", "Programmer"), ("StackOverflow.com", "Website")]. We are using Lucene for the full search once a user selects an item from the autocomplete list, but I am not yet sure Lucene would work well for the autocomplete itself.

    @Glen: No databases are being used here. When I'm talking about a table I just mean the structured representation of my data.

    @Jason Day: My original implementation of this was to use a trie, but the memory bloat with that was actually worse than the sorted set, due to needing a large number of object references. I'll read up on ternary search trees to see if they could be of use.
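
    One memory-lean direction (a sketch, not a drop-in replacement for the existing service) is to replace the sorted set of objects with a single sorted String[] and binary-search for the start of the prefix range; per-entry overhead drops to one reference per string:

        import java.util.Arrays;

        // Sketch (assumed data layout): concepts held in one sorted,
        // deduplicated String[] instead of a TreeSet.
        public class PrefixIndex {
            private final String[] sorted;

            public PrefixIndex(String[] concepts) {
                sorted = concepts.clone();
                Arrays.sort(sorted);
            }

            // Return up to max completions starting with the given prefix.
            public String[] complete(String prefix, int max) {
                int i = Arrays.binarySearch(sorted, prefix);
                if (i < 0) i = -i - 1; // not found: use the insertion point
                int end = i;
                while (end < sorted.length && end - i < max
                        && sorted[end].startsWith(prefix)) {
                    end++;
                }
                return Arrays.copyOfRange(sorted, i, end);
            }
        }

    The concept-type pairs could be carried in a parallel array indexed by the same positions, keeping the searched structure itself to plain strings.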

    Read the article

  • Creating .lib files in CUDA Toolkit 5

    - by user1683586
    I am taking my first faltering steps with CUDA Toolkit 5.0 RC using VS2010. Separate compilation has me confused. I tried to set up a project as a static library (.lib), but when I try to build it, it does not create a device-link.obj and I don't understand why. For instance, there are 2 files. A caller function that uses a function f:

        #include "thrust\host_vector.h"
        #include "thrust\device_vector.h"
        using namespace thrust::placeholders;

        extern __device__ double f(double x);

        struct f_func {
            __device__ double operator()(const double& x) const {
                return f(x);
            }
        };

        void test(const int len, double * data, double * res)
        {
            thrust::device_vector<double> d_data(data, data + len);
            thrust::transform(d_data.begin(), d_data.end(), d_data.begin(), f_func());
            thrust::copy(d_data.begin(), d_data.end(), res);
        }

    And a library file that defines f:

        __device__ double f(double x)
        {
            return x + 2.0;
        }

    If I set the option "generate relocatable device code" to No, the first file will not compile due to the unresolved extern function f. If I set it to -rdc, it will compile, but does not produce a device-link.obj file, and so the linker fails. If I put the definition of f into the first file and delete the second, it builds successfully, but then it isn't separate compilation anymore. How can I build a static library like this with separate source files?

    [Updated here] I called the first caller file "caller.cu" and the second "libfn.cu". The compiler lines that VS2010 outputs (which I don't fully understand) are (for caller):

        nvcc.exe -ccbin "C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\bin" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v5.0\include" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v5.0\include" -G --keep-dir "Debug" -maxrregcount=0 --machine 32 --compile -g -D_MBCS -Xcompiler "/EHsc /W3 /nologo /Od /Zi /RTC1 /MDd " -o "Debug\caller.cu.obj" "G:\Test_Linking\caller.cu" -clean

    and the same for libfn, then:

        nvcc.exe -gencode=arch=compute_20,code=\"sm_20,compute_20\" --use-local-env --cl-version 2010 -ccbin "C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\bin" -rdc=true -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v5.0\include" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v5.0\include" -G --keep-dir "Debug" -maxrregcount=0 --machine 32 --compile -g -D_MBCS -Xcompiler "/EHsc /W3 /nologo /Od /Zi /RTC1 /MDd " -o "Debug\caller.cu.obj" "G:\Test_Linking\caller.cu"

    and again for libfn.
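
    For reference, a hedged sketch of the equivalent plain command-line build (file names and the sm_20 target are assumptions). With nvcc's -lib mode the objects are only archived; the device-link step is deferred to whichever application later links against the library, which would explain why no device-link.obj appears while building the .lib itself:

        # Compile both units with relocatable device code, then archive:
        nvcc -arch=sm_20 -rdc=true -c caller.cu -o caller.obj
        nvcc -arch=sm_20 -rdc=true -c libfn.cu -o libfn.obj
        nvcc -arch=sm_20 -lib caller.obj libfn.obj -o test.lib

        # The device link happens when an application consumes the library:
        nvcc -arch=sm_20 main.obj test.lib -o app.exe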

    Read the article

  • What are the Options for Storing Hierarchical Data in a Relational Database?

    - by orangepips
    Good Overviews

    - One more Nested Intervals vs. Adjacency List comparison: the best comparison of Adjacency List, Materialized Path, Nested Set and Nested Interval I've found.
    - Models for hierarchical data: slides with good explanations of tradeoffs and example usage.
    - Representing hierarchies in MySQL: very good overview of Nested Set in particular.
    - Hierarchical data in RDBMSs: most comprehensive and well organized set of links I've seen, but not much in the way of explanation.

    Options

    Ones I am aware of, and their general features:

    Adjacency List:
    - Columns: ID, ParentID
    - Easy to implement.
    - Cheap node moves, inserts, and deletes.
    - Expensive to find level (can store as a computed column), ancestry & descendants (a Bridge Hierarchy combined with a level column can solve this), path (a Lineage Column can solve this).
    - Use Common Table Expressions in those databases that support them to traverse (see the sketch after this list).

    Nested Set (a.k.a. Modified Preorder Tree Traversal):
    - First described by Joe Celko; covered in depth in his book "Trees and Hierarchies in SQL for Smarties".
    - Columns: Left, Right
    - Cheap level, ancestry, descendants.
    - Compared to Adjacency List: moves, inserts and deletes are more expensive.
    - Requires a specific sort order (e.g. created), so sorting all descendants in a different order requires additional work.

    Nested Intervals:
    - A combination of Nested Sets and Materialized Path, where the left/right columns are floating point decimals instead of integers and encode the path information.

    Bridge Table (a.k.a. Closure Table; there are some good ideas about how to use triggers for maintaining this approach):
    - Columns: ancestor, descendant
    - Stands apart from the table it describes.
    - Can include some nodes in more than one hierarchy.
    - Cheap ancestry and descendants (albeit not in any particular order).
    - For complete knowledge of a hierarchy it needs to be combined with another option.

    Flat Table:
    - A modification of the Adjacency List that adds a Level and Rank (e.g. ordering) column to each record.
    - Expensive moves and deletes.
    - Cheap ancestry and descendants.
    - Good use: threaded discussions - forums / blog comments.

    Lineage Column (a.k.a. Materialized Path, Path Enumeration):
    - Column: lineage (e.g. /parent/child/grandchild/etc...)
    - Limits how deep the hierarchy can be.
    - Descendants are cheap (e.g. LEFT(lineage, #) = '/enumerated/path').
    - Ancestry is tricky (database-specific queries).

    Database Specific Notes

    - MySQL: use session variables for the Adjacency List.
    - Oracle: use CONNECT BY to traverse Adjacency Lists.
    - PostgreSQL: ltree datatype for Materialized Path.
    - SQL Server: general summary - 2008 offers the HierarchyId data type, which appears to help with the Lineage Column approach and expands the depth that can be represented.
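
    As a hedged illustration of the CTE traversal mentioned above (table and column names are assumptions; SQL Server and Oracle omit the RECURSIVE keyword), an Adjacency List can be walked like this:

        -- Sketch: depth-first expansion of an Adjacency List hierarchy.
        WITH RECURSIVE tree AS (
            SELECT id, parent_id, name, 1 AS level
            FROM category
            WHERE parent_id IS NULL          -- start at the roots
            UNION ALL
            SELECT c.id, c.parent_id, c.name, t.level + 1
            FROM category AS c
            JOIN tree AS t ON c.parent_id = t.id
        )
        SELECT * FROM tree ORDER BY level;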

    Read the article

  • Create Duplicate Records on SELECT for Calendar Date Range

    - by peterallcdn
    Hey all, I've built a pretty shnazzy calendar system, but there is one tweak that I need to make so that I'm completely happy with it.

    My calendar has three tables:

    - calevents - The calendared event.
    - caldates - The occurrences and date-range of each occurrence for each event.
    - calcats - The categories that can be applied to an event.

    The short: For each calevent, there can be many caldates, one for each occurrence of the calevent. So a calevent that repeats weekly and spans 3 days might have caldates like this:

        date_id  date_eid  date_start   date_end
        2        37        2010-06-21   2010-06-23
        3        37        2010-06-28   2010-06-30
        7        37        2010-07-05   2010-07-07
        9        37        2010-07-12   2010-07-14

    What I want to do, when selecting all the caldates for a specified month such as 2010-06, is to return not just the two records above, but instead a record for each date in the range of date_start and date_end for each caldate. So if I searched for 2010-06, I would get:

        date_id  date_eid  date_start   date_end     date_day
        2        37        2010-06-21   2010-06-23   2010-06-21
        2        37        2010-06-21   2010-06-23   2010-06-22
        2        37        2010-06-21   2010-06-23   2010-06-23
        3        37        2010-06-28   2010-06-30   2010-06-28
        3        37        2010-06-28   2010-06-30   2010-06-29
        3        37        2010-06-28   2010-06-30   2010-06-30

    The long: The reason I want to do this is so that when displaying a list of events (calevents) for a specified month, an occurrence (caldates) of that event will be displayed for EACH of the days it spans. I could do this with PHP by looping through each day of the current month and displaying a copy of each caldate if the month day falls between date_start and date_end. But doing it this way would prevent me from using record pagination if needed. For example, if for a specified month the following caldates were returned:

        date_id  date_eid  date_start   date_end
        2        37        2010-06-21   2010-06-27
        94       53        2010-06-09   2010-07-08

    record pagination would see this as only 2 records ("rows"), but looping through them with PHP would generate 29 "rows". So I figure that if I use MySQL to create each row instead of PHP, I can achieve the same thing AND still be able to use pagination if a month has a lot of events/dates. As far as performance goes, I'm not sure which option is more efficient. Both would send the same amount of info to the browser, so it's really only the work required to generate the info that matters.

    My current query, which fetches all the occurrences for a specified month and (to make things just a little more complicated) joins them with their event and category, looks like this:

        $sql_to_execute = "
            SELECT
                date_id, date_eid, date_start, date_end,
                event_id, event_title, event_category, event_private, event_location,
                SUBSTRING_INDEX(event_detailsstripped, ' ', 40) AS event_detailsstripped,
                event_time, event_starttime, event_endtime, event_active,
                cat_colour
            FROM
                ( caldates LEFT JOIN calevents ON caldates.date_eid = calevents.event_id )
                LEFT JOIN calcats ON calevents.event_category = calcats.cat_id
            WHERE
                date_start <= '".mysql_real_escape_string($dbi_list_end_date)."'
                AND date_end >= '".mysql_real_escape_string($dbi_list_start_date)."'
                ".$dbi_category."
            ORDER BY date_start ASC
        ";

    Any help or advice would be greatly appreciated! Thanks, Peter
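
    One way to generate the per-day rows on the MySQL side is a sketch like the following, assuming a small helper table seq(n) holding the integers 0..30 (or however long an occurrence can span). Each occurrence joins against as many integers as it has days, which keeps LIMIT/OFFSET pagination usable:

        -- Hedged sketch: expand each date range into one row per day.
        SELECT d.date_id, d.date_eid, d.date_start, d.date_end,
               DATE_ADD(d.date_start, INTERVAL s.n DAY) AS date_day
        FROM caldates AS d
        JOIN seq AS s
          ON s.n <= DATEDIFF(d.date_end, d.date_start)
        WHERE d.date_start <= '2010-06-30'
          AND d.date_end   >= '2010-06-01'
        ORDER BY date_day ASC;

    The event/category joins and the month bounds from the existing query would slot in unchanged; an extra predicate on date_day can clip days that fall outside the requested month.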

    Read the article

  • Choosing a distributed shared memory solution

    - by mindas
    I have a task to build a prototype for a massively scalable distributed shared memory (DSM) app. The prototype would only serve as a proof-of-concept, but I want to spend my time most effectively by picking the components which would be used in the real solution later on.

    The aim of this solution is to take data input from an external source, churn it, and make the result available to a number of frontends. Those "frontends" would just take the data from the cache and serve it without extra processing. The number of frontend hits on this data can literally be millions per second.

    The data itself is very volatile; it can (and does) change quite rapidly. However, the frontends should see "old" data until the newest has been processed and cached. The processing and writing is done by a single (redundant) node, while the other nodes only read the data. In other words: no read-through behaviour.

    I was looking into solutions like memcached; however, this particular one doesn't fulfil all our requirements, which are listed below:

    1. The solution must at least have a Java client API which is reasonably well maintained, as the rest of the app is written in Java and we are seasoned Java developers.
    2. The solution must be totally elastic: it should be possible to add new nodes without restarting other nodes in the cluster.
    3. The solution must be able to handle failover. Yes, I realize this means some overhead, but the overall served data size isn't big (1G max), so this shouldn't be a problem. By "failover" I mean seamless execution without hardcoding/changing server IP address(es), as in memcached clients when a node goes down.
    4. Ideally it should be possible to specify the degree of data overlap (e.g. how many copies of the same data should be stored in the DSM cluster).
    5. There is no need to permanently store all the data, but there might be a need for post-processing of some of the data (e.g. serialization to the DB).
    6. Price. Obviously we prefer free/open source, but we're happy to pay a reasonable amount if a solution is worth it. In any case, a paid 24hr/day support contract is a must.
    7. The whole thing has to be hosted in our data centers, so SaaS offerings like Amazon SimpleDB are out of scope. We would only consider this if no other options were available.
    8. Ideally the solution would be strictly consistent (as in CAP); however, eventual consistency can be considered as an option.

    Thanks in advance for any ideas.

    Read the article

  • What is the best practice for adding persistence to an MVC model?

    - by etheros
    I'm in the process of implementing an ultra-light MVC framework in PHP. It seems to be a common opinion that the loading of data from a database, file etc. should be independent of the Model, and I agree. What I'm unsure of is the best way to link this "data layer" into MVC.

    Datastore interacts with Model:

        //controller
        public function update()
        {
            $model = $this->loadModel('foo');
            $data = $this->loadDataStore('foo', $model);

            $data->loadBar(9);     //loads data and populates Model
            $model->setBar('bar');
            $data->save();         //reads data from Model and saves
        }

    Controller mediates between Model and Datastore. Seems a bit verbose, and requires the model to know that a datastore exists:

        //controller
        public function update()
        {
            $model = $this->loadModel('foo');
            $data = $this->loadDataStore('foo');
            $model->setDataStore($data);

            $model->getDataStore()->loadBar(9); //loads data and populates Model
            $model->setBar('bar');
            $model->getDataStore()->save();     //reads data from Model and saves
        }

    Datastore extends Model. What happens if we want to save a Model extending a database datastore to a flatfile datastore?

        //controller
        public function update()
        {
            $model = $this->loadHybrid('foo'); //get_class == Datastore_Database
            $model->loadBar(9);                //loads data and populates
            $model->setBar('bar');
            $model->save();                    //saves
        }

    Model extends Datastore. This allows for Model portability, but it seems wrong to extend like this. Further, the datastore cannot make use of any of the Model's methods:

        //controller extends model
        public function update()
        {
            $model = $this->loadHybrid('foo'); //get_class == Model
            $model->loadBar(9);                //loads data and populates
            $model->setBar('bar');
            $model->save();                    //saves
        }

    EDIT: Model communicates with DAO:

        //model
        public function __construct($dao)
        {
            $this->dao = $dao;
        }

        //model
        public function setBar($bar)
        {
            //a bunch of business logic goes here
            $this->dao->setBar($bar);
        }

        //controller
        public function update()
        {
            $model = $this->loadModel('foo');
            $model->setBar('baz');
            $model->save();
        }

    Any input on the "best" option - or an alternative - is most appreciated.
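
    For comparison, here is a sketch of the data-mapper flavour of the DAO option (all class and column names are hypothetical): the model stays persistence-ignorant, and a mapper object owns the load/save responsibility instead of the model calling the DAO itself:

        <?php
        // Hedged sketch: a mapper that hydrates and persists Foo models.
        class FooMapper
        {
            private $pdo;

            public function __construct(PDO $pdo)
            {
                $this->pdo = $pdo;
            }

            public function find($id)
            {
                $stmt = $this->pdo->prepare('SELECT id, bar FROM foo WHERE id = ?');
                $stmt->execute(array($id));
                $row = $stmt->fetch(PDO::FETCH_ASSOC);

                $model = new Foo();          // plain model, no DB knowledge
                $model->setBar($row['bar']);
                return $model;
            }

            public function save(Foo $model)
            {
                $stmt = $this->pdo->prepare('UPDATE foo SET bar = ? WHERE id = ?');
                $stmt->execute(array($model->getBar(), $model->getId()));
            }
        }

    Swapping the database for a flatfile then means writing a second mapper with the same find/save interface, without touching the model.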

    Read the article

  • Discovering a functional algorithm from a mutable one

    - by Garrett Rowe
    This isn't necessarily a Scala question; it's a design question that has to do with avoiding mutable state, functional thinking and that sort of thing. It just happens that I'm using Scala. Given this set of requirements:

    - Input comes from an essentially infinite stream of random numbers between 1 and 10.
    - Final output is either SUCCEED or FAIL.
    - There can be multiple objects 'listening' to the stream at any particular time, and they can begin listening at different times, so they all may have a different concept of the 'first' number; therefore listeners to the stream need to be decoupled from the stream itself.

    Pseudocode:

        if (first number == 1) SUCCEED
        else if (first number >= 9) FAIL
        else {
            first = first number
            rest = rest of stream
            for each (n in rest) {
                if (n == 1) FAIL
                else if (n == first) SUCCEED
                else continue
            }
        }

    Here is a possible mutable implementation:

        sealed trait Result
        case object Fail extends Result
        case object Succeed extends Result
        case object NoResult extends Result

        class StreamListener {
          private var target: Option[Int] = None

          def evaluate(n: Int): Result = target match {
            case None =>
              if (n == 1) Succeed
              else if (n >= 9) Fail
              else {
                target = Some(n)
                NoResult
              }
            case Some(t) =>
              if (n == t) Succeed
              else if (n == 1) Fail
              else NoResult
          }
        }

    This will work, but it smells to me. StreamListener.evaluate is not referentially transparent, and the use of the NoResult token just doesn't feel right. It does have the advantage, though, of being clear and easy to use/code. Besides, there has to be a functional solution to this, right? I've come up with 2 other possible options:

    - Having evaluate return a (possibly new) StreamListener, but this means I would have to make Result a subtype of StreamListener, which doesn't feel right.
    - Letting evaluate take a Stream[Int] as a parameter and letting the StreamListener be in charge of consuming as much of the Stream as it needs to determine failure or success. The problem I see with this approach is that the class that registers the listeners should query each listener after each number is generated and take appropriate action immediately upon failure or success. With this approach, I don't see how that could happen, since each listener forces evaluation of the Stream until it completes evaluation. There is no concept here of a single number generation.

    Is there any standard Scala/FP idiom I'm overlooking here?
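
    One functional shape for this (a sketch using Either instead of a NoResult token; whether it reads better is a taste call) is to make the listener an immutable value whose evaluate returns either the next listener state or a verdict:

        // Sketch: evaluate is referentially transparent; Left carries the
        // next state, Right carries a final verdict.
        sealed trait Result
        case object Fail extends Result
        case object Succeed extends Result

        case class StreamListener(target: Option[Int] = None) {
          def evaluate(n: Int): Either[StreamListener, Result] = target match {
            case None =>
              if (n == 1) Right(Succeed)
              else if (n >= 9) Right(Fail)
              else Left(StreamListener(Some(n)))
            case Some(t) =>
              if (n == t) Right(Succeed)
              else if (n == 1) Right(Fail)
              else Left(this)
          }
        }

    The registering class then folds each generated number through its listeners, replacing a listener on Left and reporting immediately on Right, which keeps the per-number stepping from the mutable version.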

    Read the article

  • Simultaneous connections with PHP and SOAP?

    - by Dov
    I'm new to using SOAP and understand only the utmost basics of it. I create a client resource/connection, I then run some queries in a loop, and I'm done. The issue I am having is that when I increase the iterations of the loop, e.g. from 100 to 1000, it seems to run out of memory and throws an internal server error.

    How could I run either a) multiple simultaneous connections, or b) create a connection, run 100 iterations, close the connection, create a connection, etc.? "a)" looks to be the better option, but I have no clue how to get it up and running whilst keeping memory usage (I assume from opening and closing connections) at a minimum. Thanks in advance!

    index.php:

        <?php
        // set loops to 0
        $loops = 0;

        // connection credentials and settings
        $location = 'https://theconsole.com/';
        $wsdl = $location.'?wsdl';
        $username = 'user';
        $password = 'pass';

        // include the console and client classes
        include "class_console.php";
        include "class_client.php";

        // create a client resource / connection
        $client = new Client($location, $wsdl, $username, $password);

        while ($loops <= 100) {
            $dostuff;
        }
        ?>

    class_console.php:

        <?php
        class Console
        {
            // the connection resource
            private $connection = NULL;

            /**
             * When this object is instantiated a connection will be made to the console
             */
            public function __construct($location, $wsdl, $username, $password, $proxyHost = NULL, $proxyPort = NULL)
            {
                if (is_null($proxyHost) || is_null($proxyPort))
                    $connection = new SoapClient($wsdl, array('login' => $username, 'password' => $password));
                else
                    $connection = new SoapClient($wsdl, array('login' => $username, 'password' => $password, 'proxy_host' => $proxyHost, 'proxy_port' => $proxyPort));

                $connection->__setLocation($location);
                $this->connection = $connection;
                return $this->connection;
            }

            /**
             * Will print any type of data to screen, where supported by print_r
             *
             * @param $var - The data to print to screen
             * @return $this->connection - The connection resource
             **/
            public function screen($var)
            {
                print '<pre>';
                print_r($var);
                print '</pre>';
                return $this->connection;
            }

            /**
             * Returns a server / connection resource
             *
             * @return $this->connection - The connection resource
             */
            public function srv()
            {
                return $this->connection;
            }
        }
        ?>
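
    A sketch of option b), assuming the memory growth comes from response buffers accumulated by the client: recycle the client every N iterations so PHP can release what the old instance holds (batch size and loop body are placeholders):

        <?php
        // Hedged sketch: recreate the SOAP client every $batch calls.
        $batch = 100;
        $client = null;

        for ($i = 0; $i <= 1000; $i++) {
            if ($i % $batch === 0) {
                unset($client); // drop the old client and its buffers
                $client = new Client($location, $wsdl, $username, $password);
            }
            // ... run one query against $client here ...
        }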

    Read the article

  • Save HashMap data into SQLite

    - by Matthew
    I'm trying to save data from JSON into SQLite. For now I keep the data from the JSON in a HashMap. I have already searched, and it's said to use ContentValues, but I still don't get how to use it. I tried looking at this question, "save data to SQLite from json object using Hashmap in Android", but it doesn't help a lot. Is there any option that I can use to save the data from the HashMap into SQLite? Here's my code.

    MainHellobali.java:

        // Hashmap for ListView
        ArrayList<HashMap<String, String>> all_itemList;

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.activity_main_helloballi);

            all_itemList = new ArrayList<HashMap<String, String>>();

            // Calling async task to get json
            new getAllItem().execute();
        }

        private class getAllItem extends AsyncTask<Void, Void, Void> {

            @Override
            protected Void doInBackground(Void... arg0) {
                // Creating service handler class instance
                ServiceHandler sh = new ServiceHandler();

                // Making a request to url and getting response
                String jsonStr = sh.makeServiceCall(url, ServiceHandler.GET);
                Log.d("Response: ", "> " + jsonStr);

                if (jsonStr != null) {
                    try {
                        all_item = new JSONArray(jsonStr);

                        // looping through All Contacts
                        for (int i = 0; i < all_item.length(); i++) {
                            JSONObject c = all_item.getJSONObject(i);

                            String item_id = c.getString(TAG_ITEM_ID);
                            String category_name = c.getString(TAG_CATEGORY_NAME);
                            String item_name = c.getString(TAG_ITEM_NAME);

                            // tmp hashmap for single contact
                            HashMap<String, String> allItem = new HashMap<String, String>();

                            // adding each child node to HashMap key => value
                            allItem.put(TAG_ITEM_ID, item_id);
                            allItem.put(TAG_CATEGORY_NAME, category_name);
                            allItem.put(TAG_ITEM_NAME, item_name);

                            // adding contact to contact list
                            all_itemList.add(allItem);
                        }
                    } catch (JSONException e) {
                        e.printStackTrace();
                    }
                } else {
                    Log.e("ServiceHandler", "Couldn't get any data from the url");
                }

                return null;
            }
        }

    I have DatabaseHandler.java and AllItem.java too; I can put them here if necessary. Thanks in advance.

    ** Added edited code **

        // looping through All Contacts
        for (int i = 0; i < all_item.length(); i++) {
            JSONObject c = all_item.getJSONObject(i);

            String item_id = c.getString(TAG_ITEM_ID);
            String category_name = c.getString(TAG_CATEGORY_NAME);
            String item_name = c.getString(TAG_ITEM_NAME);

            DatabaseHandler databaseHandler = new DatabaseHandler(this);
            // error here: "The constructor DatabaseHandler(MainHellobali.getAllItem) is undefined"
        }
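
    The constructor error happens because, inside the inner AsyncTask, `this` refers to the task rather than the Activity, so the outer instance has to be named explicitly. A hedged sketch of the insert step, assuming DatabaseHandler extends SQLiteOpenHelper and a table named all_items with matching columns exists:

        // Sketch: qualify "this" with the Activity, then insert each row
        // via ContentValues (table/column names are assumptions).
        DatabaseHandler databaseHandler = new DatabaseHandler(MainHellobali.this);
        SQLiteDatabase db = databaseHandler.getWritableDatabase();

        ContentValues values = new ContentValues();
        values.put("item_id", item_id);
        values.put("category_name", category_name);
        values.put("item_name", item_name);

        db.insert("all_items", null, values);
        db.close();

    Creating the handler once before the loop (rather than per iteration) would also avoid opening the database repeatedly.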

    Read the article

  • C++0x: How can I access variadic tuple members by index at runtime?

    - by nonoitall
    I have written the following basic Tuple template:

        template <typename... T>
        class Tuple;

        template <uintptr_t N, typename... T>
        struct TupleIndexer;

        template <typename Head, typename... Tail>
        class Tuple<Head, Tail...> : public Tuple<Tail...> {
        private:
            Head element;

        public:
            template <uintptr_t N>
            typename TupleIndexer<N, Head, Tail...>::Type& Get() {
                return TupleIndexer<N, Head, Tail...>::Get(*this);
            }

            uintptr_t GetCount() const {
                return sizeof...(Tail) + 1;
            }

        private:
            friend struct TupleIndexer<0, Head, Tail...>;
        };

        template <>
        class Tuple<> {
        public:
            uintptr_t GetCount() const {
                return 0;
            }
        };

        template <typename Head, typename... Tail>
        struct TupleIndexer<0, Head, Tail...> {
            typedef Head& Type;

            static Type Get(Tuple<Head, Tail...>& tuple) {
                return tuple.element;
            }
        };

        template <uintptr_t N, typename Head, typename... Tail>
        struct TupleIndexer<N, Head, Tail...> {
            typedef typename TupleIndexer<N - 1, Tail...>::Type Type;

            static Type Get(Tuple<Head, Tail...>& tuple) {
                return TupleIndexer<N - 1, Tail...>::Get(*(Tuple<Tail...>*) &tuple);
            }
        };

    It works just fine, and I can access elements in array-like fashion by using tuple.Get<Index>() - but I can only do that if I know the index at compile time. However, I need to access elements in the tuple by index at runtime, and I won't know at compile time which index needs to be accessed. Example:

        int chosenIndex = getUserInput();
        cout << "The option you chose was: " << tuple.Get(chosenIndex) << endl;

    What's the best way to do this?
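
    Since the element types differ, a runtime Get cannot have a single return type; the usual workaround is a visitor. A hedged sketch (assuming a functor whose operator() accepts every element type, and a compiler that supports default template arguments on function templates) dispatches the runtime index onto the compile-time Get by recursion:

        #include <cstdint>
        #include <iostream>
        #include <stdexcept>
        #include <type_traits>

        // Base case: index ran past the last element.
        template <uintptr_t I = 0, typename Func, typename... T>
        typename std::enable_if<(I == sizeof...(T))>::type
        VisitAt(Tuple<T...>&, uintptr_t, Func) {
            throw std::out_of_range("tuple index out of range");
        }

        // Recursive case: either this is the wanted index, or try I + 1.
        template <uintptr_t I = 0, typename Func, typename... T>
        typename std::enable_if<(I < sizeof...(T))>::type
        VisitAt(Tuple<T...>& tuple, uintptr_t index, Func func) {
            if (index == I)
                func(tuple.template Get<I>());
            else
                VisitAt<I + 1>(tuple, index, func);
        }

        // Usage sketch: a functor generic over every element type.
        struct Print {
            template <typename E>
            void operator()(E& element) const {
                std::cout << "The option you chose was: " << element << std::endl;
            }
        };

        // VisitAt(tuple, chosenIndex, Print());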

    Read the article

  • Using one data source across multiple views in Kendo UI SPA

    - by user3731783
    I am trying to build a Kendo UI SPA. I have two views. View 1 (appListView) shows application details in a grid, and view 2 (activityView) will have a dropdown of application names and a grid that shows the activity for the selected application.

    As I am loading all the application details when view 1 loads, I would like to re-use those details to populate the dropdown on view 2. Please see my code below. Everything works fine, but when I go to view 2 it calls the service again to get the application details. I would like it to use the existing data if it is already loaded; if the user comes to view 2 directly, then it should fetch the application data as well. I am not sure what I am missing in the code.

    View markup:

        <script id="appListView" type="text/x-kendo-template">
            <h3 data-bind="html: displayName"></h3>
            <div data-role="grid"
                 data-editable="{'mode':'popup'}"
                 data-bind="source: items"
                 data-columns="[
                     {'field': 'Name'},
                     {'field': 'ContactEmail','title':'Contact Email'}
                 ]">
            </div>
        </script>

        <script id="activityView" type="text/x-kendo-template">
            <div>
                Activity for Application&nbsp;&nbsp;
                <input name="AppName" data-role="dropdownlist"
                       data-source="appsModel.items"
                       data-text-field="Name"
                       data-value-field="Id"
                       data-option-label="Choose an application name"
                       style="width:250px;" />
            </div>
            <div id="Activities" data-role="grid"
                 data-bind="source: items"
                 data-auto-bind="false"
                 data-columns="[
                     {'field': 'Domain','title':'Domain'},
                     {'field': 'ActivityType','title':'Activity Type'}
                 ]">
            </div>
        </script>

    JS with DataSources and view models:

        //data sources
        var applications = new kendo.data.DataSource({
            schema: { model: { id: "Id" } },
            serverFiltering: true,
            transport: {
                read: { url: '/api/App', dataType: 'json', type: 'GET' }
            }
        });

        var activities = new kendo.data.DataSource({
            schema: { model: { id: "Id" } },
            transport: {
                read: { url: '/api/Activity', dataType: 'json', type: 'GET' },
                parameterMap: function (data, type) {
                    if (type == "read") {
                        return 'appId=' + $("#AppName").val();
                    }
                }
            }
        });

        //models
        var appsModel = kendo.observable({
            items: applications,
            displayName: 'My Applications'
        });

        var activityModel = kendo.observable({
            items: activities,
            onAppChange: function (t) {
                $("#Activities").data("kendoGrid").dataSource.read();
            },
            displayName: 'Application Activities'
        });

        //views
        var layout = new kendo.Layout("layout-template");
        var appListView = new kendo.View("appListView", { model: appsModel });
        var activityView = new kendo.View("activityView", { model: activityModel });

    Thank you for taking the time to read this long question.
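
    One way to avoid the second service call (a sketch; the event name is assumed from the Kendo SPA view lifecycle) is to keep both views bound to the single shared `applications` DataSource and only read it when it is still empty:

        // Hedged sketch: fetch once; read() only if no data is loaded yet.
        activityView.bind("show", function () {
            if (applications.data().length === 0) {
                applications.fetch(); // hits /api/App only on a direct visit
            }
        });

    Because the dropdown and the grid bind to the same DataSource instance, whichever view loads first populates it for both, and no duplicate request is issued.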

    Read the article

  • How to change the extension of a processed xml file (using eXist & cocoon)

    - by Carsten C.
    Hi all, I'm really new to this whole web stuff, so please be nice if I missed something important to post.

    Short version: is there a possibility to change the name of a processed file (eXist-DB) after serialization?

    Here is my case. I make the following request to my eXist-db:

        http://localhost:8080/exist/cocoon/db/caos/test.xml

    and after serialization I want the following (the XSLT is working fine):

        http://localhost:8080/exist/cocoon/db/caos/test.html

    I'm using the following sitemap.xmap with Cocoon (hoping this is what is responsible for it):

        <map:match pattern="db/caos/**">
            <!-- if we have an xpath query -->
            <map:match pattern="xpath" type="request-parameter">
                <map:generate src="xmldb:exist:///db/caos/{../1}/#{1}"/>
                <map:act type="request">
                    <map:parameter name="parameters" value="true"/>
                    <map:parameter name="default.howmany" value="1000"/>
                    <map:parameter name="default.start" value="1"/>
                    <map:transform type="filter">
                        <map:parameter name="element-name" value="result"/>
                        <map:parameter name="count" value="{howmany}"/>
                        <map:parameter name="blocknr" value="{start}"/>
                    </map:transform>
                    <map:transform src=".snip./webapp/stylesheets/db2html.xsl">
                        <map:parameter name="block" value="{start}"/>
                        <map:parameter name="collection" value="{../../1}"/>
                    </map:transform>
                </map:act>
                <map:serialize type="html" encoding="UTF-8"/>
            </map:match>

            <!-- if the whole file will be displayed -->
            <map:generate src="xmldb:exist:/db/caos/{1}"/>
            <map:transform src="..snip../stylesheets/caos2soac.xsl">
                <map:parameter name="collection" value="{1}"/>
            </map:transform>
            <map:transform type="encodeURL"/>
            <map:serialize type="html" encoding="UTF-8"/>
        </map:match>

    So my question is: how do I change the extension of test.xml to test.html after processing the XML file?

    Background: I'm generating some information out of some XML DBs; this info is displayed in HTML (which is working), but I want to change some entries later, after I have generated the HTML page. To make this comfortable, I want to use jQuery & Jeditable, but that code does not work on the XML files. Saving the generated HTML is not an option. TIA for any suggestions [and|or] help, CC

    Edit: After reading all over: could it be that the extension is irrelevant and that this is only a problem of port 8080? I'm confused...
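
    The sitemap matches on the request URL, so one hedged way to get the .html address (a sketch; the wildcard pattern and paths are assumptions based on the matcher above, and it would need to sit before the generic db/caos/** match) is to map *.html requests back onto the stored .xml resources:

        <!-- Sketch: serve stored .xml under a .html request URL. -->
        <map:match pattern="db/caos/*.html">
            <map:generate src="xmldb:exist:/db/caos/{1}.xml"/>
            <map:transform src="..snip../stylesheets/caos2soac.xsl">
                <map:parameter name="collection" value="{1}.xml"/>
            </map:transform>
            <map:serialize type="html" encoding="UTF-8"/>
        </map:match>

    With that in place the stored file never changes name; only the public URL carries the .html extension.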

    Read the article

  • Keeping track of business rules within IT department?

    - by evaldas-alexander
    I am looking for the best way to keep track of business rules for both developers and everybody else (support staff / management) in a startup environment.

    The challenge is that our business model requires quite a lot of different business rules, which are created pretty much on the fly and evolve organically after that. After running this project for 3+ years, we have so many such rules that often the only way to be sure what the application is supposed to do in a certain situation is to find the module responsible for that process and analyze its code and comments. That is all fine as long as you have the one single developer who created the entire application from scratch, but every new developer needs to go over pretty much the entire codebase in order to understand how the application works. An even bigger problem is that non-technical employees don't even have that option, and are therefore forced to ask me pretty much every day how a certain case would be handled by the application.

    A quick example: we only start charging for our customer campaigns once they have been active for at least 72 hours, but at the same time we stop creating invoices for campaigns that belong to insolvent accounts, and we close such accounts within a month of the first failed charge. That does not apply to accounts that are set to "non-chargeable", which most commonly belong to us, since we are using the service ourselves. The invoices are created on the 1st of each month and include charges from the previous month + any current balance that the account might have. However, some customers are charged only 4 days after their invoice has been generated, due to issues with their billing department. In addition to that, invoices are also created when a customer deactivates his campaign, but that can only be done once the campaign is no longer under the mandatory 6-month contract, unless an account manager approves early deactivation.

    I know, that's quite a lot of rules to take into account when answering the question "when do we bill our customers", but actually I could still append an asterisk at the end of each sentence to disclose some rare exceptions. Of course, it would be easiest just to keep the business rules to a minimum, but we need to adapt to a changing marketplace - i.e. less than a year ago we had no contracts whatsoever.

    One idea I have had so far is a simplistic wiki with categories corresponding to areas such as "Account activation", "Invoicing", "Collection procedures" and so on. Another idea would be a giant interactive flowchart showing the entire customer "life cycle", from prospecting to account deactivation. What are your experiences / suggestions?

    Read the article

  • Problem databinding a dictionary in a ListView combo-box column

    - by Ashish Ashu
    I have a ListView whose ItemsSource is set to my custom collection, let's say MyCollection. The code below is not the full code; it's just a snippet to explain the problem.

        class Item : INotifyPropertyChanged
        {
            Options _options;
            public Options OptionProp
            {
                get { return _options; }
                set { _options = value; OnPropertyChanged("OptionProp"); }
            }

            string _Name;
            public string NameProp
            {
                get { return _Name; }
                set { _Name = value; OnPropertyChanged("NameProp"); }
            }
        }

        class Options : Dictionary<string, string>
        {
            public Options()
            {
                this.Clear();
                this.Add("One", "1");
                this.Add("Two", "2");
                this.Add("Three", "3");
            }
        }

    MyCollection lives in my view model class:

        class viewModel
        {
            ObservableCollection<Item> MyCollection;
            KeyValuePair<string, string> SelectedOption;
        }

    The ListView ItemsSource is set to MyCollection:

        <ListView ItemSource = MyCollection>

    The ListView contains two columns, for which I have defined data templates. The first column is a combo-box whose ItemsSource is set to Options (defined above); the second column is a simple TextBlock that displays the name.

    Problem 1: In the data template for the first column I have a combo-box; I have set its ItemsSource to MyCollection and SelectedItem to SelectedOption. The user can perform the following operations in the ListView:

    - Add (add a row to the ListView)
    - Move Up (move a row up in the ListView)
    - Move Down (move a row down in the ListView)

    Now, when I add a row to the ListView, the combo-box selected index always comes out as -1 (first column), even though the combo box contains the options One, Two and Three. Also, I have initialized SelectedOption to contain the first item, i.e. One.

    Problem 2: Suppose I have added a single row to the ListView and selected option "One" in the combo box manually. When I perform Move Up or Move Down operations, the selected index of the combo box is again set to -1. In the Move Up and Move Down operations, I am calling the MoveUp and MoveDown methods of the observable collection.

    Problem 3: How do I serialize the entire collection to XML, given that I can't serialize Dictionary and KeyValuePair? I have to restore the state of the ListView.
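
    One pattern that tends to avoid the lost selection (a sketch; SelectedOptionProp is a hypothetical property that would need to be added to Item so the selection travels with the row through container recycling and moves) is to bind SelectedItem per row instead of to a single view-model field:

        <DataTemplate>
            <!-- Sketch: keep the selection on the row's own model. -->
            <ComboBox ItemsSource="{Binding OptionProp}"
                      DisplayMemberPath="Key"
                      SelectedItem="{Binding SelectedOptionProp, Mode=TwoWay}"/>
        </DataTemplate>

    Raising OnPropertyChanged("SelectedOptionProp") in that property's setter lets each combo box rehydrate its selection after MoveUp/MoveDown, and a plain string-pair property is also easier to serialize to XML than a KeyValuePair.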

    Read the article

  • Handling exceptions raised in observers

    - by sparky
    I have a Rails (2.3.5) application where there are many groups, and each Group has_many People. I have a Group edit form where users can create new people. When a new person is created, they are sent an email (the email address is user-entered on the form). This is accomplished with an observer on the Person model.

    The problem comes when ActionMailer throws an exception - for example, if the domain does not exist. Clearly that cannot be weeded out with a validation. There would seem to be 2 ways to deal with this:

    1. A begin...rescue...end block in the observer around the mailer. The problem with this is that the only way to pass any feedback to the user would be to set a global variable: as the observer is outside the MVC flow, I can't even set a flash[:error] there.

    2. A rescue_from in the Groups controller. This works fine, but has 2 problems. Firstly, there is no way to know which person threw the exception (all I get is the 503 exception; there is no way to know which person caused the problem). This would be useful information to pass back to the user - at the moment, there is no way for me to let them know which email address is the problem, so I have to chuck the lot back at them and issue an unhelpful message saying that one of them is not correct. Secondly (and to a certain extent this makes the first point moot), it seems to be necessary to call a render in the rescue_from, or it dies with a rather bizarre "can't convert Array into String" error from WEBrick, with no stack trace and nothing in the log. Thus, I have to throw it back to the user when I come across the first error, and have to stop processing the rest of the emails.

    Neither of the solutions is optimal. It would seem that the only way to get Rails to do what I want is option 1, and loathsome global variables. This would also rely on Rails being single-threaded. Can anyone suggest a better solution to this problem?
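
    A hedged middle ground (a sketch; the mailer name and the exact exception classes are assumptions) is to rescue inside the observer but record the failure on the person record instead of in a global, so the controller can report per address:

        # Sketch: trap delivery failures per person (Rails 2.3 style).
        class PersonObserver < ActiveRecord::Observer
          def after_create(person)
            PersonMailer.deliver_welcome(person)
          rescue Net::SMTPFatalError, Net::SMTPSyntaxError => e
            person.errors.add(:email, "could not be delivered: #{e.message}")
          end
        end

    The controller can then inspect each person's errors after saving and build a flash message naming the exact failing address, instead of rescuing a blanket 503 and aborting the rest of the batch.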

    Read the article

  • mounting ext4 fs with block size of 65536

    - by seaquest
    I am doing some benchmarking of EXT4 performance on Compact Flash media. I have created an ext4 fs with a block size of 65536; however, I cannot mount it on ubuntu-10.10-netbook-i386 (which already mounts ext4 filesystems with 4096-byte block sizes). According to my reading on ext4, it should allow such big block sizes. I want to hear your comments.

        root@ubuntu:~# mkfs.ext4 -b 65536 /dev/sda3
        Warning: blocksize 65536 not usable on most systems.
        mke2fs 1.41.12 (17-May-2010)
        mkfs.ext4: 65536-byte blocks too big for system (max 4096)
        Proceed anyway? (y,n) y
        Warning: 65536-byte blocks too big for system (max 4096), forced to continue
        Filesystem label=
        OS type: Linux
        Block size=65536 (log=6)
        Fragment size=65536 (log=6)
        Stride=0 blocks, Stripe width=0 blocks
        19968 inodes, 19830 blocks
        991 blocks (5.00%) reserved for the super user
        First data block=0
        1 block group
        65528 blocks per group, 65528 fragments per group
        19968 inodes per group

        Writing inode tables: done
        Creating journal (1024 blocks): done
        Writing superblocks and filesystem accounting information: done

        This filesystem will be automatically checked every 37 mounts or
        180 days, whichever comes first.  Use tune2fs -c or -i to override.

        root@ubuntu:~# tune2fs -l /dev/sda3
        tune2fs 1.41.12 (17-May-2010)
        Filesystem volume name:   <none>
        Last mounted on:          <not available>
        Filesystem UUID:          4cf3f507-e7b4-463c-be11-5b408097099b
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
        Filesystem flags:         signed_directory_hash
        Default mount options:    (none)
        Filesystem state:         clean
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              19968
        Block count:              19830
        Reserved block count:     991
        Free blocks:              18720
        Free inodes:              19957
        First block:              0
        Block size:               65536
        Fragment size:            65536
        Blocks per group:         65528
        Fragments per group:      65528
        Inodes per group:         19968
        Inode blocks per group:   78
        Flex block group size:    16
        Filesystem created:       Sat Feb  5 14:39:55 2011
        Last mount time:          n/a
        Last write time:          Sat Feb  5 14:40:02 2011
        Mount count:              0
        Maximum mount count:      37
        Last checked:             Sat Feb  5 14:39:55 2011
        Check interval:           15552000 (6 months)
        Next check after:         Thu Aug  4 14:39:55 2011
        Lifetime writes:          70 MB
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               256
        Required extra isize:     28
        Desired extra isize:      28
        Journal inode:            8
        Default directory hash:   half_md4
        Directory Hash Seed:      afb5b570-9d47-4786-bad2-4aacb3b73516
        Journal backup:           inode blocks

        root@ubuntu:~# mount -t ext4 /dev/sda3 /mnt/
        mount: wrong fs type, bad option, bad superblock on /dev/sda3,
               missing codepage or helper program, or other error
               In some cases useful info is found in syslog - try
               dmesg | tail or so
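
    The "max 4096" in the mkfs warning comes from the kernel: Linux cannot mount a filesystem whose block size exceeds the CPU page size, which is 4096 bytes on i386 (the filesystem format allows bigger blocks, but such an image can only be created, not mounted, on this machine). A quick check:

        # The mountable maximum equals the page size:
        root@ubuntu:~# getconf PAGESIZE
        4096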

    Read the article

  • Internet Explorer buggy when accessing a custom weblogic provider

    - by James
    I've created a custom WebLogic security authentication provider on version 10.3 that includes a custom login module to validate users. As part of the provider, I've implemented the ServletAuthenticationFilter and added one filter. The filter acts as a common log-on page for all the applications within the domain.

    When we access any secured URL by entering it in the address bar, this works fine in IE and Firefox. But when we bookmark the link in IE, an odd thing happens. If I click the bookmark, you see our log-on page, but after you've successfully logged into the system, the basic auth page displays, even though the user is already authenticated. This never happens in Firefox, only IE. It's also intermittent: 1 time out of 5, IE will correctly redirect and not show the basic auth window. Firefox and Opera correctly redirect every time. We've captured the response headers and compared the success and failure cases; they are identical.

        final boolean isAuthenticated = authenticateUser(userName, password, req);

        // Send user on to the original URL
        if (isAuthenticated) {
            res.sendRedirect(targetURL);
            return;
        }

    As you can see, once the user is authenticated I do a redirect to the original URL. Is there a step I'm missing? The authenticateUser() method is taken verbatim from an example in Oracle's documentation:

        private boolean authenticateUser(final String userName, final String password, HttpServletRequest request) {
            boolean results;
            try {
                ServletAuthentication.login(new CallbackHandler() {
                    @Override
                    public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException {
                        for (Callback callback : callbacks) {
                            if (callback instanceof NameCallback) {
                                NameCallback nameCallback = (NameCallback) callback;
                                nameCallback.setName(userName);
                            }
                            if (callback instanceof PasswordCallback) {
                                PasswordCallback passwordCallback = (PasswordCallback) callback;
                                passwordCallback.setPassword(password.toCharArray());
                            }
                        }
                    }
                }, request);
                results = true;
            } catch (LoginException e) {
                results = false;
            }
            return results;
        }

    I am asking the question here because I don't know if the issue is with the WebLogic config or the code. If this question is more suited to ServerFault, please let me know and I will post there. It is odd that it works every time in Firefox and Opera but not in Internet Explorer. I wish that not using Internet Explorer were an option, but it is currently the company standard. Any help or direction would be appreciated. I have tested against IE 6 & 8 and deployed the custom provider on 3 different environments, and I can still reproduce the bug.

    Read the article

  • ajaxSubmit options success & error functions aren't fired

    - by Thommy Tomka
    jQuery 1.7.2
    jQuery Validate 1.1.0
    jQuery Form 3.18
    Wordpress 3.4.2

    I am trying to code a contact/mail form in the above environment with the above jQuery libs. Now I am having a problem with the jQuery Form JS: I have taken the original code from the developer's page for ajaxSubmit, only altered the target option to an ID which exists in my HTML source, and replaced $ with jQuery in function showRequest.

    The problem is that the function named after success: does not fire. I tried the same with error: and again nothing fired. Only complete: did, and the function I placed there alerted the responseText from the receiving script. Does anyone have an idea what's going wrong? Thanks in advance! Thomas

        jQuery(document).ready(function() {
            var options = {
                target: '#mail-status',     // target element(s) to be updated with server response
                beforeSubmit: showRequest,  // pre-submit callback
                success: showResponse,      // post-submit callback

                // other available options:
                //url: url          // override for form's 'action' attribute
                //type: type        // 'get' or 'post', override for form's 'method' attribute
                //dataType: null    // 'xml', 'script', or 'json' (expected server response type)
                //clearForm: true   // clear all form fields after successful submit
                //resetForm: true   // reset the form after successful submit

                // $.ajax options can be used here too, for example:
                //timeout: 3000
            };

            jQuery("#mailform").validate({
                submitHandler: function(form) {
                    jQuery(form).ajaxSubmit(options);
                },
                errorPlacement: function(error, element) {
                },
                rules: {
                    author: { minlength: 2, required: true },
                    email: { required: true, email: true },
                    comment: { minlength: 2, required: true }
                },
                highlight: function(element) {
                    jQuery(element).addClass("e");
                    jQuery(element.form).find("label[for=" + element.id + "]").addClass("e");
                },
                unhighlight: function(element) {
                    jQuery(element).removeClass("e");
                    jQuery(element.form).find("label[for=" + element.id + "]").removeClass("e");
                }
            });
        });

        // pre-submit callback
        function showRequest(formData, jqForm, options) {
            // formData is an array; here we use $.param to convert it to a string to display it,
            // but the form plugin does this for you automatically when it submits the data
            var queryString = jQuery.param(formData);

            // jqForm is a jQuery object encapsulating the form element. To access the
            // DOM element for the form do this:
            // var formElement = jqForm[0];

            alert('About to submit: \n\n' + queryString);

            // here we could return false to prevent the form from being submitted;
            // returning anything other than false will allow the form submit to continue
            return true;
        }

        // post-submit callback
        function showResponse(responseText, statusText, xhr, $form) {
            // for normal html responses, the first argument to the success callback
            // is the XMLHttpRequest object's responseText property

            // if the ajaxSubmit method was passed an Options Object with the dataType
            // property set to 'xml' then the first argument to the success callback
            // is the XMLHttpRequest object's responseXML property

            // if the ajaxSubmit method was passed an Options Object with the dataType
            // property set to 'json' then the first argument to the success callback
            // is the json data object returned by the server

            alert('status: ' + statusText + '\n\nresponseText: \n' + responseText +
                  '\n\nThe output div should have already been updated with the responseText.');
        }
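
    A hedged debugging step: success only runs for a 2xx response whose body parses as the expected dataType, and if the form contains a file input, ajaxSubmit silently switches to an iframe transport where HTTP errors are invisible, which could explain complete firing while success and error stay silent. Logging the raw outcome narrows it down:

        // Sketch: inspect which path the response actually takes.
        var options = {
            target: '#mail-status',
            beforeSubmit: showRequest,
            success: showResponse,
            complete: function (xhr, status) {
                console.log('HTTP ' + xhr.status + ', result: ' + status);
            }
        };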

    Read the article

  • Web image loaded by thread in android

    - by Bostjan
    I have an extended BaseAdapter in a ListActivity:

        private static class RequestAdapter extends BaseAdapter {

    and some handlers and runnables defined in it:

        // Need handler for callbacks to the UI thread
        final Handler mHandler = new Handler();

        // Create runnable for posting
        final Runnable mUpdateResults = new Runnable() {
            public void run() {
                loadAvatar();
            }
        };

        protected static void loadAvatar() {
            // TODO Auto-generated method stub
            //ava.setImageBitmap(getImageBitmap("URL"+pic));
            buddyIcon.setImageBitmap(avatar);
        }

    In the getView function of the adapter, I'm getting the view like this:

        if (convertView == null) {
            convertView = mInflater.inflate(R.layout.messageitem, null);

            // Creates a ViewHolder and store references to the two children views
            // we want to bind data to.
            holder = new ViewHolder();
            holder.username = (TextView) convertView.findViewById(R.id.username);
            holder.date = (TextView) convertView.findViewById(R.id.dateValue);
            holder.time = (TextView) convertView.findViewById(R.id.timeValue);
            holder.notType = (TextView) convertView.findViewById(R.id.notType);
            holder.newMsg = (ImageView) convertView.findViewById(R.id.newMsg);
            holder.realUsername = (TextView) convertView.findViewById(R.id.realUsername);
            holder.replied = (ImageView) convertView.findViewById(R.id.replied);
            holder.msgID = (TextView) convertView.findViewById(R.id.msgID_fr);
            holder.avatar = (ImageView) convertView.findViewById(R.id.buddyIcon);
            holder.msgPreview = (TextView) convertView.findViewById(R.id.msgPreview);

            convertView.setTag(holder);
        } else {
            // Get the ViewHolder back to get fast access to the TextView
            // and the ImageView.
            holder = (ViewHolder) convertView.getTag();
        }

    and the image is loaded this way:

        Thread sepThread = new Thread() {
            public void run() {
                String ava;
                ava = request[8].replace(".", "_micro.");
                Log.e("ava thread", ava + ", username: " + request[0]);
                avatar = getImageBitmap(URL + ava);
                buddyIcon = holder.avatar;
                mHandler.post(mUpdateResults);
                //holder.avatar.setImageBitmap(getImageBitmap(URL+ava));
            }
        };
        sepThread.start();

    Now, the problem I'm having is that if there are multiple items that need to display the same picture, not all of those pictures get displayed. When you scroll up and down the list, you may end up filling all of them. When I tried the commented-out line (holder.avatar.setImageBitmap...), the app sometimes force closes with "only the thread that created the view can request...", but only sometimes. Any idea how I can fix this, with either option?
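
    Both symptoms point at the shared avatar/buddyIcon fields racing across recycled rows, plus the view being touched off the UI thread. A hedged sketch of the usual fix (field names assumed): tag each row's ImageView with its URL so a recycled view never shows a stale bitmap, and hop back to the UI thread with View.post:

        // Sketch: per-row state instead of shared static fields.
        final ImageView icon = holder.avatar;
        final String url = URL + ava;
        icon.setTag(url);

        new Thread(new Runnable() {
            public void run() {
                final Bitmap bmp = getImageBitmap(url); // network fetch off the UI thread
                icon.post(new Runnable() {               // runs on the UI thread
                    public void run() {
                        if (url.equals(icon.getTag())) { // row not recycled meanwhile?
                            icon.setImageBitmap(bmp);
                        }
                    }
                });
            }
        }).start();

    An in-memory cache keyed by URL would additionally stop the same picture being fetched once per row.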

    Read the article

  • C# Property Access vs Interface Implementation

    - by ehdv
    I'm writing a class to represent a Pivot Collection, the root object recognized by Pivot. A Collection has several attributes, a list of facet categories (each represented by a FacetCategory object) and a list of items (each represented by a PivotItem object). Therefore, an extremely simplified Collection reads:

        public class Collection
        {
            private List<FacetCategory> categories;
            private List<PivotItem> items;

            // other attributes
        }

    What I'm unsure of is how to properly grant access to those two lists. Because declaration order of both facet categories and items is visible to the user, I can't use sets, but the class also shouldn't allow duplicate categories or items. Furthermore, I'd like to make the Collection object as easy to use as possible. So my choices are:

    1. Have Collection implement IList<PivotItem> and have accessor methods for FacetCategory: In this case, one would add an item to Collection foo by writing foo.Add(bar). This works, but since a Collection is equally both kinds of list, making it only pass as a list for one type (category or item) seems like a subpar solution.

    2. Create nested wrapper classes for List (CategoryList and ItemList). This has the advantage of making a consistent interface, but the downside is that these properties would no longer be able to serve as lists (because I need to override the non-virtual Add method I have to implement IList rather than subclass List. Implicit casting wouldn't work because that would return the Add method to its normal behavior. Also, for reasons I can't figure out, IList is missing an AddRange method...):

        public class Collection
        {
            private class CategoryList : IList<FacetCategory>
            {
                // ...
            }

            private readonly CategoryList categories = new CategoryList();
            private readonly ItemList items = new ItemList();

            public CategoryList FacetCategories
            {
                get { return categories; }
                set
                {
                    categories.Clear();
                    categories.AddRange(value);
                }
            }

            public ItemList Items
            {
                get { return items; }
                set
                {
                    items.Clear();
                    items.AddRange(value);
                }
            }
        }

    3. Finally, the third option is to combine options one and two, so that Collection implements IList<PivotItem> and has a property FacetCategories.

    Question: Which of these three is most appropriate, and why?
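
    For contrast, a sketch of a fourth shape (member names assumed): skip the IList question entirely by exposing read-only views and explicit Add methods, which keeps ordering, blocks duplicates, and leaves both lists symmetric:

        using System.Collections.Generic;
        using System.Collections.ObjectModel;

        public class Collection
        {
            private readonly List<FacetCategory> categories = new List<FacetCategory>();
            private readonly List<PivotItem> items = new List<PivotItem>();

            // Ordered, read-only views for callers.
            public ReadOnlyCollection<FacetCategory> FacetCategories
            {
                get { return categories.AsReadOnly(); }
            }

            public ReadOnlyCollection<PivotItem> Items
            {
                get { return items.AsReadOnly(); }
            }

            // Explicit mutators enforce the no-duplicates rule.
            public void AddFacetCategory(FacetCategory category)
            {
                if (!categories.Contains(category))
                    categories.Add(category);
            }

            public void AddItem(PivotItem item)
            {
                if (!items.Contains(item))
                    items.Add(item);
            }
        }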

    Read the article

  • Moving from Windows to Ubuntu.

    - by djzmo
    Hello there, I used to program in Windows with Microsoft Visual C++, and I need to make some of my programs (written in portable C++) cross-platform, or at least to be able to release a working version of my program for both Linux and Windows.

    I am a total newcomer to Linux application development (and rarely use the OS itself). So, today, I installed Ubuntu 10.04 LTS (through Wubi) and equipped Code::Blocks with the g++ compiler as my main weapon. Then I compiled my very first Hello World Linux program, and I was confused by the output program. I can run my program through the "Build and Run" menu option in Code::Blocks, but when I tried to launch the compiled application externally through a file browser (in /media/MyNTFSPartition/MyProject/bin/Release; yes, I saved it on my NTFS partition), the program didn't show up. Why? I ran out of ideas.

    I need to change my Windows and Microsoft Visual Studio mindset to a Linux and Code::Blocks mindset. So I came up with these questions:

    1. How can I execute my compiled Linux programs externally (outside the IDE)? In Windows, I simply run the generated executable (.exe) file.
    2. How can I distribute my Linux application? In Windows, I simply distribute the executable files with the corresponding DLL files (if any).
    3. What is the equivalent of LIBs (static libraries) and DLLs (dynamic libraries) in Linux, and how do I use them? In Windows/Visual Studio, I simply add the required libraries to the Additional Dependencies in the project settings, and my program automatically links with the required static libraries/DLLs.
    4. Is it possible to use the "binary form" of a C++ library (if provided), so that I wouldn't need to recompile the entire library source code? In Windows, yes; sometimes precompiled *.lib files are provided.
    5. If I want to create a wxWidgets application on Linux, which package should I pick for Ubuntu: wxGTK or wxX11? Can I run a wxGTK program under X11? In Windows, I use wxMSW, of course.
    6. If question no. 4 is answered as possible, do precompiled wxX11/wxGTK libraries exist out there? I haven't tried a deep Google search. In Windows, there is a project called "wxPack" (http://wxpack.sourceforge.net/) that saves a lot of my time.

    Sorry for asking so many questions, but I am really confused about these Linux development fundamentals. Any kind of help would be appreciated =) Thanks.
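
    On question 1, a hedged sketch of the terminal workflow (paths are assumptions). Two likely reasons for the file-browser problem: Linux executables need the execute permission bit, which NTFS mounts often do not preserve, and a console program launched by double-click has no terminal to display its output in:

        # Build and run from a terminal:
        g++ -o hello main.cpp
        chmod +x hello     # restore the bit if it was lost, e.g. on NTFS
        ./hello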

    Read the article

  • Deterministic key serialization

    - by Mike Boers
    I'm writing a mapping class which uses SQLite as the storage backend. I am currently allowing only basestring keys, but it would be nice if I could use a couple more types, hopefully up to anything that is hashable (ie. same requirements as the builtin dict). To that end I would like to derive a deterministic serialization scheme.

    Ideally, I would like to know if any implementation/protocol combination of pickle is deterministic for hashable objects (e.g. can only use cPickle with protocol 0). I noticed that pickle and cPickle do not match:

        >>> import pickle
        >>> import cPickle
        >>> def dumps(x):
        ...     print repr(pickle.dumps(x))
        ...     print repr(cPickle.dumps(x))
        ...
        >>> dumps(1)
        'I1\n.'
        'I1\n.'
        >>> dumps('hello')
        "S'hello'\np0\n."
        "S'hello'\np1\n."
        >>> dumps((1, 2, 'hello'))
        "(I1\nI2\nS'hello'\np0\ntp1\n."
        "(I1\nI2\nS'hello'\np1\ntp2\n."

    Another option is to use repr to dump and ast.literal_eval to load. This would only be valid for builtin hashable types. I have written a function to determine if a given key would survive this process (it is rather conservative on the types it allows):

        def is_reprable_key(key):
            return type(key) in (int, str, unicode) or (
                type(key) == tuple and all(is_reprable_key(x) for x in key))

    The question for this method is if repr itself is deterministic for the types that I have allowed here. I believe this would not survive the 2/3 version barrier due to the change in str/unicode literals. This also would not work for integers where 2**32 - 1 < x < 2**64, jumping between 32 and 64 bit platforms. Are there any other conditions (ie. do strings serialize differently under different conditions)?

    (If this all fails miserably then I can store the hash of the key along with the pickle of both the key and value, then iterate across rows that have a matching hash looking for one that unpickles to the expected key, but that really does complicate a few other things and I would rather not do it.)

    Any insights?
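
    If neither pickle's nor repr's output can be trusted to be stable across implementations and platforms, a hedged alternative (a Python 2 sketch with a hypothetical helper) is a small canonical encoder covering exactly the allowed types; the netstring-style length prefixes keep the encoding unambiguous, unlike naive concatenation:

        def canonical(key):
            # Sketch: tag + length + ':' + body, so equal keys always
            # encode identically and nesting cannot collide.
            if isinstance(key, (int, long)):
                body, tag = str(key), 'i'
            elif isinstance(key, unicode):
                body, tag = key.encode('utf-8'), 'u'
            elif isinstance(key, str):
                body, tag = key, 's'
            elif isinstance(key, tuple):
                body, tag = ''.join(canonical(x) for x in key), 't'
            else:
                raise TypeError('unsupported key type: %r' % type(key))
            return '%s%d:%s' % (tag, len(body), body)

    This sidesteps the 32/64-bit integer issue (the decimal text is the same either way), though it deliberately gives up on arbitrary hashable objects.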

    Read the article

  • Best way to run remote VBScript in ASP.net? WMI or PsExec?

    - by envinyater
    I am doing some research to find the best and most efficient method for this. I will need to execute remote scripts on a number of Windows servers/computers (while we are building them). I have a web application that is going to automate this task; I currently have my prototype working using PsExec to execute remote scripts. This requires PsExec to be installed on the system.

    A colleague suggested I should use WMI for this. I did some research on WMI and couldn't find what I'm looking for. I want to either upload the script to the server, execute it and read the results, or already have the script on the server, execute it and read the results. I would prefer the first option though!

    Which is more ideal, PsExec or WMI?

    For reference, this is my prototype PsExec code. This only executes a small script to get the Windows OS and service pack info:

        Protected Sub windowsScript(ByVal COMPUTERNAME As String)
            ' Create an array to store VBScript results
            Dim winVariables(2) As String

            nameLabel.Text = Name.Text

            ' Use PsExec to execute remote scripts
            Dim Proc As New System.Diagnostics.Process
            ' Run PsExec locally
            Proc.StartInfo = New ProcessStartInfo("C:\Windows\psexec.exe")
            ' Pass in following arguments to PsExec
            Proc.StartInfo.Arguments = COMPUTERNAME & " -s cmd /C c:\systemInfo.vbs"
            Proc.StartInfo.RedirectStandardInput = True
            Proc.StartInfo.RedirectStandardOutput = True
            Proc.StartInfo.UseShellExecute = False
            Proc.Start()

            ' Pause for script to run
            System.Threading.Thread.Sleep(1500)
            Proc.Close()
            System.Threading.Thread.Sleep(2500) ' Allows the system a chance to finish with the process.

            Dim filePath As String = COMPUTERNAME & "\TTO\somefile.txt"

            ' Download file created by script on remote system to local system
            My.Computer.Network.DownloadFile(filePath, "C:\somefile.txt")
            System.Threading.Thread.Sleep(1000) ' Pause so file gets downloaded

            ' Import data from text file into variables
            textRead("C:\somefile.txt", winVariables)

            WindowsOSLbl.Text = winVariables(0).ToString()
            SvcPckLbl.Text = winVariables(1).ToString()
            System.Threading.Thread.Sleep(1000)

            ' Delete the file on server - we don't need it anymore
            Dim Proc2 As New System.Diagnostics.Process
            Proc2.StartInfo = New ProcessStartInfo("C:\Windows\psexec.exe")
            Proc2.StartInfo.Arguments = COMPUTERNAME & " -s cmd /C del c:\somefile.txt"
            Proc2.StartInfo.RedirectStandardInput = True
            Proc2.StartInfo.RedirectStandardOutput = True
            Proc2.StartInfo.UseShellExecute = False
            Proc2.Start()
            System.Threading.Thread.Sleep(500)
            Proc2.Close()

            ' Delete file locally
            File.Delete("C:\somefile.txt")
        End Sub
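
    For comparison, a hedged sketch of the WMI route using System.Management (credentials and paths are placeholders). One caveat: Win32_Process.Create cannot capture stdout, so the remote script would still need to write its results to a file that is collected afterwards, much like the PsExec version:

        ' Requires a project reference to System.Management.
        Imports System.Management

        Dim opts As New ConnectionOptions()
        opts.Username = "DOMAIN\admin"   ' hypothetical credentials
        opts.Password = "secret"

        Dim scope As New ManagementScope("\\" & COMPUTERNAME & "\root\cimv2", opts)
        scope.Connect()

        ' Launch the script remotely via Win32_Process.Create.
        Dim procClass As New ManagementClass(scope, New ManagementPath("Win32_Process"), Nothing)
        Dim inParams As ManagementBaseObject = procClass.GetMethodParameters("Create")
        inParams("CommandLine") = "cscript.exe //B c:\systemInfo.vbs"
        Dim outParams As ManagementBaseObject = procClass.InvokeMethod("Create", inParams, Nothing)

        ' ReturnValue 0 means the remote process started successfully.
        Dim returnCode As UInteger = Convert.ToUInt32(outParams("ReturnValue"))

    The advantage over PsExec is that nothing extra needs to be installed, as long as WMI/DCOM access is open between the web server and the targets.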

    Read the article
