Search Results

Search found 13415 results on 537 pages for 'variable caching'.

Page 478/537

  • Computing, storing, and retrieving values to and from an N-Dimensional matrix

    - by Adam S
    This question is probably quite different from what you are used to reading here - I hope it can provide a fun challenge. Essentially I have an algorithm that uses 5 (or more) variables to compute a single value, called outcome. Now I have to implement this algorithm on an embedded device which has no memory limitations, but very harsh processing constraints. Because of this, I would like to run a calculation engine that computes outcome for, say, 20 different values of each variable and stores this information in a file. You may think of this as a 5 (or more)-dimensional matrix or array, each dimension being 20 entries long. In any modern language, filling this array is as simple as 5 (or more) nested for loops. The tricky part is that I need to dump these values into a file that can then be placed onto the embedded device so that the device can use it as a lookup table. The questions now are: What format(s) might be acceptable for storing the data? What programs (MATLAB, C#, etc.) might be best suited to compute the data? C# must be used to import the data on the device - is this possible given your answer to #1?
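
    Filling and dumping the table is mostly a matter of nesting the loops and picking a flat, easily parsed format. Below is a minimal sketch of that shape, written in Java purely for illustration (the post itself mentions MATLAB and C#); computeOutcome and the CSV layout are hypothetical placeholders for the real algorithm and format. With a fixed 20 steps per variable, a raw dump can also be addressed directly on the device via index = ((((a*20)+b)*20+c)*20+d)*20+e.

        import java.io.FileWriter;
        import java.io.IOException;
        import java.io.PrintWriter;

        public class LookupTableDump {
            static final int STEPS = 20; // 20 sample values per variable, as in the question

            // Hypothetical stand-in for the real algorithm described in the post.
            static double computeOutcome(int a, int b, int c, int d, int e) {
                return a + b * c - d * e; // placeholder maths only
            }

            public static void main(String[] args) throws IOException {
                try (PrintWriter out = new PrintWriter(new FileWriter("lookup.csv"))) {
                    for (int a = 0; a < STEPS; a++)
                        for (int b = 0; b < STEPS; b++)
                            for (int c = 0; c < STEPS; c++)
                                for (int d = 0; d < STEPS; d++)
                                    for (int e = 0; e < STEPS; e++)
                                        // one row per cell: the five indices plus the outcome
                                        out.printf("%d,%d,%d,%d,%d,%f%n",
                                                a, b, c, d, e, computeOutcome(a, b, c, d, e));
                }
            }
        }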

    Read the article

  • Saving form values to database after a user logs in

    - by redfalcon
    Hi. We have a form for submitting ratings for a certain restaurant. After the user has entered some values and wants to submit them, we check whether the user is logged in or not. If not, we display a login form, let the user enter his account data, and redirect him to the restaurant he wanted to submit a rating for. The problem is that after he has successfully logged in, the submitted values are not saved to the database (which works fine if the user is already logged in). So I wondered if it is possible to somehow save the data even though the user is not logged in. I thought of maybe saving the filled-in values in a variable and having them automatically re-entered after we redirect the user. But I guess this won't work, because we use before_filter :login_required, :only => [ :create ] So we couldn't even access the filled-in values, since we display the login form before the method has processed the values in the form, right? Any idea how we can make Rails save the values, or at least have them automatically re-entered into the form? Thanks!

    Read the article

  • Finding and marking the largest of three values in a two dimensional array

    - by DavidYell
    I am working on a display screen for our office, and I can't seem to think of a good way to find the largest numerical value in a set of data in a two-dimensional array. I've looked at using max() and also asort(), but they don't seem to cope with a two-dimensional array. I'm returning my data through our MySQL class, so the rows come back as a two-dimensional array. Array( [0] => Array( [am] => 12, [sales] => 981), [1] => Array( [am] => 43, [sales] => 1012), [2] => Array( [am] => 17, [sales] => 876) ) I need to output a class, while foreaching the data into my table, on the AM with the highest sales value - short of comparing them all in if statements. I have tried max() on the array, but it returns an array, as it looks within the first dimension; when pointing it at a specific dimension, it returns the key, not the value. I figured that I could asort() the array, pop the top value off, store it in a variable and then compare against that in my foreach() loop, but that seems to have trouble sorting across two dimensions. Lastly, I figured that I could foreach() the values, comparing each against the previous one until I found the largest. That approach, however, means storing every value (luckily only three) and then comparing against them all again. Surely there must be a simpler way to achieve this, short of converting it into a single-dimension array and doing an asort() on that?
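
    A single pass that simply remembers the index of the best row so far avoids both the sorting and the double comparison described above. Here is that idea sketched in Java for illustration (the post itself is PHP, where something along the lines of max(array_column($rows, 'sales')) on PHP 5.5+ yields the top value directly); the row layout mirrors the example data above.

        import java.util.List;
        import java.util.Map;

        public class TopSales {
            // Returns the index of the row with the largest "sales" value, or -1 if empty.
            static int indexOfTopSales(List<Map<String, Integer>> rows) {
                int best = -1;
                int bestSales = Integer.MIN_VALUE;
                for (int i = 0; i < rows.size(); i++) {
                    int sales = rows.get(i).get("sales");
                    if (sales > bestSales) {
                        bestSales = sales;
                        best = i;
                    }
                }
                return best;
            }

            public static void main(String[] args) {
                List<Map<String, Integer>> rows = List.of(
                        Map.of("am", 12, "sales", 981),
                        Map.of("am", 43, "sales", 1012),
                        Map.of("am", 17, "sales", 876));
                System.out.println(indexOfTopSales(rows)); // prints 1
            }
        }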

    Read the article

  • Trouble tunneling my local Wordpress install to the mysql database on appfog

    - by alanmoo
    I've set up a WordPress install on AppFog (using Rackspace), and cloned the install to my local machine for development. I know the install works (using MAMP) because I created a local MySQL database and changed wp-config.php to point to it. However, I want to develop without having to change wp-config.php every time I commit. After doing some research, it seems like the AppFog service Caldecott lets me tunnel into the MySQL database on the server using af tunnel. Unfortunately, I'm having trouble getting it working. Even if I change my MAMP MySQL port to something like 8889 and tunnel MySQL through port 3306, it looks like it's connected, but I still get "Error establishing a database connection" when loading my localhost WordPress. When I quit the MySQL monitor (using ctrl+x, ctrl+c), I get a message stating "Error: 'mysql' execution failed; is it in your $PATH?". Originally, no, it wasn't, but I've fixed the PATH variable on my local machine, so that when I go to Terminal and just type mysql, it loads up. So I guess my question has two parts: 1) Am I taking the right approach for WordPress development on my local machine? 2) If so, why is the tunnel not working?

    Read the article

  • How to support comparisons for QVariant objects containing a custom type?

    - by Tyler McHenry
    According to the Qt documentation, QVariant::operator== does not work as one might expect if the variant contains a custom type: bool QVariant::operator== ( const QVariant & v ) const Compares this QVariant with v and returns true if they are equal; otherwise returns false. In the case of custom types, their equalness operators are not called. Instead the values' addresses are compared. How are you supposed to get this to behave meaningfully for your custom types? In my case, I'm storing an enumerated value in a QVariant, e.g., in a header: enum MyEnum { Foo, Bar }; Q_DECLARE_METATYPE(MyEnum); Somewhere in a function: QVariant var1 = QVariant::fromValue<MyEnum>(Foo); QVariant var2 = QVariant::fromValue<MyEnum>(Foo); assert(var1 == var2); // Fails! What do I need to do differently in order for this assertion to be true? I understand why it's not working -- each variant is storing a separate copy of the enumerated value, so they have different addresses. I want to know how I can change my approach to storing these values in variants so that either this is not an issue, or so that they both reference the same underlying variable. I don't think it's possible for me to get around needing equality comparisons to work. The context is that I am using this enumeration as the UserData in items in a QComboBox, and I want to be able to use QComboBox::findData to locate the item index corresponding to a particular enumerated value.

    Read the article

  • Is this a safe way to release resources in Java?

    - by palto
    Usually, when code uses a resource that needs to be released, I see it done like this: InputStream in = null; try{ in = new FileInputStream("myfile.txt"); doSomethingWithStream(in); }finally{ if(in != null){ in.close(); } } What I don't like is that you have to initialize the variable to null, then set it to another value, and in the finally block check whether the resource was initialized by checking if it is null. If it is not null, it needs to be released. I know I'm nitpicking, but I feel like this could be done more cleanly. What I would like to do is this: InputStream in = new FileInputStream("myfile.txt"); try{ doSomethingWithStream(in); }finally{ in.close(); } To my eyes this looks almost as safe as the previous one. If resource initialization fails and it throws an exception, there's nothing to be done (since I didn't get the resource), so it doesn't have to be inside the try block. The only thing I'm worried about is whether there is some way (I'm not Java certified) that an exception or error can be thrown between the two statements. An even simpler example: InputStream in = new FileInputStream("myfile.txt"); in.close(); Is there any way the stream could be left open that a try-finally block would prevent?
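
    Worth noting: the second pattern is generally considered safe, because if the FileInputStream constructor throws, no stream was ever opened, so there is nothing to close. On Java 7 and later, try-with-resources expresses the same idea with less ceremony. A minimal sketch, where doSomethingWithStream is a hypothetical stand-in for the poster's own processing method:

        import java.io.FileInputStream;
        import java.io.IOException;
        import java.io.InputStream;

        public class ResourceDemo {
            // Hypothetical placeholder for the poster's own processing method.
            static void doSomethingWithStream(InputStream in) throws IOException {
                System.out.println("first byte: " + in.read());
            }

            public static void main(String[] args) throws IOException {
                // If the constructor throws, 'in' is never opened, so there is nothing to close.
                // try-with-resources closes it automatically on both normal and exceptional exit.
                try (InputStream in = new FileInputStream("myfile.txt")) {
                    doSomethingWithStream(in);
                }
            }
        }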

    Read the article

  • Google App Engine atomic section?

    - by bokertov
    Hi. Say you retrieve a set of records from the datastore (something like: select * from MyClass where reserved='false'). How do I ensure that another user doesn't set reserved in the meantime - i.e. that it is still false when I go to update it? I've looked in the Transaction documentation and was shocked by Google's solution, which is to catch the exception and retry in a loop. Is there a solution I'm missing? It's hard to believe that there's no way to have an atomic operation in this environment. (By the way, I could use 'synchronized' inside the servlet, but I think that's not valid, as there's no way to ensure there's only one instance of the servlet object, is there? The same applies to a static-variable solution.) Any idea how to solve this? (Here's the Google solution: http://code.google.com/appengine/docs/java/datastore/transactions.html#Entity_Groups - look at: Key k = KeyFactory.createKey("Employee", "k12345"); Employee e = pm.getObjectById(Employee.class, k); e.counter += 1; pm.makePersistent(e); This requires a transaction because the value may be updated by another user after this code fetches the object, but before it saves the modified object. Without a transaction, the user's request will use the value of counter prior to the other user's update, and the save will overwrite the new value. With a transaction, the application is told about the other user's update. If the entity is updated during the transaction, then the transaction fails with an exception. The application can repeat the transaction to use the new data.) Thanks!
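
    For what it's worth, the retry loop really is the intended pattern here: datastore transactions are optimistic, and synchronized or a static variable won't help because the app can be served from several instances (separate JVMs) at once. A rough sketch of a bounded retry follows; reserveRecord is a hypothetical method that would perform the transactional read-modify-write (check reserved, set it, commit) and let a commit-conflict exception escape.

        public class ReserveWithRetry {
            static final int MAX_ATTEMPTS = 3;

            // Hypothetical: runs "read the record, verify reserved == false, set it to true,
            // commit" inside one datastore transaction, letting a commit conflict propagate
            // as java.util.ConcurrentModificationException.
            static void reserveRecord(String recordKey) {
                // ... transactional read-modify-write would go here ...
            }

            static boolean tryReserve(String recordKey) {
                for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
                    try {
                        reserveRecord(recordKey);
                        return true;  // commit succeeded: this request owns the reservation
                    } catch (java.util.ConcurrentModificationException e) {
                        // another request updated the entity first; loop and try again
                    }
                }
                return false;         // still conflicting after MAX_ATTEMPTS: give up or report
            }

            public static void main(String[] args) {
                System.out.println(tryReserve("MyClass(123)") ? "reserved" : "not reserved");
            }
        }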

    Read the article

  • MySQL Config File for Large System

    - by Jonathon
    We are running MySQL on a Windows 2003 Server Enterpise Edition box. MySQL is about the only program running on the box. We have approx. 8 slaves replicated to it, but my understanding is that having multiple slaves connecting to the same master does not significantly slow down performance, if at all. The master server has 16G RAM, 10 Terabyte drives in RAID 10, and four dual-core processors. From what I have seen from other sites, we have a really robust machine as our master db server. We just upgraded from a machine with only 4G RAM, but with similar hard drives, RAID, etc. It also ran Apache on it, so it was our db server and our application server. It was getting a little slow, so we split the db server onto this new machine and kept the application server on the first machine. We also distributed the application load amongst a few of our other slave servers, which also run the application. The problem is the new db server has mysqld.exe consuming 95-100% of CPU almost all the time and is really causing the app to run slowly. I know we have several queries and table structures that could be better optimized, but since they worked okay on the older, smaller server, I assume that our my.ini (MySQL config) file is not properly configured. Most of what I see on the net is for setting config files on small machines, so can anyone help me get the my.ini file correct for a large dedicated machine like ours? I just don't see how mysqld could get so bogged down! FYI: We have about 100 queries per second. We only use MyISAM tables, so skip-innodb is set in the ini file. And yes, I know it is reading the ini file correctly because I can change some settings (like the server-id and it will kill the server at startup). Here is the my.ini file: #MySQL Server Instance Configuration File # ---------------------------------------------------------------------- # Generated by the MySQL Server Instance Configuration Wizard # # # Installation Instructions # ---------------------------------------------------------------------- # # On Linux you can copy this file to /etc/my.cnf to set global options, # mysql-data-dir/my.cnf to set server-specific options # (@localstatedir@ for this installation) or to # ~/.my.cnf to set user-specific options. # # On Windows you should keep this file in the installation directory # of your server (e.g. C:\Program Files\MySQL\MySQL Server X.Y). To # make sure the server reads the config file use the startup option # "--defaults-file". # # To run run the server from the command line, execute this in a # command line shell, e.g. # mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini" # # To install the server as a Windows service manually, execute this in a # command line shell, e.g. # mysqld --install MySQLXY --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini" # # And then execute this in a command line shell to start the server, e.g. # net start MySQLXY # # # Guildlines for editing this file # ---------------------------------------------------------------------- # # In this file, you can use all long options that the program supports. # If you want to know the options a program supports, start the program # with the "--help" option. # # More detailed information about the individual options can also be # found in the manual. # # # CLIENT SECTION # ---------------------------------------------------------------------- # # The following options will be read by MySQL client applications. 
# Note that only client applications shipped by MySQL are guaranteed # to read this section. If you want your own MySQL client program to # honor these values, you need to specify it as an option during the # MySQL client library initialization. # [client] port=3306 [mysql] default-character-set=latin1 # SERVER SECTION # ---------------------------------------------------------------------- # # The following options will be read by the MySQL Server. Make sure that # you have installed the server correctly (see above) so it reads this # file. # [mysqld] # The TCP/IP Port the MySQL Server will listen on port=3306 #Path to installation directory. All paths are usually resolved relative to this. basedir="D:/MySQL/" #Path to the database root datadir="D:/MySQL/data" # The default character set that will be used when a new schema or table is # created and no character set is defined default-character-set=latin1 # The default storage engine that will be used when create new tables when default-storage-engine=MYISAM # Set the SQL mode to strict #sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION" # we changed this because there are a couple of queries that can get blocked otherwise sql-mode="" #performance configs skip-locking max_allowed_packet = 1M table_open_cache = 512 # The maximum amount of concurrent sessions the MySQL server will # allow. One of these connections will be reserved for a user with # SUPER privileges to allow the administrator to login even if the # connection limit has been reached. max_connections=1510 # Query cache is used to cache SELECT results and later return them # without actual executing the same query once again. Having the query # cache enabled may result in significant speed improvements, if your # have a lot of identical queries and rarely changing tables. See the # "Qcache_lowmem_prunes" status variable to check if the current value # is high enough for your load. # Note: In case your tables change very often or if your queries are # textually different every time, the query cache may result in a # slowdown instead of a performance improvement. query_cache_size=168M # The number of open tables for all threads. Increasing this value # increases the number of file descriptors that mysqld requires. # Therefore you have to make sure to set the amount of open files # allowed to at least 4096 in the variable "open-files-limit" in # section [mysqld_safe] table_cache=3020 # Maximum size for internal (in-memory) temporary tables. If a table # grows larger than this value, it is automatically converted to disk # based table This limitation is for a single table. There can be many # of them. tmp_table_size=30M # How many threads we should keep in a cache for reuse. When a client # disconnects, the client's threads are put in the cache if there aren't # more than thread_cache_size threads from before. This greatly reduces # the amount of thread creations needed if you have a lot of new # connections. (Normally this doesn't give a notable performance # improvement if you have a good thread implementation.) thread_cache_size=64 #*** MyISAM Specific options # The maximum size of the temporary file MySQL is allowed to use while # recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE. # If the file-size would be bigger than this, the index will be created # through the key cache (which is slower). 
myisam_max_sort_file_size=100G # If the temporary file used for fast index creation would be bigger # than using the key cache by the amount specified here, then prefer the # key cache method. This is mainly used to force long character keys in # large tables to use the slower key cache method to create the index. myisam_sort_buffer_size=64M # Size of the Key Buffer, used to cache index blocks for MyISAM tables. # Do not set it larger than 30% of your available memory, as some memory # is also required by the OS to cache rows. Even if you're not using # MyISAM tables, you should still set it to 8-64M as it will also be # used for internal temporary disk tables. key_buffer_size=3072M # Size of the buffer used for doing full table scans of MyISAM tables. # Allocated per thread, if a full scan is needed. read_buffer_size=2M read_rnd_buffer_size=8M # This buffer is allocated when MySQL needs to rebuild the index in # REPAIR, OPTIMZE, ALTER table statements as well as in LOAD DATA INFILE # into an empty table. It is allocated per thread so be careful with # large settings. sort_buffer_size=2M #*** INNODB Specific options *** innodb_data_home_dir="D:/MySQL InnoDB Datafiles/" # Use this option if you have a MySQL server with InnoDB support enabled # but you do not plan to use it. This will save memory and disk space # and speed up some things. skip-innodb # Additional memory pool that is used by InnoDB to store metadata # information. If InnoDB requires more memory for this purpose it will # start to allocate it from the OS. As this is fast enough on most # recent operating systems, you normally do not need to change this # value. SHOW INNODB STATUS will display the current amount used. innodb_additional_mem_pool_size=11M # If set to 1, InnoDB will flush (fsync) the transaction logs to the # disk at each commit, which offers full ACID behavior. If you are # willing to compromise this safety, and you are running small # transactions, you may set this to 0 or 2 to reduce disk I/O to the # logs. Value 0 means that the log is only written to the log file and # the log file flushed to disk approximately once per second. Value 2 # means the log is written to the log file at each commit, but the log # file is only flushed to disk approximately once per second. innodb_flush_log_at_trx_commit=1 # The size of the buffer InnoDB uses for buffering log data. As soon as # it is full, InnoDB will have to flush it to disk. As it is flushed # once per second anyway, it does not make sense to have it very large # (even with long transactions). innodb_log_buffer_size=6M # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and # row data. The bigger you set this the less disk I/O is needed to # access data in tables. On a dedicated database server you may set this # parameter up to 80% of the machine physical memory size. Do not set it # too large, though, because competition of the physical memory may # cause paging in the operating system. Note that on 32bit systems you # might be limited to 2-3.5G of user level memory per process, so do not # set it too high. innodb_buffer_pool_size=500M # Size of each log file in a log group. You should set the combined size # of log files to about 25%-100% of your buffer pool size to avoid # unneeded buffer pool flush activity on log file overwrite. However, # note that a larger logfile size will increase the time needed for the # recovery process. innodb_log_file_size=100M # Number of threads allowed inside the InnoDB kernel. 
The optimal value # depends highly on the application, hardware as well as the OS # scheduler properties. A too high value may lead to thread thrashing. innodb_thread_concurrency=10 #replication settings (this is the master) log-bin=log server-id = 1 Thanks for all the help. It is greatly appreciated.

    Read the article

  • How do I set up the python/c library correctly?

    - by Bartvbl
    I have been trying to get the Python/C library to work with my MinGW compiler. The Python online documentation (http://docs.python.org/c-api/intro.html#include-files) only mentions that I need to include the Python.h file. I grabbed it from the installation directory (as is required on the Windows platform) and tested it by compiling a source file containing just: #include "Python.h". This compiled fine. Next, I tried out the snippet of code shown a bit lower on the Python/C API page: PyObject *t; t = PyTuple_New(3); PyTuple_SetItem(t, 0, PyInt_FromLong(1L)); PyTuple_SetItem(t, 1, PyInt_FromLong(2L)); PyTuple_SetItem(t, 2, PyString_FromString("three")); For some reason, the code would only compile if I removed the last four lines (so that only the PyObject variable definition was left); calling the actual tuple constructors produced errors. I am probably missing something completely obvious here, given I am very new to C, but does anyone know what it is?

    Read the article

  • Android Unit Testing - Resolution & Verification Problems

    - by Bill
    I just switched the way my Android project is being built, and none of my unit tests work any more... I get errors like WARN/dalvikvm(575): VFY: unable to resolve static field X in ..... WARN/dalvikvm(575): VFY: unable to find class referenced in signature These errors only come from my unit tests, where classes defined in the test project can't even see other classes defined in the same project. Before, each project had its own directory with copies of the third-party jar files. I've read around that Dex does weird things with references, but I haven't been able to figure out how to fix this problem. Is there a better way to do this? I would love to see an example of a large Android workspace with multiple projects, jar references, etc... Is it possible to fix this with an Order/Export tweak? The project is structured like this:
    Eclipse Workspace (PROJECT_HOME classpath variable)
      - lib: 3rd-party jars, android.jar
      - Java Project A: looks in PROJECT_HOME
      - Java Project B: looks in PROJECT_HOME; depends on Project A
      - Android Project: depends on A & B; looks in PROJECT_HOME
      - Android Test Project: depends on A, B, and the Android Project; looks in PROJECT_HOME

    Read the article

  • How do I execute queries upon DB connection in Rails?

    - by sycobuny
    I have certain initializing functions that I use to set up audit logging on the DB server side (ie, not rails) in PostgreSQL. At least one has to be issued (setting the current user) before inserting data into or updating any of the audited tables, or else the whole query will fail spectacularly. I can easily call these every time before running any save operation in the code, but DRY makes me think I should have the code repeated in as few places as possible, particularly since this diverges greatly from the ideal of database agnosticism. Currently I'm attempting to override ActiveRecord::Base.establish_connection in an initializer to set it up so that the queries are run as soon as I connect automatically, but it doesn't behave as I expect it to. Here is the code in the initializer: class ActiveRecord::Base # extend the class methods, not the instance methods class << self alias :old_establish_connection :establish_connection # hide the default def establish_connection(*args) ret = old_establish_connection(*args) # call the default # set up necessary session variables for audit logging # call these after calling default, to make sure conn is established 1st db = self.class.connection db.execute("SELECT SV.set('current_user', 'test@localhost')") db.execute("SELECT SV.set('audit_notes', NULL)") # end "empty variable" err ret # return the default's original value end end end puts "Loaded custom establish_connection into ActiveRecord::Base" sycobuny:~/rails$ ruby script/server = Booting WEBrick = Rails 2.3.5 application starting on http://0.0.0.0:3000 Loaded custom establish_connection into ActiveRecord::Base This doesn't give me any errors, and unfortunately I can't check what the method looks like internally (I was using ActiveRecord::Base.method(:establish_connection), but apparently that creates a new Method object each time it's called, which is seemingly worthless cause I can't check object_id for any worthwhile information and I also can't reverse the compilation). However, the code never seems to get called, because any attempt to run a save or an update on a database object fails as I predicted earlier. If this isn't a proper way to execute code immediately on connection to the database, then what is?

    Read the article

  • Android Bluetooth syncing

    - by Darryl
    I am connecting to a Bluetooth-enabled camera, and I am able to connect using the methods found in the BluetoothChat example. I need to send commands to the camera. The issue is that I also need to get a response back from the camera after I send the command. So basically I need to write a command and receive a response. However, the commands sometimes don't generate a response. The documentation for the camera even says that you "have to send the sync command as many as 25 times on power up before you will get a response." So I cannot just write a command and wait for a response, as the "read" function blocks the thread. If I have the read function in another thread, like the BluetoothChat example, there seem to be sync issues, i.e. if I issue a write command, how can I know whether a read is happening in the other thread? I did set a global variable to check, but this seems "iffy" at best. So basically I need to write to the Bluetooth connection and then attempt to read from it. However, I need to let that read time out, and if I haven't received a response, I need to write again until I get a response (or until it's tried a set number of times). I don't need the read function to be running all the time in the background. Any ideas? Thanks in advance for your time.
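
    One common workaround, since (as far as I know) Android's Bluetooth socket streams don't expose a configurable read timeout, is to write the command and then poll InputStream.available() against a deadline, resending the command if nothing arrives in time. A rough sketch follows; the stream objects are assumed to come from an already-connected BluetoothSocket, and the retry count and poll interval are arbitrary.

        import java.io.IOException;
        import java.io.InputStream;
        import java.io.OutputStream;

        public class SyncCommand {
            /**
             * Writes 'command' and waits up to timeoutMs for any reply, retrying up to
             * maxAttempts times (the camera docs say the sync command may need ~25 tries).
             * Returns the reply bytes, or null if no response was ever received.
             */
            static byte[] sendWithRetry(InputStream in, OutputStream out, byte[] command,
                                        int maxAttempts, long timeoutMs)
                    throws IOException, InterruptedException {
                for (int attempt = 0; attempt < maxAttempts; attempt++) {
                    out.write(command);
                    out.flush();

                    long deadline = System.currentTimeMillis() + timeoutMs;
                    while (System.currentTimeMillis() < deadline) {
                        if (in.available() > 0) {          // data arrived: read without blocking forever
                            byte[] buffer = new byte[in.available()];
                            int n = in.read(buffer);
                            byte[] reply = new byte[n];
                            System.arraycopy(buffer, 0, reply, 0, n);
                            return reply;
                        }
                        Thread.sleep(20);                   // poll interval
                    }
                    // timed out: fall through and resend the command
                }
                return null;                                // no response after maxAttempts
            }
        }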

    Read the article

  • Inheritance of TCollectionItem

    - by JamesB
    I'm planning to have a collection of items stored in a TCollection. Each item will derive from TBaseItem, which in turn derives from TCollectionItem; with this in mind, the collection will return a TBaseItem when an item is requested. Now, each TBaseItem will have a Calculate function. In TBaseItem this will just return an internal variable, but in each of the derivations of TBaseItem the Calculate function requires a different set of parameters. The collection will have a Calculate All function which iterates through the collection items and calls each Calculate function; obviously it would need to pass the correct parameters to each one. I can think of three ways of doing this: 1) Create a virtual/abstract method for each calculate function in the base class and override it in the derived class. This would mean no type casting was required when using the object, but it would also mean I have to create lots of virtual methods and a large if...else statement detecting the type and calling the correct "calculate" method. It also means that calling the calculate method is prone to error, as you would have to know, when writing the code, which one to call for which type, with the correct parameters, to avoid an Error/EAbstractError. 2) Create a record structure with all the possible parameters in it and use this as the parameter for the "calculate" function. This has the added benefit that it can be passed to the "calculate all" function, as it can contain all the parameters required, and it avoids a potentially very long parameter list. 3) Just type-cast the TBaseItem to access the correct calculate method. This would tidy up TBaseItem quite a lot compared to the first method. What would be the best way to handle this collection?
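
    Option 2 is the one that keeps the Calculate All loop free of type checks and casts: a single parameter record is handed to every item, and each override reads only the fields it cares about. The idea is language-neutral; here is a small sketch of it in Java purely for illustration (the class and field names are invented, and in Delphi the same shape would be a record plus a single virtual Calculate method on TBaseItem):

        // All possible inputs gathered into one parameter object (option 2 above).
        class CalcParams {
            double rate;
            double quantity;
            double threshold;
        }

        abstract class BaseItem {
            // Every item exposes the same signature; each subclass reads only what it needs.
            abstract double calculate(CalcParams p);
        }

        class FixedItem extends BaseItem {
            private final double value;
            FixedItem(double value) { this.value = value; }
            @Override double calculate(CalcParams p) { return value; }              // ignores the params
        }

        class RateItem extends BaseItem {
            @Override double calculate(CalcParams p) { return p.rate * p.quantity; }
        }

        public class CalcDemo {
            public static void main(String[] args) {
                CalcParams p = new CalcParams();
                p.rate = 2.5; p.quantity = 4;
                BaseItem[] items = { new FixedItem(10), new RateItem() };
                double total = 0;
                for (BaseItem item : items) total += item.calculate(p);  // no casts, no type checks
                System.out.println(total);  // 20.0
            }
        }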

    Read the article

  • Flash Player: Any remedy for the stale video image data problem (in a reused NetStream object)?

    - by amn
    Has anyone experienced stale stills from a previous playback in a reused NetStream object? If so, what are the workarounds, other than re-creating the object (which eats performance and time)? It is hard to reuse NetStream objects because of a (in my opinion) fundamental issue with them - when you 'close' a playing stream and at a later point issue a 'play' call on it again with a different name, the stream appears to still contain a stale image lingering from the previous playback, and this is of course displayed in the Video object for a moment - the moment, I assume, it takes for new stream data to become available from the server. Because of this behavior, to improve my users' visual experience, I simply discard a NetStream object after a playback session, assign a new NetStream object to the same variable, set it up, and play something else. It appears to work - no stale image - but what bugs me is that it's a workaround and costs performance (constructing and setting up the object again - event listeners and 'client' delegates and more memory usage - and NetStream objects are not garbage collected immediately; it takes some time). It would be really nice to REALLY be able to reuse a stream. I am thinking of something akin to the Video.clear method, but for the NetStream class. Am I missing something?

    Read the article

  • JDO architecture: One to many relationship and cascading deleting

    - by user361897
    I'm new to object-oriented database design, and I'm trying to understand how I should be structuring my classes in JDO for Google App Engine, particularly one-to-many relationships. Let's say I'm building a structure for a department store where there are many departments, and each department has many products. So I'd want to have a class called Department, with a variable that is a list of Product objects. @PersistenceCapable public class Department { @PrimaryKey @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY) private String deptID; @Persistent private String departmentName; @Persistent private List<Product> products; } @PersistenceCapable public class Product { @PrimaryKey @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY) private String productID; @Persistent private String productName; } But one Product can be in more than one Department (a battery could be in electronics and in household supplies). So the next question is, how do I avoid duplicating data in the OOD world and keep only one copy of the product data across numerous departments? And the next question is, let's say I delete a particular product - how does each of the departments know it was deleted?
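
    One common answer on App Engine, as I understand the JDO "unowned relationship" pattern from its docs, is to keep Product as its own top-level entity and have each Department store a list of datastore Key values rather than Product objects, so the product data exists exactly once. A sketch of what Department might look like under that approach (the accessor names are invented, and the App Engine SDK and JDO annotations are assumed to be on the classpath):

        import java.util.ArrayList;
        import java.util.List;

        import javax.jdo.annotations.IdGeneratorStrategy;
        import javax.jdo.annotations.PersistenceCapable;
        import javax.jdo.annotations.Persistent;
        import javax.jdo.annotations.PrimaryKey;

        import com.google.appengine.api.datastore.Key;

        @PersistenceCapable
        public class Department {
            @PrimaryKey
            @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
            private Key key;

            @Persistent
            private String departmentName;

            // Unowned relationship: only keys are stored, so a Product that appears in
            // several departments still exists exactly once in the datastore.
            @Persistent
            private List<Key> productKeys = new ArrayList<Key>();

            public void addProduct(Key productKey) { productKeys.add(productKey); }
            public List<Key> getProductKeys()      { return productKeys; }
        }

    Deletion then has to be handled by the application: either remove the product's key from every department that references it, or tolerate dangling keys and filter them out when a department's products are loaded - unowned relationships have no automatic cascade.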

    Read the article

  • Initialise a var in Scala

    - by user unknown
    I have a class where I would like to initialize my var by reading a config file, which produces intermediate objects/vals that I would like to group and hide in a method. Here is the bare minimum of the problem - I call the ctor with a param i, in reality a File to parse, and the init method generates the String s, in reality more complicated than here, with a lot of intermediate objects being created: class Foo (val i: Int) { var s : String; def init () { s = "" + i } init () } This will produce the error: class Foo needs to be abstract, since variable s is not defined. In this example it is easy to solve by setting the String to "": var s = "";, but in reality the object is more complex than a String, without an appropriate null implementation. I know I can use an Option, which works for more complicated things than String too: var s : Option [String] = None def init () { s = Some ("" + i) } or I can dispense with my method call. Using an Option will force me to write Some over and over again, without much benefit, since there is no need for a None other than to initialize it the way I thought I could. Is there another way to achieve my goal?

    Read the article

  • In AS3/Flex, how can I get from flat data to hierarchical data?

    - by Dave S
    I have some data that gets pulled out of a database and mapped to an arraycollection. This data has a field called parentid, and I would like to map the data into a new arraycollection with hierarchical information to then feed to an advanced data grid. I think I'm basically trying to take the parent object, add a new property/field/variable of type ArrayCollection called children and then remove the child object from the original list and clone it into the children array? Any help would be greatly appreciated, and I apologize ahead of time for this code: private function PutChildrenWithParents(accountData : ArrayCollection) : ArrayCollection{ var pos_inner:int = 0; var pos_outer:int = 0; while(pos_outer < accountData.length){ if (accountData[pos_outer].ParentId != null){ pos_inner = 0; while(pos_inner < accountData.length){ if (accountData[pos_inner].Id == accountData[pos_outer].ParentId){ accountData.addItemAt( accountData[pos_inner] + {children:new ArrayCollection(accountData[pos_outer])}, pos_inner ); accountData.removeItemAt(pos_outer); accountData.removeItemAt(pos_inner+1); } pos_inner++; } } pos_outer++; } return accountData; }
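
    The usual way to turn flat parent-id rows into a tree is two passes over a map keyed by id: first index every row, then attach each row to its parent's children list (rows whose parentId is null become the roots). Here is that shape sketched in Java for illustration, since the idea carries over directly to AS3 and ArrayCollections; the Node fields mirror the Id/ParentId fields from the question, and the sample data is invented.

        import java.util.ArrayList;
        import java.util.LinkedHashMap;
        import java.util.List;
        import java.util.Map;

        public class TreeBuilder {
            static class Node {
                final int id;
                final Integer parentId;          // null for a root account
                final List<Node> children = new ArrayList<>();
                Node(int id, Integer parentId) { this.id = id; this.parentId = parentId; }
            }

            /** Returns the root nodes; every other node ends up in its parent's children list. */
            static List<Node> buildTree(List<Node> flat) {
                Map<Integer, Node> byId = new LinkedHashMap<>();
                for (Node n : flat) byId.put(n.id, n);       // pass 1: index by id

                List<Node> roots = new ArrayList<>();
                for (Node n : flat) {                         // pass 2: hook each node to its parent
                    Node parent = n.parentId == null ? null : byId.get(n.parentId);
                    if (parent != null) parent.children.add(n);
                    else roots.add(n);
                }
                return roots;
            }

            public static void main(String[] args) {
                List<Node> flat = List.of(new Node(1, null), new Node(2, 1),
                                          new Node(3, 1), new Node(4, 2));
                System.out.println(buildTree(flat).get(0).children.size()); // 2
            }
        }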

    Read the article

  • How to back up using the backup APIs in C++

    - by user1603185
    I am writing an application that backs up specified files, using the backup API calls, i.e. CreateFile, BackupRead and WriteFile. I am getting "Access violation reading location" errors. I have attached the code below. #include <windows.h> int main() { HANDLE hInput, hOutput; //m_filename is a variable holding the file path to read from hInput = CreateFile(L"C:\\Key.txt", GENERIC_READ, 0, NULL, OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, NULL); //strLocation contains the path of the file I want to create. hOutput= CreateFile(L"C:\\tmp\\", GENERIC_WRITE, NULL, NULL, CREATE_ALWAYS, NULL, NULL); DWORD dwBytesToRead = 1024 * 1024 * 10; BYTE *buffer; buffer = new BYTE[dwBytesToRead]; BOOL bReadSuccess = false,bWriteSuccess = false; DWORD dwBytesRead,dwBytesWritten; LPVOID lpContext; //Now comes the important bit: do { bReadSuccess = BackupRead(hInput, buffer, sizeof(BYTE) *dwBytesToRead, &dwBytesRead, false, true, &lpContext); bWriteSuccess= WriteFile(hOutput, buffer, sizeof(BYTE) *dwBytesRead, &dwBytesWritten, NULL); }while(dwBytesRead == dwBytesToRead); return 0; } Can anyone suggest how to use these APIs? Thanks.

    Read the article

  • iPhone Core Data Internal Inconsistency

    - by kiyoshi
    This question has something to do with the question I posted here: http://stackoverflow.com/questions/1230858/iphone-core-data-crashing-on-save however the error is different so I am making a new question. Now I get this error when trying to insert new objects into my managedObjectContext: *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: '"MailMessage" is not a subclass of NSManagedObject.' But clearly it is: @interface MailMessage : NSManagedObject { .... And when I run this code: NSManagedObjectModel *managedObjectModel = [[self.managedObjectContext persistentStoreCoordinator] managedObjectModel]; NSEntityDescription *entity =[[managedObjectModel entitiesByName] objectForKey:@"MailMessage"]; NSManagedObject *newObject = [[NSManagedObject alloc] initWithEntity:entity insertIntoManagedObjectContext:self.managedObjectContext]; It runs fine when I do not present an MFMailComposeViewController, but if I run this code in the - (void)mailComposeController:(MFMailComposeViewController*)controller didFinishWithResult:(MFMailComposeResult)result error:(NSError*)error { method, it throws the above error when creating the newObject variable. The entity object when I use print object produces the following: (<NSEntityDescription: 0x1202e0>) name MailMessage, managedObjectClassName MailMessage, renamingIdentifier MailMessage, isAbstract 0, superentity name (null), properties { in both cases, so I don't think the managedObjectContext is completely invalid. I have no idea why it would say MailMessage is not a subclass of NSManagedObject at that point, and not at the other. Any help would be appreciated, thanks in advance.

    Read the article

  • Return a Const Char* by reading an @property NSString in separate class

    - by Andrew
    I'm probably being an idiot here, but I cannot for the life of me find the answer that I'm looking for. I have an array of CalEvents returned from a CalendarStore query, and for other reasons I am finding the first location of any upcoming event for today that is not an all-day or multi-day event. +(const char*) suggestFirstiCalLocation{ CalCalendarStore *store = [CalCalendarStore defaultCalendarStore]; NSPredicate *allEventsPredicate = [CalCalendarStore eventPredicateWithStartDate:[NSDate date] endDate:[[NSDate date] initWithTimeIntervalSinceNow:3600] calendars:[store calendars]]; NSArray *currentEventCalendarArray = [store eventsWithPredicate:allEventsPredicate]; for (int i = 0; i< [currentEventCalendarArray count]; i++){ if (![[currentEventCalendarArray objectAtIndex:i] isAllDay]){ //Now that other events are cleared, check for multi-day NSDate *startOnDate = [[currentEventCalendarArray objectAtIndex:i] startDate]; NSDate *endOnDate = [[currentEventCalendarArray objectAtIndex:i] endDate]; if ([endOnDate timeIntervalSinceDate:startOnDate ] < 86400.0){ NSString * iCalLocation = [[currentEventCalendarArray objectAtIndex:i] location]; return [iCalLocation UTF8String]; } } } return ""; } For other reasons, I am returning a const char with the value of the location that is found. However, I cannot seem to return "iCalLocation" at all. The compiler fails on the line where I am initializing the "iCalLocation" variable: "Cannot convert to pointer type" Being frank: I am new to Objective-C, and I am still trying to figure points, properties, and such out.

    Read the article

  • Switch statement functioning improperly and giving an error when I put break;

    - by nav
    The following is the code I used in a program - here the month variable is an integer: switch(month) { case 1: case 3: case 5: case 7: case 8: case 10: case 12: return 31; break; case 2: return 28; break; case 4: case 6: case 9: case 11: return 30; break; default: System.out.println("Invalid month."); return 0; } Surprisingly, when I use the above switch construct, it gives an error saying the code is unreachable for statements after each break statement. Then I removed all the break statements, and the new code looks like this: switch(month) { case 1: case 3: case 5: case 7: case 8: case 10: case 12: return 31; case 2: return 28; case 4: case 6: case 9: case 11: return 30; default: System.out.println("Invalid month."); return 0; } Now, after removing the break statements, the code worked perfectly well. My question is: in a switch construct, isn't it mandatory to use break, or else control keeps flowing and the remaining cases are executed too? So why in the world is the previous, supposedly syntactically right, version giving an error, while the modified, supposedly incorrect, version runs perfectly well? Any explanation, anyone?
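
    A possible explanation, stated here as general Java behavior rather than anything from the post: when every case ends in a return, a break placed after the return can never execute, and the Java compiler rejects unreachable statements outright. So the first version is the one that is actually ill-formed, while the second is perfectly legal: the return exits the method before any fall-through could happen, so no break is needed. A compilable version of the second form:

        public class DaysInMonth {
            // Each case exits the method via return, so no break statements are needed;
            // a break placed after a return would be flagged as an unreachable statement.
            static int daysInMonth(int month) {
                switch (month) {
                    case 1: case 3: case 5: case 7: case 8: case 10: case 12:
                        return 31;
                    case 2:
                        return 28;
                    case 4: case 6: case 9: case 11:
                        return 30;
                    default:
                        System.out.println("Invalid month.");
                        return 0;
                }
            }

            public static void main(String[] args) {
                System.out.println(daysInMonth(2));  // 28
                System.out.println(daysInMonth(13)); // prints "Invalid month." then 0
            }
        }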

    Read the article

  • How to check if a position inside a std::string exists? (C++)

    - by yox
    Hello, I have a long string variable and I want to search it for specific words and trim the text according to those words. Say I have the following text: "This amazing new wearable audio solution features a working speaker embedded into the front of the shirt and can play music or sound effects appropriate for any situation. It's just like starring in your own movie" and the words: "solution", "movie". I want to extract from the big string (like Google does on its results page): "...new wearable audio solution features a working speaker embedded..." and "...just like starring in your own movie". For that I'm using the code: for (std::vector<string>::iterator it = words.begin(); it != words.end(); ++it) { int loc1 = (int)desc.find( *it, 0 ); if( loc1 != string::npos ) { while(desc.at(loc1-i) && i<=80){ i++; from=loc1-i; if(i==80) fromdots=true; } i=0; while(desc.at(loc1+(int)(*it).size()+i) && i<=80){ i++; to=loc1+(int)(*it).size()+i; if(i==80) todots=true; } for(int i=from;i<=to;i++){ if(fromdots) mini+="..."; mini+=desc.at(i); if(todots) mini+="..."; } } but desc.at(loc1-i) causes an out-of-range exception... I don't know how to check whether that position exists without causing an exception! Help please!

    Read the article

  • Using HTML::Template within a value attribute

    - by Zerobu
    Hello, my question is how would I use an HTML::Template tag inside a value of form to change that form. For example <table border="0" cellpadding="8" cellspacing="1"> <tr> <td align="right">File:</td> <td> <input type="file" name="upload" value= style="width:400px"> </td> </tr> <tr> <td align="right">File Name:</td> <td> <input type="text" name="filename" style="width:400px" value="" > </td> </tr> <tr> <td align="right">Title:</td> <td> <input type="text" name="title" style="width:400px" value="" /> </td> </tr> <tr> <td align="right">Date:</td> <td> <input type="text" name="date" style="width:400px" value="" /> </td> </tr> <tr> <td colspan="2" align="right"> <input type="button" value="Cancel"> <input type="submit" name="action" value="Upload" /> </td> </tr> </table> I want the value to have a variable in it.

    Read the article

  • How to use Node.js to build pages that are a mix between static and dynamic content?

    - by edt
    All pages on my 5 page site should be output using a Node.js server. Most of the page content is static. At the bottom of each page, there is a bit of dynamic content. My node.js code currently looks like: var http = require('http'); http.createServer(function (request, response) { console.log('request starting...'); response.writeHead(200, { 'Content-Type': 'text/html' }); var html = '<!DOCTYPE html><html><head><title>My Title</title></head><body>'; html += 'Some more static content'; html += 'Some more static content'; html += 'Some more static content'; html += 'Some dynamic content'; html += '</body></html>'; response.end(html, 'utf-8'); }).listen(38316); I'm sure there are numerous things wrong about this example. Please enlighten me! For example: How can I add static content to the page without storing it in a string as a variable value with += numerous times? What is the best practices way to build a small site in Node.js where all pages are a mix between static and dynamic content?

    Read the article

  • Unable to write to a text file

    - by chrissygormley
    Hello, I am running some tests and need to write to a file. When I run the tests, the open(file, 'r+') call does not write to the file. The test script is below: class GetDetailsIP(TestGet): def runTest(self): self.category = ['PTZ'] try: # This runs and returns a value result = self.client.service.Get(self.category) mylogfile = open("test.txt", "r+") print >>mylogfile, result result = ("".join(mylogfile.readlines()[2])) result = str(result.split(':')[1].lstrip("//").split("/")[0]) mylogfile.close() except suds.WebFault, e: assert False except Exception, e: pass finally: if 'result' in locals(): self.assertEquals(result, self.camera_ip) else: assert False When this test runs, no value has been written to the text file, although a value is returned in the variable result. I have also tried mylogfile.write(result). If the file does not exist, it claims the file does not exist and doesn't create one. Could this be a permission problem where Python is not allowed to create the file? I have made sure that all other reads of this file are closed, so the file should not be locked. Can anyone offer any suggestion as to why this is happening? Thanks

    Read the article
