Search Results

Search found 35003 results on 1401 pages for 'table variable'.

  • Updating MS Access fields through MS Excel cells

    - by SpikETidE
    Hi everyone... Consider that I have an Excel workbook and an Access table that do not necessarily share the same structure (i.e. they may not have the same number of columns). When I open the workbook, the rows in the Excel sheet get populated from the rows in the Access table (copied from the Access table into a particular range of cells specified using macros). Then I modify certain cells in the Excel sheet. I also have a button called "Save" in the sheet which executes a macro when pressed. My question is: how can I update the Access table to reflect the changes in the Excel sheet when the Save button is clicked? Thanks for your time and suggestions!

  • MySQL does not utilize my CPU and RAM enough?

    - by vick
    Hello everyone! I am importing a 2.5 GB CSV file into a MySQL table. My storage engine is InnoDB. Here is the script:

      use xxx;
      DROP TABLE IF EXISTS `xxx`.`xxx`;
      CREATE TABLE `xxx`.`xxx` (
        `xxx_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
        `name` varchar(128) NOT NULL,
        `yy` varchar(128) NOT NULL,
        `yyy` varchar(64) NOT NULL,
        `yyyy` varchar(2) NOT NULL,
        `yyyyy` varchar(10) NOT NULL,
        `url` varchar(64) NOT NULL,
        `p` varchar(10) NOT NULL,
        `pp` varchar(10) NOT NULL,
        `category` varchar(256) NOT NULL,
        `flag` varchar(4) NOT NULL,
        PRIMARY KEY (`xxx_id`)
      ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

      set autocommit = 0;
      load data local infile '/home/xxx/raw.csv' into table company
        fields terminated by ',' optionally enclosed by '"'
        lines terminated by '\r\n'
        (name, yy, yyy, yyyy, yyyyy, url, p, pp, category, flag);
      commit;

    Why does my PC (Core i7 920 with 6 GB of RAM) only consume 9% CPU power and 60% RAM when running these queries?
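
    Low CPU use during a bulk LOAD DATA usually means the import is disk- or log-bound rather than CPU-bound. As a hedged sketch (not a tested fix for this setup), these are the session settings commonly relaxed for a one-off import of trusted data, alongside the autocommit trick already in the script:

      -- sketch only: safe for a one-off load of trusted data
      SET unique_checks = 0;       -- skip secondary-index uniqueness checks during the load
      SET foreign_key_checks = 0;  -- skip FK validation during the load
      -- server-side, innodb_buffer_pool_size and innodb_log_file_size are the
      -- variables that usually dominate bulk-load speed; they live in my.cnf.
      -- re-enable the checks after COMMIT:
      SET unique_checks = 1;
      SET foreign_key_checks = 1;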

  • Best indexing strategy for several varchar columns in Postgres

    - by Corey
    I have a table with 10 columns that need to be searchable (the table itself has about 20 columns). The user will enter query criteria for at least one of the columns, but possibly all ten. All non-empty criteria are then combined into an AND condition. Suppose the user provided non-empty criteria for column1, column4 and column8; the query would be:

      select * from the_table
      where column1 like '%column1_query%'
        and column4 like '%column4_query%'
        and column8 like '%column8_query%'

    So my question is: am I better off creating one index with 10 columns, or 10 indexes with 1 column each? Or do I need to find out which sets of columns are queried together frequently and create indexes for them (an index on columns 1, 4 and 8 in the case above)? If my understanding is correct, a single index of 10 columns would only work effectively if all 10 columns are in the condition. Open to any suggestions here. Additionally, the rowcount of the table is only expected to be around 20-30K rows, but I want to make sure any and all searches on the table are fast. Thanks!
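
    Worth noting when weighing the options: a plain btree index cannot serve a leading-wildcard LIKE '%...%' predicate at all, so the usual Postgres answer here is one trigram index per searched column. A hedged sketch, assuming a Postgres version with the pg_trgm extension available:

      -- sketch assuming the pg_trgm contrib extension can be installed
      CREATE EXTENSION IF NOT EXISTS pg_trgm;
      CREATE INDEX idx_the_table_column1_trgm
        ON the_table USING gin (column1 gin_trgm_ops);
      -- one such index per searchable column; the planner can then
      -- combine the matching index scans for whatever subset is ANDed.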

  • Trying to modify a constraint in PostgreSQL

    - by MISMajorDeveloperAnyways
    Postgres is getting quite annoying lately. I checked the documentation provided by Oracle and found a way to do this without dropping the table. Problem is, it errors out at MODIFY, as Postgres does not recognize the keyword. I am using EMS SQL Manager for PostgreSQL.

      ALTER TABLE public.public_insurer_credit
        MODIFY CONSTRAINT public_insurer_credit_fk1
        DEFERRABLE, INITIALLY DEFERRED;

    I was able to work around it by dropping and re-creating the constraint:

      ALTER TABLE "public"."public_insurer_credit"
        DROP CONSTRAINT "public_insurer_credit_fk1" RESTRICT;

      ALTER TABLE "public"."public_insurer_credit"
        ADD CONSTRAINT "public_insurer_credit_fk1"
        FOREIGN KEY ("branch_id", "order_id", "public_insurer_id")
        REFERENCES "public"."order_public_insurer" ("branch_id", "order_id", "public_insurer_id")
        ON UPDATE CASCADE
        ON DELETE NO ACTION
        DEFERRABLE INITIALLY DEFERRED;
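
    A hedged addendum, not from the original asker: MODIFY CONSTRAINT is Oracle syntax, which is why Postgres rejects the keyword. Newer PostgreSQL releases (9.4 and later, so this is an assumption about the server version) can change the constraint timing in place:

      -- sketch; requires PostgreSQL 9.4+ and works for FK constraints
      ALTER TABLE public.public_insurer_credit
        ALTER CONSTRAINT public_insurer_credit_fk1
        DEFERRABLE INITIALLY DEFERRED;
      -- on older servers, the DROP/ADD workaround shown in the question
      -- (ideally inside one transaction) remains the way to do it.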

  • Can I join between two MySQL tables stored on separate machines?

    - by CuriousCoder
    I have a relatively light query that needs information from a local MySQL table along with another MySQL table which is stored on a physically separate machine (on the same network). I'm keen to avoid setting up replication just to facilitate this light query, which only needs to be executed once a day. Is there any way that I can join with a table on a remote machine using one query? Or run a SELECT INTO a local table? Note: I'm using C# and .NET 4.
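
    One route that fits the "one query" wish is MySQL's FEDERATED storage engine, which maps a local table definition onto a remote table. A hedged sketch with made-up names, assuming FEDERATED is enabled on the local server (it often is not by default):

      -- sketch: local proxy for the remote table (hypothetical names)
      CREATE TABLE remote_users (
        id   INT NOT NULL,
        name VARCHAR(64),
        PRIMARY KEY (id)
      ) ENGINE=FEDERATED
        CONNECTION='mysql://user:pass@remote-host:3306/remote_db/users';

      -- the daily query can then join it like any local table:
      SELECT m.*, r.name
      FROM   messages m
      JOIN   remote_users r ON r.id = m.user_id;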

  • Invoke SQL function using NHibernate?

    - by net205
    I want to query like this:

      select * from table where concat(',', ServiceCodes, ',') like '%,33,%';
      select * from table where (','||ServiceCodes||',') like '%,33,%';

    so I wrote this code:

      ICriteria cri = NHibernateSessionReader.CreateCriteria(typeof(ConfigTemplateList));
      cri.Add(Restrictions.Like(
          Projections.SqlFunction("concat", NHibernateUtil.String,
              Projections.Property("ServiceCodes")),
          "%,33,%"));

    but I get SQL similar to:

      select * from table where (ServiceCodes) like '%,33,%';

    which is not what I want. How do I do it? Thanks!

  • LINQ to SQL for tables across databases. Or View?

    - by BritishDeveloper
    I have a Message table and a User table. They are in separate databases. There is a userID in the Message table that is used to join to the User table to find things like userName. How can I create this in LINQ to SQL? I can't seem to do a cross-database join. Should I create a View in the database and use that instead? Will that work? And what will happen to CRUD against it? E.g. if I delete a message, surely it won't delete the user? I'd imagine it would throw an error. What to do? I can't move the tables into the same database!
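
    On the View idea: cross-database views are allowed when both databases sit on the same SQL Server instance (an assumption here). A hedged sketch with hypothetical database and column names:

      -- sketch only; MessagesDb/UsersDb and the columns are stand-ins
      CREATE VIEW dbo.MessageWithUser AS
      SELECT m.MessageID, m.Body, m.UserID, u.UserName
      FROM   MessagesDb.dbo.[Message] AS m
      JOIN   UsersDb.dbo.[User]       AS u ON u.UserID = m.UserID;
      -- reads are fine; DML through a view may modify only one base table,
      -- so a DELETE aimed at Message rows can work, but anything touching
      -- both tables needs an INSTEAD OF trigger.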

  • Representing a graph using a database

    - by prosseek
    I need to represent graph information in a database. Let's say a is connected to b, c, and d:

      a -- b
      |_ c
      |_ d

    I can have a node table for a, b, c, and d, and I can also have a link table (FROM, TO) with rows (a,b), (a,c), (a,d). In another implementation there might be a way to store the link info as (a,b,c,d), but then the number of elements per row in the table is variable.

    Q1: Is there a way to represent a variable number of elements in a table?
    Q2: Is there any general way to represent a graph structure in a database?
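
    A hedged sketch of the usual answer to Q2, the adjacency-list (node + edge) form the question already gestures at; it also dissolves Q1, because variable fan-out just becomes a variable number of edge rows:

      -- sketch; generic ANSI-ish DDL, names are illustrative
      CREATE TABLE node (
        id   INTEGER PRIMARY KEY,
        name VARCHAR(32) NOT NULL
      );

      CREATE TABLE edge (
        from_id INTEGER NOT NULL REFERENCES node (id),
        to_id   INTEGER NOT NULL REFERENCES node (id),
        PRIMARY KEY (from_id, to_id)
      );

      -- neighbours of a:
      SELECT n.name
      FROM   edge e JOIN node n ON n.id = e.to_id
      WHERE  e.from_id = (SELECT id FROM node WHERE name = 'a');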

  • MySQL SELECT query optimization

    - by Saharsh Shah
    I have two tables, testa and testb.

      CREATE TABLE `testa` (
        `id`   INT(10) NOT NULL AUTO_INCREMENT,
        `name` VARCHAR(50) DEFAULT NULL,
        PRIMARY KEY (`id`)
      );

      CREATE TABLE `testb` (
        `id`   INT(10) NOT NULL AUTO_INCREMENT,
        `name` VARCHAR(50) DEFAULT NULL,
        `aid1` INT(10) DEFAULT NULL,
        `aid2` INT(10) DEFAULT NULL,
        `aid3` INT(10) DEFAULT NULL,
        PRIMARY KEY (`id`)
      );

    Currently I am running the query below to retrieve all rows where id in the testa table matches any of the columns aid1, aid2, aid3 in testb. The query retrieves the accurate result, but it takes at least 30 seconds to execute, which is too much. I have also tried to optimize my query using UNION but failed to do so.

      SELECT a.id, a.name, b.name, b.id
      FROM testb b
      INNER JOIN testa a
        ON b.aid1 = a.id OR b.aid2 = a.id OR b.aid3 = a.id;

    How do I optimize my query so its total execution time is within 2-3 seconds? Thanks in advance...
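
    A hedged sketch of the standard OR-to-UNION rewrite, together with the secondary indexes it relies on (untested against this data, but this is the usual shape of the fix: ORed join conditions tend to force full scans, while each UNION branch can use its own index):

      -- sketch: one index per lookup column
      CREATE INDEX idx_testb_aid1 ON testb (aid1);
      CREATE INDEX idx_testb_aid2 ON testb (aid2);
      CREATE INDEX idx_testb_aid3 ON testb (aid3);

      -- UNION (not UNION ALL) keeps the de-duplication the OR implied
      SELECT a.id, a.name, b.name, b.id FROM testb b JOIN testa a ON b.aid1 = a.id
      UNION
      SELECT a.id, a.name, b.name, b.id FROM testb b JOIN testa a ON b.aid2 = a.id
      UNION
      SELECT a.id, a.name, b.name, b.id FROM testb b JOIN testa a ON b.aid3 = a.id;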

  • Unable to save data in the database manually and get the latest auto-increment id, CakePHP

    - by shabby
    I have checked this question as well and this one as well. I am trying to implement the model described in this question. What I want to do is: in the add function of the message controller, create a record in the thread table (this table only has one field, which is the primary key with auto increment), then take its id and insert it in the message table along with the user id which I already have, and then save records in the message_read_state and thread_participant tables. This is what I am trying to do in the Thread model:

      function saveThreadAndGetId() {
          //$data = array('Thread' => array());
          $data = array('id' => ' ');
          //Debugger::dump(print_r($data));
          $this->save($data);
          debug('id: ' . $this->id);
          $threadId = $this->getInsertID();
          debug($threadId);
          $threadId = $this->getLastInsertId();
          debug($threadId);
          die();
          return $threadId;
      }

    The line $data = array('id' => ' '); adds a row in the thread table, but I am unable to retrieve the id. Is there any way I can get the id, or am I saving it wrongly? Initially I was doing the query thing in the message controller:

      $this->Thread->query('INSERT INTO threads VALUES();');

    but then I found out that the last-id function doesn't work on manual queries, so I reverted.

  • C++ STL Map vs Vector speed

    - by sub
    In the interpreter for my experimental programming language I have a symbol table. Each symbol consists of a name and a value (the value can be, e.g., of type string, int, function, etc.). At first I represented the table with a vector and iterated through the symbols, checking if the given symbol name fitted. Then I thought that using a map, in my case map<string,symbol>, would be better than iterating through the vector all the time, but: It's a bit hard to explain this part, but I'll try. If a variable is retrieved for the first time in a program in my language, its position in the symbol table of course has to be found (using a vector now). If I iterated through the vector every time the line gets executed (think of a loop), it would be terribly slow (as it currently is, nearly as slow as Microsoft's batch). So I could use a map to retrieve the variable: SymbolTable[myVar.Name]. But think of the following: if the variable, still using a vector, is found the first time, I can store its exact integer position in the vector along with it. That means: the next time it is needed, my interpreter knows that it has been "cached" and doesn't search the symbol table for it, but does something like SymbolTable.at(myVar.CachedPosition). Now my (rather hard?) question: Should I use a vector for the symbol table together with caching the position of the variable in the vector? Should I rather use a map? Why? How fast is the [] operator? Should I use something completely different?

  • NHibernate + Cannot insert the value NULL into...

    - by mybrokengnome
    I've got an MS SQL database with a table created with this code:

      CREATE TABLE [dbo].[portfoliomanager](
        [idPortfolioManager] [int] NOT NULL PRIMARY KEY IDENTITY,
        [name] [varchar](45) NULL
      )

    so that idPortfolioManager is my primary key and also auto-incrementing. Now in my Windows WPF application I'm using NHibernate to help with adding/updating/removing/etc. data from the database. Here is the class that should map to the portfoliomanager table:

      namespace PortfolioManager
      {
          [Class(Table = "portfoliomanager", NameType = typeof(PortfolioManagerClass))]
          public class PortfolioManagerClass
          {
              [Id(Name = "idPortfolioManager")]
              [Generator(1, Class = "identity")]
              public virtual int idPortfolioManager { get; set; }

              [NHibernate.Mapping.Attributes.Property(Name = "name")]
              public virtual string name { get; set; }

              public PortfolioManagerClass() { }
          }
      }

    and some short code to try and insert something:

      PortfolioManagerClass portfolio = new PortfolioManagerClass();
      portfolio.name = "Brad's Portfolios";

    The problem is, when I try running this, I get this error:

      System.Data.SqlClient.SqlException: Cannot insert the value NULL into column
      'idPortfolioManager', table 'PortfolioManagementSystem.dbo.portfoliomanager';
      column does not allow nulls. INSERT fails. The statement has been terminated...

    with an outer exception of:

      could not insert: [PortfolioManager.PortfolioManagerClass]
      [SQL: INSERT INTO portfoliomanager (name) VALUES (?); select SCOPE_IDENTITY()]

    I'm hoping this is the last error I'll have to solve with NHibernate just to get it to do something; it's been a long process. Just as a note, I've also tried setting Class="native" and unsaved-value="0" with the same error. Thanks!

    Edit: OK, removing the 1, from Generator actually allows the program to run (not sure why that was even in the samples I was looking at), but the row actually doesn't get added to the database. I logged in to the server and ran the SQL Server Profiler tool, and I never see the connection coming through or the SQL it's trying to run, but NHibernate isn't throwing an error anymore. Starting to think it would be easier to just write the SQL statements myself :(

  • SQL: Join Parent - Child tables

    - by pray4Mojo
    I'm building a simple review website application and need some help with a SQL query. There are 3 tables (Topics, Comments, Users). I need a SQL query to select the data from all 3 tables. The Topics table is the parent, and the Comments table contains the child records (anywhere from zero to 100 records per parent). The third table, Users, contains the user information for all users. Here are the fields of the 3 tables:

      Topics   (topicID, strTopic, userID)
      Comments (commentID, topicID, strComment, userID)
      Users    (userID, userName)

    I tried:

      SELECT *
      FROM Topics
      INNER JOIN Comments ON Topics.topicID = Comments.topicID
      INNER JOIN Users    ON Topics.userID  = Users.userID

    But this does not work correctly, because there are multiple topics and the user info is not joined to the Comments table. Any help would be appreciated.
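
    A hedged sketch of the usual shape of the fix: join Users twice under different aliases (once for the topic author, once for each comment author) and use LEFT JOINs so topics without comments survive:

      -- sketch built from the column names in the question
      SELECT t.topicID, t.strTopic,
             tu.userName AS topicAuthor,
             c.commentID, c.strComment,
             cu.userName AS commentAuthor
      FROM Topics t
      INNER JOIN Users    tu ON tu.userID  = t.userID
      LEFT  JOIN Comments c  ON c.topicID  = t.topicID
      LEFT  JOIN Users    cu ON cu.userID  = c.userID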

  • Can MySQL reasonably perform queries on billions of rows?

    - by haxney
    I am planning on storing scans from a mass spectrometer in a MySQL database and would like to know whether storing and analyzing this amount of data is remotely feasible. I know performance varies wildly depending on the environment, but I'm looking for the rough order of magnitude: will queries take 5 days or 5 milliseconds?

    Input format

    Each input file contains a single run of the spectrometer; each run comprises a set of scans, and each scan has an ordered array of datapoints. There is a bit of metadata, but the majority of the file is comprised of arrays of 32- or 64-bit ints or floats.

    Host system

      |----------------+-------------------------------|
      | OS             | Windows 2008 64-bit           |
      | MySQL version  | 5.5.24 (x86_64)               |
      | CPU            | 2x Xeon E5420 (8 cores total) |
      | RAM            | 8GB                           |
      | SSD filesystem | 500 GiB                       |
      | HDD RAID       | 12 TiB                        |
      |----------------+-------------------------------|

    There are some other services running on the server using negligible processor time.

    File statistics

      |------------------+--------------|
      | number of files  | ~16,000      |
      | total size       | 1.3 TiB      |
      | min size         | 0 bytes      |
      | max size         | 12 GiB       |
      | mean             | 800 MiB      |
      | median           | 500 MiB      |
      | total datapoints | ~200 billion |
      |------------------+--------------|

    The total number of datapoints is a very rough estimate.

    Proposed schema

    I'm planning on doing things "right" (i.e. normalizing the data like crazy) and so would have a runs table, a spectra table with a foreign key to runs, and a datapoints table with a foreign key to spectra.

    The 200 Billion datapoint question

    I am going to be analyzing across multiple spectra and possibly even multiple runs, resulting in queries which could touch millions of rows. Assuming I index everything properly (which is a topic for another question) and am not trying to shuffle hundreds of MiB across the network, is it remotely plausible for MySQL to handle this?

    UPDATE: additional info

    The scan data will be coming from files in the XML-based mzML format. The meat of this format is in the <binaryDataArrayList> elements where the data is stored. Each scan produces >= 2 <binaryDataArray> elements which, taken together, form a 2-dimensional (or more) array of the form [[123.456, 234.567, ...], ...]. These data are write-once, so update performance and transaction safety are not concerns.

    My naïve plan for a database schema is:

    runs table

      | column name | type        |
      |-------------+-------------|
      | id          | PRIMARY KEY |
      | start_time  | TIMESTAMP   |
      | name        | VARCHAR     |
      |-------------+-------------|

    spectra table

      | column name    | type        |
      |----------------+-------------|
      | id             | PRIMARY KEY |
      | name           | VARCHAR     |
      | index          | INT         |
      | spectrum_type  | INT         |
      | representation | INT         |
      | run_id         | FOREIGN KEY |
      |----------------+-------------|

    datapoints table

      | column name | type        |
      |-------------+-------------|
      | id          | PRIMARY KEY |
      | spectrum_id | FOREIGN KEY |
      | mz          | DOUBLE      |
      | num_counts  | DOUBLE      |
      | index       | INT         |
      |-------------+-------------|

    Is this reasonable?

  • SQL Server 2008 vs 2005 UDF XML performance problem

    - by user344495
    OK, we have a simple UDF that takes an XML integer list and returns a table:

      CREATE FUNCTION [dbo].[udfParseXmlListOfInt]
      (
        @ItemListXml XML (dbo.xsdListOfInteger)
      )
      RETURNS TABLE
      AS
      RETURN
      (
        --- parses the XML and returns it as an int table ---
        SELECT ListItems.ID.value('.','INT') AS KeyValue
        FROM @ItemListXml.nodes('//list/item') AS ListItems(ID)
      )

    In a stored procedure we populate a temp table using this UDF:

      INSERT INTO @JobTable (JobNumber, JobSchedID, JobBatID, StoreID, CustID,
                             CustDivID, BatchStartDate, BatchEndDate, UnavailableFrom)
      SELECT JOB.JobNumber, JOB.JobSchedID, ISNULL(JOB.JobBatID, 0),
             STO.StoreID, STO.CustID, ISNULL(STO.CustDivID, 0),
             AVL.StartDate, AVL.EndDate,
             ISNULL(AVL.StartDate, DATEADD(day, -8, GETDATE()))
      FROM dbo.udfParseXmlListOfInt(@JobNumberList) TMP
      INNER JOIN dbo.JobSchedule JOB ON (JOB.JobNumber = TMP.KeyValue)
      INNER JOIN dbo.Store STO ON (STO.StoreID = JOB.StoreID)
      INNER JOIN dbo.JobSchedEvent EVT ON (EVT.JobSchedID = JOB.JobSchedID AND EVT.IsPrimary = 1)
      LEFT OUTER JOIN dbo.Availability AVL ON (AVL.AvailTypID = 5 AND AVL.RowID = JOB.JobBatID)
      ORDER BY JOB.JobSchedID;

    For a simple list of 10 job numbers, in SQL 2005 this returns in less than 1 second; in 2008, run against the exact same data, it takes 7 minutes. This is on a much faster machine with more memory. Any ideas?
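
    Not an answer from the source, but a hedged sketch of two rewrites commonly suggested for exactly this kind of 2005-to-2008 XML-shredding regression: use an absolute path instead of the '//' descendant axis, and pull the value from text() so the engine does not assemble each node's full subtree:

      -- sketch; same result for well-formed <list><item>...</item></list> input
      SELECT ListItems.ID.value('(./text())[1]', 'INT') AS KeyValue
      FROM @ItemListXml.nodes('/list/item') AS ListItems(ID)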

  • How to write these two queries for a simple data warehouse, using ANSI SQL?

    - by morpheous
    I am writing a simple data warehouse that will allow me to query the table to observe periodic (say weekly) changes in data, as well as changes in the change of the data (e.g. the week-to-week change in the weekly sale amount). For the purposes of simplicity, I will present very simplified (almost trivialized) versions of the tables I am using here. The sales data table is a view and has the following structure:

      CREATE TABLE sales_data (
        sales_time date   NOT NULL,
        sales_amt  double NOT NULL
      )

    For the purpose of this question I have left out other fields you would expect to see, like product_id, sales_person_id, etc., as they have no direct relevance to this question. AFAICT, the only fields that will be used in the query are the sales_time and the sales_amt fields (unless I am mistaken). I also have a date dimension table with the following structure:

      CREATE TABLE date_dimension (
        id         integer NOT NULL,
        datestamp  date    NOT NULL,
        day_part   integer NOT NULL,
        week_part  integer NOT NULL,
        month_part integer NOT NULL,
        qtr_part   integer NOT NULL,
        year_part  integer NOT NULL
      );

    which partitions dates into reporting ranges. I need to write queries that will allow me to do the following:

    1. Return the change in week-on-week sales_amt for a specified period. For example, the change between sales today and sales N days ago, where N is a positive integer (N == 7 in this case).
    2. Return the change in the change of sales_amt for a specified period. In (1) we calculated the week-on-week change. Now we want to know how that change differs from the (week-on-week) change calculated last week.

    I am stuck at this point, as SQL is my weakest skill. I would be grateful if a SQL master could explain how I can write these queries in a DB-agnostic way (i.e. using ANSI SQL).
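
    A hedged sketch of the shape such queries usually take in ANSI SQL, assuming one aggregated row per day; the date-offset syntax is the least portable part, so treat it as an assumption:

      -- (1) week-on-week change: join the daily aggregate to itself, 7 days back
      SELECT cur.sales_time,
             cur.total_amt - prev.total_amt AS wow_change
      FROM  (SELECT sales_time, SUM(sales_amt) AS total_amt
             FROM sales_data GROUP BY sales_time) cur
      JOIN  (SELECT sales_time, SUM(sales_amt) AS total_amt
             FROM sales_data GROUP BY sales_time) prev
        ON prev.sales_time = cur.sales_time - INTERVAL '7' DAY

      -- (2) change in the change: apply the same shift-and-subtract pattern to
      -- the result of (1), e.g. by wrapping it in a derived table (or, where
      -- supported, a LAG(...) OVER (ORDER BY sales_time) window function).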

  • Best data structure for this relationship...

    - by Travis
    I have a question about database 'style'. I need a method of storing user accounts. Some users "own" other user accounts (sub-accounts), but not all accounts are owned. Is it best to represent this using a table structure like so...

      TABLE accounts (
        ID
        ownerID -> ID
        name
      )

    ...even though there will be some NULL values in the ownerID column for accounts that do not have an owner? Or would it be stylistically preferable to have two tables, like so:

      TABLE accounts (
        ID
        name
      )

      TABLE ownedAccounts (
        accountID -> accounts(ID)
        ownerID   -> accounts(ID)
      )

    Thanks for the advice.
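
    For reference, a hedged sketch of the first option as concrete DDL; the self-referencing foreign key is nullable precisely so that "not owned" is representable:

      -- sketch; types are illustrative
      CREATE TABLE accounts (
        ID      INTEGER PRIMARY KEY,
        ownerID INTEGER NULL REFERENCES accounts (ID),
        name    VARCHAR(64) NOT NULL
      );

      -- top-level (un-owned) accounts:
      SELECT ID, name FROM accounts WHERE ownerID IS NULL;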

  • Custom UITableViewCell from xib isn't displaying properly

    - by Kenny Wyland
    I've created custom UITableViewCells a bunch of times and I've never run into this problem, so I'm hoping you can help me find the thing I've missed or messed up. When I run my app, the cells in my table view appear to be standard cells with the Default style. I have SettingsTableCell, which is a subclass of UITableViewCell, and a SettingsTableCell.xib which contains a UITableViewCell, and inside that a couple of labels and a textfield. I've set the class in the xib to SettingsTableCell and the File's Owner of the xib to my table controller. My SettingsTableController has an IBOutlet property named tableCell. My cellForRowAtIndexPath contains the following code to load my table cell xib and assign it to my table controller's tableCell property:

      static NSString *CellIdentifier = @"CellSettings";
      SettingsTableCell *cell = (SettingsTableCell *)[tableView dequeueReusableCellWithIdentifier:CellIdentifier];
      if (cell == nil) {
          [[NSBundle mainBundle] loadNibNamed:@"SettingsTableCell" owner:self options:nil];
          cell = self.tableCell;
          self.tableCell = nil;
          NSLog(@"cell=%@", cell);
      }

    This is what my xib setup looks like in IB. When I run my app, the table displays as if all of the cells are standard Default-style cells. The seriously weird part is: if I tap on the area of the cell where the textfield should be, the keyboard does come up! The textfield isn't visible, there's no cursor or anything like that, but it does respond. The visible UILabel is obviously not the UILabel from my xib, though, because the label in my xib is right-justified and the one showing in the app is left-justified. I'm incredibly confused about how this is happening. Any help is appreciated.

  • SQL Server: Can you help me with this query?

    - by rlb.usa
    I want to run a diagnostic report on our MS SQL 2008 database server. I am looping through all of the databases, and then for each database I want to look at each table. But when I go to look at each table (with tbl_cursor), it always picks up the tables in the database 'master'. I think it's because of my tbl_cursor selection:

      SELECT table_name FROM information_schema.tables WHERE table_type = 'base table'

    How do I fix this? Here's the entire code:

      SET NOCOUNT ON
      DECLARE @table_count INT
      DECLARE @db_cursor VARCHAR(100)
      DECLARE database_cursor CURSOR FOR
        SELECT name FROM sys.databases WHERE name <> N'master'
      OPEN database_cursor
      FETCH NEXT FROM database_cursor INTO @db_cursor
      WHILE @@Fetch_status = 0
      BEGIN
        PRINT @db_cursor
        SET @table_count = 0
        DECLARE @table_cursor VARCHAR(100)
        DECLARE tbl_cursor CURSOR FOR
          SELECT table_name FROM information_schema.tables WHERE table_type = 'base table'
        OPEN tbl_cursor
        FETCH NEXT FROM tbl_cursor INTO @table_cursor
        WHILE @@Fetch_status = 0
        BEGIN
          DECLARE @table_cmd NVARCHAR(255)
          SET @table_cmd = N'IF NOT EXISTS( SELECT TOP(1) * FROM ' + @table_cursor
                         + N') PRINT N'' Table ''''' + @table_cursor + ''''' is empty'' '
          --PRINT @table_cmd --debug
          EXEC sp_executesql @table_cmd
          SET @table_count = @table_count + 1
          FETCH NEXT FROM tbl_cursor INTO @table_cursor
        END
        CLOSE tbl_cursor
        DEALLOCATE tbl_cursor
        PRINT @db_cursor + N' Total Tables : ' + CAST(@table_count AS varchar(2))
        PRINT N'' -- print another blank line
        SET @table_count = 0
        FETCH NEXT FROM database_cursor INTO @db_cursor
      END
      CLOSE database_cursor
      DEALLOCATE database_cursor
      SET NOCOUNT OFF
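
    Not part of the original question, but a hedged sketch of the usual diagnosis and fix: information_schema.tables always resolves against the current database, so the inner table list has to be qualified with the database the outer cursor just fetched, via dynamic SQL:

      -- sketch: build the inner table list per database (names quoted defensively)
      DECLARE @tbl_sql NVARCHAR(400)
      SET @tbl_sql = N'SELECT table_name FROM ' + QUOTENAME(@db_cursor)
                   + N'.information_schema.tables WHERE table_type = ''BASE TABLE'''
      -- a cursor cannot be declared over dynamic SQL directly; one pattern is
      -- INSERT ... EXEC sp_executesql @tbl_sql into a temp table, then open
      -- tbl_cursor over that temp table instead.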

  • Problems inserting file data into a SQLite database using Python

    - by tylerc230
    I'm trying to open an image file in Python and add that data to a SQLite table. I created the table using:

      CREATE TABLE "images" ("id" INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
                             "description" VARCHAR, "image" BLOB);

    I am trying to add the image to the db using:

      imageFile = open(imageName, 'rb')
      b = sqlite3.Binary(imageFile.read())
      targetCursor.execute("INSERT INTO images (image) values(?)", (b,))
      targetCursor.execute("SELECT id from images")
      for id in targetCursor:
          imageid = id[0]
      targetCursor.execute("INSERT INTO %s (questionID,imageID) values(?,?)" % table, (questionId, imageid))

    When I print the value of 'b' it looks like binary data, but when I call 'select image from images where id = 1' I get '????' printed to the console. Anyone know what I'm doing wrong?

  • Doctrine 1.2 Column Naming Conventions for Many To Many Relationships

    - by Alan Storm
    I'm working with an existing database schema and trying to set up two Doctrine models with a many-to-many relationship, as described in this document. When creating tables from scratch, I have no trouble getting this working. However, the existing join tables use a different naming convention than what's described in the Doctrine document. Specifically:

      Table 1
      --------------------------------------------------
      table_1_id
      ....other columns....

      Table 2
      --------------------------------------------------
      table_2_id
      ....other columns....

      Join Table
      --------------------------------------------------
      fktable1_id
      fktable_2_id

    Basically, the previous developers prefaced all foreign keys with "fk". From the examples I've seen, and from some brief experimenting with code, it appears that Doctrine 1.2 requires that the join table use the same column names as the tables it's joining. Is my assumption correct? If so, has the situation changed in Doctrine 2? If the answers to either of the above are true, how do you configure the models so that all the columns "line up"?

  • Writing an auto-memoizer in Scheme. Help with macro and a wrapper.

    - by kunjaan
    I am facing a couple of problems while writing an auto-memoizer in Scheme. I have a working memoizer function, which creates a hash table and checks if the value is already computed. If it has been computed before, it returns the value; else it calls the function.

      (define (memoizer fun)
        (let ((a-table (make-hash)))
          (lambda (n)
            (define false-if-fail (lambda () #f))
            (let ((return-val (hash-ref a-table n false-if-fail)))
              (if return-val
                  return-val
                  (begin
                    (hash-set! a-table n (fun n))
                    (hash-ref a-table n)))))))

    Now I want to create a memoize-wrapper function like this:

      (define (memoize-wrapper function)
        (set! function (memoizer function)))

    and hopefully create a macro called def-memo which defines the function with the memoize-wrapper, e.g. the macro could expand to (memoizer (define function-name arguments body ...)) or something like that, so that I should be able to do:

      (def-memo (factorial n)
        (cond ((= n 1) 1)
              (else (* n (factorial (- n 1))))))

    which should create a memoized version of factorial instead of the normal slow one. My problem is that the memoize-wrapper is not working properly: it doesn't call the memoized function but the original function. I also have no idea how to write a define inside of the macro. How do I make sure that I can handle variable length arguments and a variable length body? How do I then define the function and wrap it in the memoizer? Thanks a lot.
