Search Results

Search found 34758 results on 1391 pages for 'linear linked list invert'.


  • Linked server: SQL Server to Access

    - by user22121
    Hi, I have a SQL Server 2000 instance and an Access database (.mdb) connected via a linked server. On the other side I have a C# program that updates data in a SQL table (Users) based on the Access database. When I run the program it returns the following error: OLE DB provider 'Microsoft.Jet.OLEDB.4.0' reported an error. Authentication failed. [OLE/DB provider returned message: Cannot start the application. The workgroup information file is missing or opened exclusively by another user.] OLE DB error trace [OLE/DB Provider 'Microsoft.Jet.OLEDB.4.0' IDBInitialize::Initialize returned 0x80040E4D: Authentication failed.]. The program, the SQL Server instance, and the Access database are all on a remote server. On the local server the problem was solved by running: "sp_addlinkedsrvlogin 'ActSC', 'false', NULL, 'admin', NULL". On the remote server I tried the following, without result: "sp_addlinkedsrvlogin 'ActSC', true, NULL, 'user', 'pass'". On the remote server, UPDATE statements run from Query Analyzer work correctly. Can you think of what the problem may be? Thanks!
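
    A minimal T-SQL sketch of the kind of login mapping being attempted, assuming the linked server is named 'ActSC' as in the question and the .mdb uses the default Jet security (user 'Admin', blank password) rather than a workgroup file:

        -- Remove any existing mapping, then map every local login to the
        -- default Jet account. Adjust @rmtuser/@rmtpassword if the database
        -- is secured by a workgroup (.mdw) file.
        EXEC sp_droplinkedsrvlogin @rmtsrvname = N'ActSC', @locallogin = NULL;
        EXEC sp_addlinkedsrvlogin
             @rmtsrvname  = N'ActSC',
             @useself     = 'false',
             @locallogin  = NULL,      -- applies to all local logins
             @rmtuser     = N'Admin',  -- default Jet user
             @rmtpassword = NULL;

    The SQL Server service account also needs file-system read/write access to the .mdb and its folder (Jet creates an .ldb lock file there), which is often the cause of this particular error when the call comes from a service rather than from Query Analyzer.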

    Read the article

  • Authenticating Linked Servers - SQL Server 8 to SQL Server 10

    - by jp2code
    We have an old SQL Server 2000 database that has to be kept because it is needed on our manufacturing machines. It also maintains our employee records, since they are needed on these machines for employee logins. We also have a newer SQL Server 10 database (I think this is 2008, but I'm not sure) that we are using for newer development. I have recently learned (i.e. today) that I can link the two servers, which would allow me to access the employee tables from the newer server. Following the SF post "SQL Server to SQL Server Linked Server Setup", I tried adding the link. On our SQL Server 2000 machine I got an error (screenshot not shown here), and on our SQL Server 10 machine I got a similar one. The messages, though worded differently, probably say the same thing: I need to authenticate, somehow. We have an Active Directory, but it is on yet another server. What, exactly, should be done here? One answer I found said to check the Security settings, but did not say what else to do. Both servers are set to SQL Server and Windows Authentication mode. Now what?
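
    A hedged T-SQL sketch of mapping the link to an explicit SQL login instead of relying on pass-through Windows authentication (the server and login names below are placeholders, not from the question); run it on the server that owns the linked-server definition:

        -- Create the link (skip if it already exists), then map all local
        -- logins to a SQL login that exists on the remote instance.
        EXEC sp_addlinkedserver  @server = N'REMOTESQL', @srvproduct = N'SQL Server';
        EXEC sp_addlinkedsrvlogin
             @rmtsrvname  = N'REMOTESQL',
             @useself     = 'false',
             @locallogin  = NULL,              -- every local login uses this mapping
             @rmtuser     = N'linked_reader',  -- SQL login defined on the remote box
             @rmtpassword = N'********';

    Because both instances allow SQL Server authentication, an explicit SQL login sidesteps the Kerberos/NTLM double-hop issues that pass-through Windows credentials can run into when Active Directory lives on a third machine.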

    Read the article

  • dpkg: warning: files list file for package 'x' missing

    - by Mark
    I get this warning for several packages every time I install any package or perform apt-get upgrade. Not sure what is causing it; it's a fresh Debian install on my OpenVZ server and I haven't changed any dpkg settings. Here's an example:

        root@debian:~# apt-get install cowsay
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Suggested packages:
          filters
        The following NEW packages will be installed:
          cowsay
        0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
        Need to get 21.9 kB of archives.
        After this operation, 91.1 kB of additional disk space will be used.
        Get:1 http://ftp.us.debian.org/debian/ unstable/main cowsay all 3.03+dfsg1-4 [21.9 kB]
        Fetched 21.9 kB in 0s (70.2 kB/s)
        Selecting previously unselected package cowsay.
        dpkg: warning: files list file for package 'libssh2-1:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libkrb5-3:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libwrap0:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libcap2:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libpam-ck-connector:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libc6:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libtalloc2:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libselinux1:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libp11-kit0:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libavahi-client3:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libbz2-1.0:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libpcre3:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libgpm2:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libgnutls26:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libavahi-common3:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libcroco3:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'liblzma5:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libpaper1:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libsensors4:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libbsd0:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libavahi-common-data:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libss2:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libblkid1:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libslang2:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libacl1:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libcomerr2:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libkrb5support0:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'e2fslibs:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'librtmp0:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libidn11:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libpcap0.8:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libattr1:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libdevmapper1.02.1:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'odbcinst1debian2:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libexpat1:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libltdl7:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libkeyutils1:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libcups2:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libsqlite3-0:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libck-connector0:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'zlib1g:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libnl1:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libfontconfig1:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libudev0:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libsepol1:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libmagic1:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libk5crypto3:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libunistring0:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libgpg-error0:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libusb-0.1-4:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libpam0g:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libpopt0:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libgssapi-krb5-2:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libgeoip1:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libcurl3-gnutls:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libtasn1-3:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libuuid1:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libgcrypt11:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libgdbm3:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libdbus-1-3:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libsysfs2:amd64' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'libfreetype6:amd64' missing; assuming package has no files currently installed
        (Reading database ... 21908 files and directories currently installed.)
        Unpacking cowsay (from .../cowsay_3.03+dfsg1-4_all.deb) ...
        Processing triggers for man-db ...
        Setting up cowsay (3.03+dfsg1-4) ...
        root@debian:~#

    Everything works fine, but these warning messages are pretty annoying. Does anyone know how I can fix this?
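
    A hedged sketch of one way to make the warnings go away, assuming the packages themselves are intact and only their /var/lib/dpkg/info/<package>.list files are missing (a common state in pre-built OpenVZ templates): reinstalling each affected package rewrites its files list.

        #!/bin/sh
        # For every installed package whose dpkg files list is missing,
        # reinstall it so dpkg regenerates /var/lib/dpkg/info/<pkg>.list.
        for pkg in $(dpkg -l | awk '/^ii/ {print $2}'); do
            if [ ! -e "/var/lib/dpkg/info/${pkg}.list" ] && \
               [ ! -e "/var/lib/dpkg/info/${pkg%%:*}.list" ]; then
                echo "reinstalling ${pkg}"
                apt-get install --reinstall -y "${pkg}"
            fi
        done

    The two tests cover both the multiarch-qualified name (e.g. libc6:amd64) and the bare name, since dpkg may store the list file under either form depending on its version.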

    Read the article

  • Linked list example using threads

    - by Carl_1789
    I have read the following code, which uses a CRITICAL_SECTION to grow a linked list from multiple threads. What would the main() part look like if it used two threads to add to the linked list?

        #include <windows.h>
        #include <stdlib.h>   /* for malloc */

        typedef struct _Node {
            struct _Node *next;
            int data;
        } Node;

        typedef struct _List {
            Node *head;
            CRITICAL_SECTION critical_sec;
        } List;

        List *CreateList() {
            List *pList = (List*)malloc(sizeof(List));   /* sizeof(List), not sizeof(pList) */
            pList->head = NULL;
            InitializeCriticalSection(&pList->critical_sec);
            return pList;
        }

        void AddHead(List *pList, Node *node) {
            EnterCriticalSection(&pList->critical_sec);
            node->next = pList->head;
            pList->head = node;
            LeaveCriticalSection(&pList->critical_sec);
        }

        void Insert(List *pList, Node *afterNode, Node *newNode) {
            EnterCriticalSection(&pList->critical_sec);
            if (afterNode == NULL) {
                AddHead(pList, newNode);
            } else {
                newNode->next = afterNode->next;
                afterNode->next = newNode;
            }
            LeaveCriticalSection(&pList->critical_sec);
        }

        Node *Next(List *pList, Node *node) {
            Node* next;
            EnterCriticalSection(&pList->critical_sec);
            next = node->next;
            LeaveCriticalSection(&pList->critical_sec);
            return next;
        }
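
    A hedged sketch of such a main(), assuming the definitions above: two threads run the same routine and both call AddHead, which the critical section serializes.

        /* Hypothetical driver for the code above (thread count and data values
         * are arbitrary). */
        DWORD WINAPI Producer(LPVOID param)
        {
            List *pList = (List *)param;
            int i;
            for (i = 0; i < 10; i++) {
                Node *node = (Node *)malloc(sizeof(Node));
                node->data = i;
                AddHead(pList, node);   /* serialized by the critical section */
            }
            return 0;
        }

        int main(void)
        {
            List *pList = CreateList();
            HANDLE threads[2];

            threads[0] = CreateThread(NULL, 0, Producer, pList, 0, NULL);
            threads[1] = CreateThread(NULL, 0, Producer, pList, 0, NULL);

            WaitForMultipleObjects(2, threads, TRUE, INFINITE);
            CloseHandle(threads[0]);
            CloseHandle(threads[1]);
            return 0;
        }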

    Read the article

  • Sequential searching with sorted linked lists

    - by John Graveston
        struct Record_node* Sequential_search(struct Record_node *List, int target)
        {
            struct Record_node *cur;
            cur = List->head;

            if (cur == NULL || cur->key >= target) {
                return NULL;
            }
            while (cur->next != NULL) {
                if (cur->next->key >= target) {
                    return cur;
                }
                cur = cur->next;
            }
            return cur;
        }

    I cannot interpret this pseudocode. Can anybody explain to me how this program works and flows? Given that it searches a linked list whose values are in ascending order, what would this program return?

    a. The largest value in the list that is smaller than target
    b. The largest value in the list that is smaller than or equal to target
    c. The smallest value in the list that is larger than or equal to target
    d. Target
    e. The smallest value in the list that is larger than target

    And say that List is [1, 2, 4, 5, 9, 20, 20, 24, 44, 69, 70, 71, 74, 77, 92] and target is 15: how many comparisons occur? (Here, a comparison means comparing against the value of target.)

    Read the article

  • Why is a linked list implementation considered linear?

    - by VeeKay
    My apologies for asking such a simple question. Instead of posting such a basic question on SO, I felt this was the more apt place for it. I tried finding an answer, but none of the ones I found were logically convincing to me. Typically, computer memory is always linear. So is the term "non-linear" used for a data structure only in a logical sense? If so, to logically achieve non-linearity in linear computer memory, we use pointers, right? In that case, if pointers are the means of achieving non-linearity, why would a data structure like a linked list be considered linear when in reality its nodes are never guaranteed to be physically adjacent?

    Read the article

  • Efficient Multiple Linear Regression in C# / .Net

    - by mrnye
    Does anyone know of an efficient way to do multiple linear regression in C#, where the number of simultaneous equations may be in the 1000s (with 3 or 4 different inputs)? After reading this article on multiple linear regression I tried implementing it with a matrix equation:

        Matrix y = new Matrix(
            new double[,]{{745}, {895}, {442}, {440}, {1598}});

        Matrix x = new Matrix(
            new double[,]{{1, 36, 66}, {1, 37, 68}, {1, 47, 64},
                          {1, 32, 53}, {1, 1, 101}});

        Matrix b = (x.Transpose() * x).Inverse() * x.Transpose() * y;
        for (int i = 0; i < b.Rows; i++)
        {
            Trace.WriteLine("INFO: " + b[i, 0].ToDouble());
        }

    However it does not scale well to thousands of equations due to the matrix inversion operation. I can call the R language and use that, but I was hoping there would be a pure .NET solution that scales to these large sets. Any suggestions?

    EDIT #1: I have settled on using R for the time being. Using statconn (downloaded here) I have found it both fast and relatively easy to use. Here is a small code snippet; it really isn't much code at all to use the R statconn library (note: this is not all the code!).

        _StatConn.EvaluateNoReturn(string.Format("output <- lm({0})", equation));
        object intercept = _StatConn.Evaluate("coefficients(output)['(Intercept)']");
        parameters[0] = (double)intercept;
        for (int i = 0; i < xColCount; i++)
        {
            object parameter = _StatConn.Evaluate(string.Format("coefficients(output)['x{0}']", i));
            parameters[i + 1] = (double)parameter;
        }
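
    With only 3 or 4 predictors, X'X is tiny (4x4 or 5x5), so the cost is dominated by building and multiplying the n-by-p matrices rather than by the inversion itself. A hedged, dependency-free sketch of that idea (not the poster's Matrix class; all names here are invented): accumulate X'X and X'y in a single pass over the rows, then solve the small system with Gaussian elimination.

        using System;

        static class NormalEquations
        {
            // rows: one observation per double[], including the leading 1 for the intercept.
            public static double[] Fit(double[][] rows, double[] y)
            {
                int p = rows[0].Length;
                var xtx = new double[p, p];
                var xty = new double[p];

                for (int r = 0; r < rows.Length; r++)        // O(n * p^2) accumulation
                {
                    for (int i = 0; i < p; i++)
                    {
                        xty[i] += rows[r][i] * y[r];
                        for (int j = 0; j < p; j++)
                            xtx[i, j] += rows[r][i] * rows[r][j];
                    }
                }
                return Solve(xtx, xty);                      // p-by-p system, negligible cost
            }

            // Gaussian elimination with partial pivoting; destructive on its arguments.
            static double[] Solve(double[,] a, double[] b)
            {
                int n = b.Length;
                for (int col = 0; col < n; col++)
                {
                    int pivot = col;
                    for (int r = col + 1; r < n; r++)
                        if (Math.Abs(a[r, col]) > Math.Abs(a[pivot, col])) pivot = r;
                    for (int c = 0; c < n; c++)
                    {
                        double t = a[col, c]; a[col, c] = a[pivot, c]; a[pivot, c] = t;
                    }
                    double tb = b[col]; b[col] = b[pivot]; b[pivot] = tb;

                    for (int r = col + 1; r < n; r++)
                    {
                        double f = a[r, col] / a[col, col];
                        for (int c = col; c < n; c++) a[r, c] -= f * a[col, c];
                        b[r] -= f * b[col];
                    }
                }
                var x = new double[n];
                for (int r = n - 1; r >= 0; r--)
                {
                    double s = b[r];
                    for (int c = r + 1; c < n; c++) s -= a[r, c] * x[c];
                    x[r] = s / a[r, r];
                }
                return x;
            }
        }

    For a few thousand rows this is effectively instantaneous; numerically, a QR or Cholesky factorization would be the more careful choice, but the structure is the same.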

    Read the article

  • Haskell Linear Algebra Matrix Library for Arbitrary Element Types

    - by Johannes Weiß
    I'm looking for a Haskell linear algebra library that has the following features:

    - Matrix multiplication
    - Matrix addition
    - Matrix transposition
    - Rank calculation
    - Matrix inversion (a plus)

    and the following properties:

    - Arbitrary element (scalar) types, in particular element types that are not Storable instances. My elements are an instance of Num, and additionally the multiplicative inverse can be calculated; mathematically they form a finite field (GF(2^256)). That should be enough to implement the features mentioned above.
    - Arbitrary matrix sizes. I'll probably need something like 100x100, but the matrix sizes will depend on the user's input, so it should not be limited by anything but the available memory or computational power.
    - As fast as possible, though I'm aware that a library for arbitrary elements will probably not perform like a C/Fortran library that does the work (interfaced via FFI), because of the indirection of arbitrary (non-Int, non-Double) types: at least one pointer gets dereferenced whenever an element is touched.
    - Written in Haskell. This is not a hard requirement for me, but since my elements are not Storable instances the library has to be written in Haskell anyway.

    I already tried very hard and evaluated everything that looked promising (most of the libraries on Hackage directly state that they won't work for me). In particular I wrote test code using:

    - hmatrix, which assumes Storable elements
    - Vec, whose documentation states: "Low Dimension: Although the dimensionality is limited only by what GHC will handle, the library is meant for 2, 3 and 4 dimensions. For general linear algebra, check out the excellent hmatrix library and blas bindings"

    I looked into the code and the documentation of many more libraries, but nothing seems to suit my needs :-(.

    Update: Since there seems to be nothing, I started a project on GitHub which aims to develop such a library. The current state is very minimalistic, not optimized for speed at all, and only the most basic functions have tests and therefore should work. But should you be interested in using it or helping out with development: contact me (you'll find my mail address on my web site) or send pull requests.
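
    For the record, the core operations really do only need Num (plus division for elimination-based rank and inversion); a minimal, naive sketch over plain lists, with no Storable constraint anywhere (the type and function names are made up, and this is not tuned for speed):

        import Data.List (transpose)

        newtype Matrix a = Matrix { rows :: [[a]] }
            deriving Show

        addM :: Num a => Matrix a -> Matrix a -> Matrix a
        addM (Matrix x) (Matrix y) = Matrix (zipWith (zipWith (+)) x y)

        mulM :: Num a => Matrix a -> Matrix a -> Matrix a
        mulM (Matrix x) (Matrix y) =
            Matrix [ [ sum (zipWith (*) row col) | col <- transpose y ] | row <- x ]

        transposeM :: Matrix a -> Matrix a
        transposeM (Matrix x) = Matrix (transpose x)

    Rank and inversion would additionally need multiplicative inverses, which the poster's field elements provide, so Gaussian elimination over the field is enough for both.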

    Read the article

  • How to compute the average probe length for success and failure - linear probing (hash tables)

    - by fang_dejavu
    Hi everyone, I'm doing an assignment for my data structures class. We were asked to study linear probing with load factors of .1, .2, .3, ..., .9. The formula given for testing is: the average probe length using linear probing is roughly (1 + 1/(1-L))/2 for a successful search and (1 + 1/(1-L)^2)/2 for an unsuccessful one. We are required to find the theoretical values using the formula above, which I did (just plug the load factor into the formula); then we have to calculate the empirical values, which I'm not quite sure how to do. Here is the rest of the requirements:

    For each load factor, 10,000 randomly generated positive ints between 1 and 50000 (inclusive) will be inserted into a table of the "right" size, where "right" is strictly based upon the load factor you are testing. Repeats are allowed. Be sure that your formula for randomly generated ints is correct. There is a class called Random in java.util. USE it! After a table of the right (based upon L) size is loaded with 10,000 ints, do 100 searches of newly generated random ints from the range of 1 to 50000. Compute the average probe length for each of the two formulas and indicate the denominators used in each calculation. So, for example, each test for a .5 load would have a table of size approximately 20,000 (adjusted to be prime), and similarly each test for a .9 load would have a table of approximate size 10,000/.9 (again adjusted to be prime). The program should run displaying the various load factors tested, the average probe length for each search (the two denominators used to compute the averages will add to 100), and the theoretical answers using the formula above.

    How do I calculate the empirical success?
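
    A hedged sketch of the empirical side (class and variable names are invented, and the hash function here is plain key % table size since the assignment text above doesn't fix one): fill the table with linear probing, then count probes over 100 searches, keeping hits and misses separate so the two denominators add to 100.

        import java.util.Random;

        public class LinearProbeExperiment {
            public static void main(String[] args) {
                Random rnd = new Random();
                double L = 0.5;                                   // load factor under test
                int n = 10000;
                int m = nextPrime((int) Math.ceil(n / L));        // "right" size, adjusted to prime
                int[] table = new int[m];                         // 0 means empty; keys are 1..50000

                for (int i = 0; i < n; i++) {                     // load phase
                    int key = 1 + rnd.nextInt(50000);
                    int slot = key % m;
                    while (table[slot] != 0) slot = (slot + 1) % m;   // linear probing
                    table[slot] = key;
                }

                long hitProbes = 0, missProbes = 0;
                int hits = 0, misses = 0;
                for (int i = 0; i < 100; i++) {                   // search phase
                    int key = 1 + rnd.nextInt(50000);
                    int slot = key % m, probes = 1;
                    while (table[slot] != 0 && table[slot] != key) {
                        slot = (slot + 1) % m;
                        probes++;
                    }
                    if (table[slot] == key) { hits++; hitProbes += probes; }
                    else { misses++; missProbes += probes; }
                }

                System.out.printf("L=%.1f  avg success=%.2f (over %d)  avg failure=%.2f (over %d)%n",
                        L,
                        hits == 0 ? 0.0 : (double) hitProbes / hits, hits,
                        misses == 0 ? 0.0 : (double) missProbes / misses, misses);
            }

            private static int nextPrime(int k) {
                while (true) {
                    boolean prime = k > 1;
                    for (int d = 2; (long) d * d <= k; d++)
                        if (k % d == 0) { prime = false; break; }
                    if (prime) return k;
                    k++;
                }
            }
        }

    The theoretical values from the formulas above can then be printed alongside these averages for each load factor.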

    Read the article

  • Python list comprehension to return edge values of a list

    - by mvid
    If I have a list in Python such as: stuff = [1, 2, 3, 4, 5, 6, 7, 8, 9] with length n (in this case 9), I am interested in creating lists of length n/2 (in this case 4). I want every contiguous run of n/2 values from the original list, wrapping around the end, for example: [1, 2, 3, 4], [2, 3, 4, 5], ..., [9, 1, 2, 3]. Is there some list comprehension code I could use to iterate through the list and retrieve all of those sublists? I don't care about the order of the values within the lists; I am just trying to find a clever method of generating them.
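
    One possible comprehension (a sketch; the variable names are arbitrary), indexing modulo the list length so the windows wrap around:

        stuff = [1, 2, 3, 4, 5, 6, 7, 8, 9]
        n = len(stuff)
        k = n // 2

        windows = [[stuff[(i + j) % n] for j in range(k)] for i in range(n)]
        # [[1, 2, 3, 4], [2, 3, 4, 5], ..., [9, 1, 2, 3]]

    This produces n windows of length n // 2, one starting at each position of the original list.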

    Read the article

  • IE6 list issue - First list on page ignores horizontal margins

    - by user307922
    Hi folks, I am creating a store in Magento and have a weird issue with IE6 and the ordered lists on my page. For some reason, IE6 ignores the horizontal margin on my first list: not the first element in the list, but the whole list. I have multiple lists on the page. Here is a link to the offending page: http://byerofma.nexcess.net/products/pangean-furniture.html I have tried everything I can think of. Any ideas? Cheers, Chuck

    Read the article

  • filtering elements from list of lists in Python?

    - by user248237
    I want to filter elements from a list of lists, and iterate over the elements of each element using a lambda. For example, given the list: a = [[1,2,3],[4,5,6]] suppose that I want to keep only the elements where the sum of the list is greater than N. I tried writing: filter(lambda x, y, z: x + y + z >= N, a) but I get the error: <lambda>() takes exactly 3 arguments (1 given) How can I iterate while assigning the values of each element to x, y, and z? Something like zip, but for arbitrarily long lists. Thanks. P.S. I know I can write this using: filter(lambda x: sum(x)..., a) but that's not the point; imagine that these were not numbers but arbitrary elements and I wanted to assign their values to variable names.
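
    A hedged sketch of one way to get named variables per element: keep filter's one-argument calling convention, but star-unpack each sub-list into an inner lambda's parameters (N and a as in the question):

        N = 10
        a = [[1, 2, 3], [4, 5, 6]]

        kept = filter(lambda row: (lambda x, y, z: x + y + z >= N)(*row), a)
        print(list(kept))   # [[4, 5, 6]]

    The outer lambda still receives one element at a time; the *row unpacking is what binds its values to x, y and z. (In Python 2 only, filter(lambda (x, y, z): x + y + z >= N, a) also works, since tuple parameter unpacking was removed in Python 3.)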

    Read the article

  • [C++] STL list - how to find a list element by its object fields

    - by Dominic Bou-Samra
    I have a list: list<Unit *> UnitCollection; containing Unit objects, which has an accessor like:

        bool Unit::isUnit(string uCode)
        {
            return this->unitCode == uCode;
        }

    How do I search my UnitCollection list by uCode and return the corresponding element (preferably its index)? I have looked at the find() method, but I'm not sure you can pass a boolean method in instead of a searched-item parameter, if that makes sense.
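
    A hedged sketch using std::find_if with a predicate (assuming the Unit and UnitCollection declarations from the question; the lambda needs C++11, and a hand-written functor works the same way on older compilers). Note that std::list has no random access, so the natural result is an iterator; a position can still be computed with std::distance.

        #include <algorithm>
        #include <iterator>
        #include <list>
        #include <string>

        std::list<Unit*>::iterator findByCode(std::list<Unit*>& units, const std::string& uCode)
        {
            return std::find_if(units.begin(), units.end(),
                                [&uCode](Unit* u) { return u->isUnit(uCode); });
        }

        // usage:
        //   std::list<Unit*>::iterator it = findByCode(UnitCollection, "COMP101");
        //   if (it != UnitCollection.end()) {
        //       std::ptrdiff_t index = std::distance(UnitCollection.begin(), it);
        //   }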

    Read the article

  • List filtering: list comprehension vs. lambda + filter

    - by Agos
    I happened to find myself having a basic filtering need: I have a list and I have to filter it by an attribute of the items. My code looked like this: list = [i for i in list if i.attribute == value] But then I thought, wouldn't it be better to write it like this? filter(lambda x: x.attribute == value, list) It's more readable, and if needed for performance the lambda could be taken out to gain something. Question is: are there any caveats to using the second way? Any performance difference? Am I missing the Pythonic Way™ entirely and should I do it in yet another way (such as using itemgetter instead of the lambda)? Thanks in advance.
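
    For the performance half of the question, timeit makes the comparison concrete; a hedged sketch (the Item type and the sizes below are invented for illustration):

        import timeit
        from collections import namedtuple

        Item = namedtuple('Item', 'attribute')
        data = [Item(i % 5) for i in range(10000)]
        value = 3

        comp_time = timeit.timeit(
            lambda: [i for i in data if i.attribute == value], number=100)
        filt_time = timeit.timeit(
            lambda: list(filter(lambda x: x.attribute == value, data)), number=100)

        print('comprehension: %.3fs   filter+lambda: %.3fs' % (comp_time, filt_time))

    The comprehension usually wins here because the filter version pays for a Python-level function call per element; either way, measuring on your own data settles it.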

    Read the article

  • Singly-Linked Lists insert_back and isIncreasing

    - by rezivor
    I just finished writing a program in which I can add, remove, or print the objects in a list, but I am having difficulty implementing two more functions. The first is insert_back, which inserts a value at the end of the list; I also have to modify the representation of a List, and alter whatever methods are necessary, so that insert_back runs in constant time, O(1). This new operation should have the signature: void List::insert_back( const Object& data ); The second is isIncreasing. For example, for a list containing head -> () (11) (8) (15) (3), isIncreasing() should return false, while it would return true for a list containing head -> () (7) (9) (15). This new operation should have the signature: bool List::isIncreasing() const; Thank you
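
    A hedged sketch of the usual way to get O(1) insert_back: keep a tail pointer in the List alongside head. The member and constructor names below are invented, and the list is assumed to start with the dummy header node shown in the question, so an empty list is just that header.

        // Assumed representation change: List now stores both head_ (the dummy
        // header node) and tail_ (the last node, initially the header itself).
        void List::insert_back(const Object& data)
        {
            Node* node = new Node(data);   // hypothetical Node(const Object&) constructor
            node->next = NULL;
            tail_->next = node;            // append after the current last node...
            tail_ = node;                  // ...and remember it as the new tail: O(1)
        }

        bool List::isIncreasing() const
        {
            // Start at the first real node (head_ is the dummy header).
            for (Node* p = head_->next; p != NULL && p->next != NULL; p = p->next)
                if (!(p->data < p->next->data))
                    return false;          // found a non-increasing adjacent pair
            return true;                   // zero or one element counts as increasing
        }

    Every other method that can remove the last node (or empty the list) has to keep tail_ in sync, which is the "alter whatever methods are necessary" part of the assignment.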

    Read the article

  • List.Clear() followed by List.Add() not working

    - by Vincent
    I have the following C# class/method, used in a card game:

        class Hand
        {
            private List<Card> myCards = new List<Card>();

            public void sortBySuitValue()
            {
                IEnumerable<Card> query =
                    from s in myCards
                    orderby (int)s.suit, (int)s.value
                    select s;

                myCards = new List<Card>();
                myCards.AddRange(query);
            }
        }

    This works fine. However, I had trouble at first: instead of using myCards = new List<Card>(); to 'reset' myCards, I would call myCards.Clear(), but after calling Clear() I could not get myCards.Add() or myCards.AddRange() to have any effect; the count would stay at zero. Is my current approach good? Is using LINQ to sort my cards good/bad?
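
    A hedged guess at what was happening, with a sketch: the LINQ query is evaluated lazily over myCards itself, so calling Clear() first empties the very sequence that AddRange(query) then enumerates, and nothing gets added. Materializing the ordering before clearing (or sorting in place) avoids that; the names below follow the class above.

        // requires: using System.Collections.Generic; using System.Linq;
        public void sortBySuitValue()
        {
            List<Card> ordered = myCards
                .OrderBy(c => (int)c.suit)
                .ThenBy(c => (int)c.value)
                .ToList();          // force evaluation while myCards still has its items

            myCards.Clear();        // now safe: 'ordered' no longer depends on myCards
            myCards.AddRange(ordered);
        }

        // In-place alternative with no intermediate list:
        // myCards.Sort((a, b) => a.suit != b.suit
        //     ? ((int)a.suit).CompareTo((int)b.suit)
        //     : ((int)a.value).CompareTo((int)b.value));

    Either way, the LINQ approach itself is fine; deferred execution is the most likely explanation for the Clear() behavior described.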

    Read the article

  • how does linear probing handle this?

    - by Weadadada Awda
    Given:

    • the hash function: h(x) = | 2x + 5 | mod M
    • a bucket array of capacity N
    • a set of objects with keys: 12, 44, 13, 88, 23, 94, 11, 39, 20, 16, 5 (to be inserted from left to right)

    4.a [5 pts] Write the hash table where M = N = 11 and collisions are handled using linear probing.

    So I got up to here:

        x x x x x 44 88 12 23 13 94

    but the next key (the 11) should now go after the 94. Does the probe start again from the beginning of the table, or what? Thanks
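
    A hedged sketch of the probing rule in question (plain Python, names invented): the next slot is always (slot + 1) mod N, and that modulo is exactly what sends the probe from the last bucket back to index 0.

        M = N = 11
        table = [None] * N

        def insert(key):
            slot = (2 * key + 5) % M          # h(x) = |2x + 5| mod M (keys here are positive)
            while table[slot] is not None:    # linear probing on collision
                slot = (slot + 1) % N         # wraps from bucket 10 back to bucket 0
            table[slot] = key

        for k in [12, 44, 13, 88, 23, 94, 11, 39, 20, 16, 5]:
            insert(k)
        print(table)

    So yes: once 94 has filled the last bucket, the probe for 11 wraps around and continues from index 0.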

    Read the article

  • Using a "white list" for extracting terms for Text Mining

    - by [email protected]
    In Part 1 of my post on "Generating cluster names from a document clustering model" (part 1, part 2, part 3), I showed how to build a clustering model from text documents using Oracle Data Miner, which automates preparing data for text mining. In this process we specified a custom stoplist and lexer and relied on Oracle Text to identify important terms. However, there is an alternative approach, the white list, which uses a thesaurus object with the Oracle Text CTXRULE index to allow you to specify the important terms.

    INTRODUCTION

    A stoplist is used to exclude, i.e., black list, specific words in your documents from being indexed. For example, words like a, if, and, or, and but normally add no value when text mining. Other words can also be excluded if they do not help to differentiate documents, e.g., the word Oracle is ubiquitous in the Oracle product literature.

    One problem with stoplists is determining which words to specify. This usually requires inspecting the terms that are extracted, manually identifying which ones you don't want, and then re-indexing the documents to determine if you missed any. Since a corpus of documents could contain thousands of words, this could be a tedious exercise. Moreover, since every word is considered as an individual token, a term excluded in one context may be needed to help identify a term in another context. For example, in our Oracle product literature example, the words "Oracle Data Mining" taken individually are not particularly helpful. The term "Oracle" may be found in nearly all documents, as with the term "Data." The term "Mining" is more distinctive, but could also refer to the mining industry. If we exclude "Oracle" and "Data" by specifying them in the stoplist, we lose valuable information. But if we include them, they may introduce too much noise.

    Still, when you have a broad vocabulary or don't have a list of specific terms of interest, you rely on the text engine to identify important terms, often by computing the term frequency - inverse document frequency metric. (This is effectively a weight associated with each term indicating its relative importance in a document within a collection of documents. We'll revisit this later.) The results using this technique are often quite valuable.

    As noted above, an alternative to the subtractive nature of the stoplist is to specify a white list, or a list of terms (perhaps multi-word) that we want to extract and use for data mining. The obvious downside to this approach is the need to specify the set of terms of interest. However, this may not be as daunting a task as it seems. For example, in a given domain (Oracle product literature), there is often a recognized glossary, or a list of keywords and phrases (Oracle product names, industry names, product categories, etc.). Being able to identify multi-word terms, e.g., "Oracle Data Mining" or "Customer Relationship Management", as a single token can greatly increase the quality of the data mining results. The remainder of this post and subsequent posts will focus on how to produce a dataset that contains white list terms, suitable for mining.

    CREATING A WHITE LIST

    We'll leverage the thesaurus capability of Oracle Text. Using a thesaurus, we create a set of rules that are in effect our mapping from single and multi-word terms to the tokens used to represent those terms. For example, "Oracle Data Mining" becomes "ORACLEDATAMINING." First, we'll create and populate a mapping table called my_term_token_map.
    All text has been converted to upper case, and values in the TERM column are intended to be mapped to the token in the TOKEN column.

        TERM                                TOKEN
        DATA MINING                         DATAMINING
        ORACLE DATA MINING                  ORACLEDATAMINING
        11G                                 ORACLE11G
        JAVA                                JAVA
        CRM                                 CRM
        CUSTOMER RELATIONSHIP MANAGEMENT    CRM
        ...

    Next, we'll create a thesaurus object my_thesaurus and a rules table my_thesaurus_rules:

        CTX_THES.CREATE_THESAURUS('my_thesaurus', FALSE);

        CREATE TABLE my_thesaurus_rules (main_term     VARCHAR2(100),
                                         query_string  VARCHAR2(400));

    We next populate the thesaurus object and rules table using the term token map. A cursor is defined over my_term_token_map. As we iterate over the rows, we insert a synonym relationship 'SYN' into the thesaurus. We also insert into the table my_thesaurus_rules the main term and the corresponding query string, which specifies synonyms for the token in the thesaurus.

        DECLARE
          cursor c2 is
            select token, term
            from my_term_token_map;
        BEGIN
          for r_c2 in c2 loop
            CTX_THES.CREATE_RELATION('my_thesaurus',r_c2.token,'SYN',r_c2.term);
            EXECUTE IMMEDIATE 'insert into my_thesaurus_rules values
                               (:1,''SYN(' || r_c2.token || ', my_thesaurus)'')'
            using r_c2.token;
          end loop;
        END;

    We are effectively inserting into the my_thesaurus_rules table the token to return and the corresponding query that will look up synonyms in our thesaurus, for example:

        'ORACLEDATAMINING'        SYN ('ORACLEDATAMINING', my_thesaurus)

    At this point, we create a CTXRULE index on the my_thesaurus_rules table:

        create index my_thesaurus_rules_idx on
               my_thesaurus_rules(query_string)
               indextype is ctxsys.ctxrule;

    In my next post, this index will be used to extract the tokens that match each of the rules specified. We'll then compute the tf-idf weights for each of the terms and create a nested table suitable for mining.
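
    As a preview of how that CTXRULE index gets used (a hedged sketch; the sample document text here is invented, and the follow-up post covers the real extraction step): the MATCHES operator returns the rules, and therefore the tokens, whose thesaurus-expanded query matches a given piece of text.

        SELECT r.main_term AS token
        FROM   my_thesaurus_rules r
        WHERE  MATCHES(r.query_string,
                       'Oracle Data Mining classification in 11g') > 0;

        -- With the sample mapping above, this would be expected to return
        -- ORACLEDATAMINING, DATAMINING and ORACLE11G for that text.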

    Read the article

  • How to store a linked list in a struct in C

    - by LuckySlevin
        typedef struct child_list {
            int count;
            char vo[100];
            struct child_list *next;   /* 'child_list *next' is not valid C before the typedef completes */
        } child_list;

        typedef struct parent_list {
            char vo[100];
            child_list *head;
            int count;
            struct parent_list *next;
        } parent_list;

    As you can see there are two structures. child_list is used to create a linked list, and that list is stored inside a linked list of parent_list nodes. My problem is displaying the child list that sits inside each parent_list node. I have already written append and the other operations. For example, if I enter ab cd ab ja cd ab, what I want to get when displaying the parent_list is:

        Word   Count   List
        ab     3       cd->ja
        cd     2       ab->ab
        ja     1       cd

    The problematic part is displaying the child_list stored in the parent_list nodes (the List column of the output). If my question isn't clear, please ask for further info.
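
    A hedged sketch of a display routine over those (corrected) structures, assuming a parent_list* pointer to the first word node and that each parent node's head points to its list of followers:

        #include <stdio.h>

        void display(const parent_list *words)
        {
            const parent_list *p;
            const child_list *c;

            printf("Word   Count   List\n");
            for (p = words; p != NULL; p = p->next) {
                printf("%-6s %-7d ", p->vo, p->count);
                for (c = p->head; c != NULL; c = c->next) {
                    printf("%s", c->vo);
                    if (c->next != NULL)
                        printf("->");          /* separator between follower words */
                }
                printf("\n");
            }
        }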

    Read the article

  • Performance of inter-database query (between linked servers)

    - by Swoosh
    I have an import between two linked servers. I basically have to get the data from a multi-table join into a table on my side. The current query is something like this:

        select a.*
        from db1.dbo.tbl1 a
        inner join db1.dbo.tbl2 on ...
        inner join db1.dbo.tbl3 on ...
        inner join db1.dbo.tbl4 on ...
        inner join db2.dbo.myside on ...

    db1 = linked server, db2 = my own database. After this, I use an insert into + select to add this data to my table, which is located in db2 (usually a few hundred records, with the import running once a minute). My question is about performance. The tables on the linked server (tbl1, tbl2, tbl3, tbl4) are huge, with millions of records, and they are slowing down the import process. I was told that if I do the join on the "other" side (db1, the linked server), for example in a stored procedure, then even if the query looks the same it would run faster. Is that right? This is kind of hard to test. Note that the join contains a table from my database too. Also, are there other "tricks" I could use to make this run faster? Thanks
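
    One common way to push the heavy work to the remote side without writing a stored procedure is OPENQUERY, which sends the quoted query to the linked server as-is and only ships its result set back; a hedged sketch (table, column, and server names are placeholders for the ones above):

        INSERT INTO db2.dbo.import_target (col1, col2)
        SELECT r.col1, r.col2
        FROM OPENQUERY([db1],
             'SELECT a.col1, a.col2, a.key_col
              FROM   dbo.tbl1 a
              JOIN   dbo.tbl2 b ON b.id = a.id
              JOIN   dbo.tbl3 c ON c.id = a.id
              JOIN   dbo.tbl4 d ON d.id = a.id') AS r
        JOIN db2.dbo.myside m ON m.key_col = r.key_col;   -- the local join stays local

    The four-way join over the millions-of-row tables then runs entirely on db1's optimizer; making the remote query as selective as possible matters, because any filtering that depends on the local table still happens after the rows have crossed the link.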

    Read the article

  • Python 2.7 list of lists manipulation functionality

    - by user3688163
    I am trying to perform several operations on the myList list of lists below and am having some trouble figuring it out. I am very new to Python.

        myList = [
            ['Issue Id','1.Completeness for OTC','Break',3275,33,33725102303,296384802,20140107],
            ['Issue Id','2.Validity check1 for OTC','Break',3308,0,34021487105,0,20140107],
            ['Issue Id','3.Validity check2 for OTC','Break',3308,0,34021487105,0,20140107],
            ['Issue Id','4.Completeness for RST','Break',73376,1,8.24931E+11,44690130,20140107],
            ['Issue Id','5.Validity check1 for RST','Break',73377,0,8.24976E+11,0,20140107],
            ['Liquidity','1. OTC - Null','Break',7821 0,2.28291E+11,0,20140110],
            ['Liquidity','2. OTC - Unmapped','Break',7778,43,2.27712E+11,579021732.8,20140110],
            ['Liquidity','3. RST - Null','Break',335120,0,1.01425E+12,0,20140110],
            ['Liquidity','4. RST - Unmapped','Break',334608,512,1.01351E+12,735465433.1,20140110],
            ['Liquidity','5. RST - Valid','Break',335120,0,1.01425E+12,0,20140110],
            ['Issue Id','1.Completeness for OTC','Break',3292,33,32397924450,306203929,20140110],
            ['Issue Id','2.Validity check1 for OTC','Break',3325,0,32704128379,0,20140110],
            ['Issue Id','3.Validity check2 for OTC','Break',3325,0,32704128379,0,20140110],
            ['Issue Id','4.Completeness for RST','Break',73594,3,8.5352E+11,69614602,20140110],
            ['Issue Id','5.Validity check1 for RST','Break',73597,0,8.5359E+11,0,20140110],
            ['Unlinked Silver ID','DQ','Break',3201318,176,20000000,54974.33386,20140101],
            ['Missing GCI','DQ','Break',3201336,158,68000000,49351.9588,20140101],
            ['Missing Book','DQ Break',3192720,8774,3001000000,2740595.484,20140101],
            ['Matured Trades','DQ','Break',3201006,488,1371000000,152428.8348,20140101],
            ['Illiquid Trades','1.Completeness Check for range','Break',43122,47,88597695671,54399061.43,20140107],
            ['Illiquid Trades','2.Completeness Check for non','Break',39033,0,79133622401,0,20140107]
            ]

    I am trying to get the result below but do not know how to do so:

        newList = [
            ['Issue Id','1.Completeness for OTC:2.Validity check1 for OTC:3.Validity check2 for OTC','Break',3275,33,33725102303,296384802,20140107],
            ['Issue Id','4.Completeness for RST:5.Validity check1 for RST','Break',73376,1,8.24931E+11,44690130,20140107],
            ['Liquidity','1. OTC - Null','Break:2. OTC - Unmapped','Break',7821 0,2.28291E+11,0,20140110],
            ['Liquidity','3. RST - Null:4. RST - Unmapped:5. RST - Valid','Break',335120,0,1.01425E+12,0,20140110],
            ['Issue Id','1.Completeness for OTC:2.Validity check1 for OTC:3.Validity check2 for OTC','Break',3292,33,32397924450,306203929,20140110],
            ['Issue Id','4.Completeness for RST:5. RST - Valid','Break',73594,3,8.5352E+11,69614602,20140110],
            ['Unlinked Silver ID','DQ','Break',3201318,176,20000000,54974.33386,20140101],
            ['Missing GCI','DQ','Break',3201336,158,68000000,49351.9588,20140101],
            ['Missing Book','DQ Break',3192720,8774,3001000000,2740595.484,20140101],
            ['Matured Trades','DQ','Break',3201006,488,1371000000,152428.8348,20140101],
            ['Illiquid Trades','1.Completeness Check for range','Break',43122,47,88597695671,54399061.43,20140107],
            ['Illiquid Trades','2.Completeness Check for non','Break',39033,0,79133622401,0,20140107]
            ]

    Rules to create newList:
    Create a new entry in newList when the rows of myList meet the following conditions:

    - Rows that match on `myList[i][0]` and `myList[i][7]` but whose (1) sums of `myList[i][3]` and `myList[i][4]` and (2) sums of `myList[i][5]` and `myList[i][6]` differ from each other are listed as-is in newList.
    - When multiple rows match on both `myList[i][0]` (the type) and `myList[i][7]` (the date), create one new entry for each set of rows with matching `myList[i][0]` and `myList[i][7]` whose (1) sums of `myList[i][3]` and `myList[i][4]` and (2) sums of `myList[i][5]` and `myList[i][6]` differ from those of the other rows with matching `myList[i][0]` and `myList[i][7]`.
    - For all rows with matching `myList[i][0]` and `myList[i][7]` whose sums of `myList[i][3]` + `myList[i][4]` and `myList[i][5]` + `myList[i][6]` also match, I am trying to concatenate `myList[i][1]`, separated by a ':'.

    So essentially, only those rows in myList whose sums of `myList[i][3]` + `myList[i][4]` and `myList[i][5]` + `myList[i][6]` are different from the other rows end up listed separately in newList. The newList above illustrates the result I am trying to achieve. If anyone has any ideas how to do this, they would be greatly appreciated. Thank you!
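
    A hedged sketch of the grouping/concatenation building block (this is one reading of the rules above and may need adjusting; note too that the E+11 figures are rounded, so the float sums may need a tolerance rather than exact equality): rows are merged when column 0, column 7, and the two sums all agree, and merged rows join their column-1 labels with ':'.

        from collections import OrderedDict

        def merge_rows(rows):
            groups = OrderedDict()
            for row in rows:
                # key: (type, date, count-sum, amount-sum)
                key = (row[0], row[7], row[3] + row[4], row[5] + row[6])
                if key in groups:
                    groups[key][1] += ':' + row[1]   # concatenate the check names
                else:
                    groups[key] = list(row)          # copy so myList is left untouched
            return list(groups.values())

        newList = merge_rows(myList)

    Rows whose sums match nothing else simply form one-row groups and come through unchanged.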

    Read the article
