Search Results

Search found 21885 results on 876 pages for 'radix point'.


  • can QuickGraph support these requirements? (includes database persistence support)

    - by Greg
    Hi, would QuickGraph be able to help me out with my requirements below?

    (a) I want to model a graph of nodes and directional relationships between nodes - for example, to model web pages/files linked under a URL, or to model IT infrastructure and the dependencies between hardware/software. The library would include methods such as:

        * Node.GetDirectParents() // i.e. there could be more than one direct parent for a node
        * Node.GetRootParents() // i.e. traverse the tree to the top root parent(s) for the given node
        * Node.GetDirectChildren()
        * Node.GetAllChildren()

    (b) I have to persist the data to a database - so it should support SQL Server, and ideally SQLite as well.

    If it does support these requirements, then I'd love to hear: any pointers to parts of QuickGraph to dig into? And what is the best concept regarding its usage with database persistence - is it a simpler design to assume every search/method works directly on the database, or does QuickGraph have the smarts to work in memory and then "save" all changes to the database at an appropriate point in time (e.g. like ADO.NET does with DataTable etc.)? Thanks in advance
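
    For what it's worth, the four methods map onto plain directed-graph traversals. A minimal sketch in Python using networkx (a stand-in library for illustration only, not QuickGraph or its API; the node names are made up):

        import networkx as nx

        g = nx.DiGraph()
        g.add_edge("root", "page1")    # edge direction: parent -> child
        g.add_edge("root", "page2")
        g.add_edge("page1", "asset")
        g.add_edge("page2", "asset")   # "asset" has two direct parents

        node = "asset"
        direct_parents = list(g.predecessors(node))         # GetDirectParents
        root_parents = [n for n in nx.ancestors(g, node)
                        if g.in_degree(n) == 0]             # GetRootParents
        direct_children = list(g.successors(node))          # GetDirectChildren
        all_children = nx.descendants(g, node)              # GetAllChildren

    Whether the graph lives in memory and is saved in bulk, or every traversal hits the database, is a separate design decision from the traversal API itself.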

    Read the article

  • IHTMLTxtRange.execCommand("Copy",false,null) fails due to IE settings

    - by srirambalaji-s
    We have a .NET application that is used for editing/rendering customized HTML documents. It is hosted in IE using the AxSHDocVw.AxWebBrowser controls. We proceed by navigating to the "about:blank" page initially, then we change the document by writing our custom values into it. The problem we are facing is that the call to IHTMLTxtRange.execCommand("Copy",false,null) fails if we don't enable the IE security setting in the Internet security zone (Scripting - Allow Programmatic Access to Clipboard). In order to bypass the security setting, I tried to point to a local HTML file initially while navigating, but this fails as soon as I modify the document. I want to use the IHTMLTxtRange.execCommand("Copy",false,null) command so that I can customize our Copy/Paste operations. Is there any other way I can do this? Please share your ideas in order to overcome this situation. Thanks. Sriram

    Read the article

  • Best Ruby ORM for Wrapping around Legacy SQL Server Database?

    - by Technocrat
    Hi. I found this answer and it sounds like almost exactly what I'm doing. I have heard mixed answers about whether or not DataMapper can support SQL Server through DataObjects. Basically, we have an app that uses a consistently structured database - consistently named tables, etc. - in SQL Server. We're making all kinds of tools that have to interact with it, some of them remotely, so I decided that we need to create some common, simple access point to do read/write operations on the SQL Server app, since its API is all C# and other things I despise. Now my question is whether anyone has any examples or projects they know of where a Ruby ORM can essentially create models for another application's legacy database by defining the conventions of each model's pkeys, fkeys, table names, etc. Sequel is the only ORM I've used with SQL Server, but never to do anything quite like this. Any suggestions?

    Read the article

  • What's the best solution for a database used in conjunction with Maps in Android?

    - by Andrew
    Could someone please point me in the right direction. My project involves a database where users enter their address and other info from my website. This database is then referenced in my Android application to show the locations of these addresses. I have yet to start and just came up with this idea. My question is: what would be the best method to create a database that is easily modified through my website (MySQL, PHP, etc.) and also easily referenced through Android and the Google Maps API? I need some ideas on the languages I will need to use to create this database and website so I can go buy the necessary books and start reading up. Thanks so much

    Read the article

  • What C# data types can be nullable types?

    - by Randy Minder
    Can someone give me a list, or point me to where I can find a list, of C# data types that can be a nullable type? For example: I know that Nullable<int> is OK, and I know that Nullable<byte[]> is not. I'd like to know which types are nullable and which are not. BTW, I know I can test for this at runtime. However, this is for a code generator we're writing, so I don't have an actual type. I just know that a column is "string" or "int32", etc. Thanks.

    Read the article

  • Replace input type=file with an image

    - by nikospkrk
    Hi, Like a lot of people, I'd like to customize the ugly input type=file, and I know that it can't be done without some hacks and/or JavaScript. But the thing is that in my case the upload buttons are just for uploading images (jpeg|jpg|png|gif), so I was wondering if I could use a "clickable" image which would act exactly like an input type=file (show the dialog box, and produce the same $_FILES on the submitted page). I found some workaround here, and this interesting one too (but it does not work on Chrome =/). What do you guys do when you want to add some style to your file buttons? If you have any point of view about it, just hit the answer button ;) Cheers, Nicolas

    Read the article

  • WPF with code only

    - by rwallace
    I've seen a lot of questions about the merits of WPF here, and essentially every answer says it's the bee's knees, but essentially every answer also talks about things like XAML, and in many cases graphic designers, Expression Blend, etc. My question is: is it worth getting into WPF if you're a solo coder working in C# only? Specifically, I don't have a graphic designer, nor any great talent in that area myself; I don't use point-and-click tools; I write everything in C#, not XML. WinForms works fine under those conditions. Is the same true of WPF? Or does it turn out that important functions can only be done in XAML, that the default settings aren't intended for actual use, that you have to have a graphic designer on the team to make things look good, etc., and that somebody in my position would be better off sticking with WinForms?

    Read the article

  • "painting" one array onto another using python / numpy

    - by Nate
    I'm writing a library to process gaze tracking in Python, and I'm rather new to the whole numpy/scipy world. Essentially, I'm looking to take an array of (x,y) values in time and "paint" some shape onto a canvas at those coordinates. For example, the shape might be a blurred circle. The operation I have in mind is more or less identical to using the paintbrush tool in Photoshop. I've got an iterative algorithm that trims my "paintbrush" to be within the bounds of my image and adds each point to an accumulator image, but it's slow(!), and it seems like there's probably a fundamentally easier way to do this. Any pointers as to where to start looking?
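
    For what it's worth, the trimming can be done with array slicing instead of a per-pixel loop. A minimal numpy sketch of that idea (the brush shape, canvas size, and sample coordinates are made up):

        import numpy as np

        def stamp(canvas, brush, x, y):
            # Add the brush onto the canvas centred at (x, y), clipping the
            # brush at the canvas edges via matching slices.
            bh, bw = brush.shape
            top, left = y - bh // 2, x - bw // 2
            y0, y1 = max(top, 0), min(top + bh, canvas.shape[0])
            x0, x1 = max(left, 0), min(left + bw, canvas.shape[1])
            canvas[y0:y1, x0:x1] += brush[y0 - top:y1 - top, x0 - left:x1 - left]

        canvas = np.zeros((480, 640))
        yy, xx = np.mgrid[-10:11, -10:11]
        brush = np.exp(-(xx**2 + yy**2) / 50.0)          # a blurred-circle brush
        for x, y in [(5, 5), (320, 240), (639, 479)]:    # gaze samples, incl. edge cases
            stamp(canvas, brush, x, y)

    Each stamp is then a single vectorized add over the brush footprint rather than a Python-level loop over pixels.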

    Read the article

  • ColdFusion static key/value list?

    - by richardtallent
    I have a database table that is a dictionary of defined terms - key, value. I want to load the dictionary into the application scope from the database and keep it there for performance (it doesn't change). I gather this is probably some sort of "struct," but I'm extremely new to ColdFusion (helping out another team). Then, I'd like to do some simple string replacement on some strings being output to the browser, looping through the defined terms and replacing the terms with some HTML that defines them (a hover or a link - details to be worked out later, not important). Can anyone point me in the right direction on:

        * How to define the structure (if that is what I need for a key/value pair list)
        * How to query at application start-up and reuse the list properly
        * The best way to do the string replacement

    Read the article

  • removing a line from a text file?

    - by Blackbinary
    Hi all. I am working with a text file which contains a list of processes under my program's control, along with relevant data. At some point, one of the processes will finish, and thus will need to be removed from the file (as it's no longer under control). Here is a sample of the file contents (which has entries added "randomly"):

        PID=25729 IDLE=0.200000 BUSY=0.300000 USER=-10.000000
        PID=26416 IDLE=0.100000 BUSY=0.800000 USER=-20.000000
        PID=26522 IDLE=0.400000 BUSY=0.700000 USER=-30.000000

    So, for example, if I wanted to remove the line that says PID=26416..., how could I do that without writing the file over again? I can use external unix commands, however I am not very familiar with them, so please, if that is your suggestion, give an example. Thanks!
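
    Strictly speaking, a middle line of a text file can't be deleted in place: even an external command like sed -i '/^PID=26416 /d' filename rewrites the file behind the scenes. A minimal Python sketch of the usual rewrite-and-swap approach (the file name is made up):

        import os
        import tempfile

        def remove_pid(path, pid):
            prefix = "PID=%d " % pid
            with open(path) as src, tempfile.NamedTemporaryFile(
                    "w", dir=os.path.dirname(os.path.abspath(path)),
                    delete=False) as dst:
                for line in src:
                    if not line.startswith(prefix):   # keep every other process
                        dst.write(line)
            os.replace(dst.name, path)   # atomically swap in the new file

        remove_pid("processes.txt", 26416)

    Writing to a temporary file first and renaming it over the original means a crash mid-rewrite can't leave you with a half-written process list.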

    Read the article

  • Does Google provide any Android tutorials that teach how to implement a Service?

    - by Bub
    I apologize in advance for the "newbie" nature of this question. Here is my predicament: I'm brand new to Android and developing in general. I'm using Android's SDK with Eclipse Galileo. I've followed several tutorials to create different layouts. I've even learned recently how to use radio buttons and verify which ones were selected. Now I need to create a service that downloads and updates an XML file within the application. I've tried to locate a simple tutorial for services on Google's developer site but so far, so bad. If they exist, could somebody point me in the right direction? On the other hand, I've been told Google's tutorials are a little outdated. Is that true? If so, are there any other tutorials that would hand-hold (and possibly over-explain) how to use a service, for a true newbie, for free (like Google's)? Any suggestions would be appreciated.

    Read the article

  • Microsoft Access to SQL Server - synchronization

    - by David Pfeffer
    I have a client that uses a point-of-sale solution involving an Access database for its back-end storage. I am trying to provide this client with a service that involves, for SLA reasons, the need to copy parts of this Access database into tables in my own database server which runs SQL Server 2008. I need to do this on a periodic basis, probably about 5 times a day. I do have VPN connectivity to the client. Is there an easy programmatic way to do this, or an available tool? I don't want to handcraft what I assume is a relatively common task.

    Read the article

  • How to Load Oracle Tables From Hadoop Tutorial (Part 5 - Leveraging Parallelism in OSCH)

    - by Bob Hanckel
    Using OSCH: Beyond Hello World

    In the previous post we discussed a "Hello World" example for OSCH, focusing on the mechanics of getting a toy end-to-end example working. In this post we are going to talk about how to make it work for big data loads. We will explain how to optimize an OSCH external table for load, paying particular attention to Oracle's DOP (degree of parallelism), the number of external table location files we use, and the number of HDFS files that make up the payload. We will provide some rules that serve as best practices when using OSCH. The assumption is that you have read the previous post, have some end-to-end OSCH external tables working, and now want to ramp up the size of the loads.

    Using OSCH External Tables for Access and Loading

    OSCH external tables are no different from any other Oracle external tables. They can be used to access HDFS content using Oracle SQL:

        SELECT * FROM my_hdfs_external_table;

    or use the same SQL access to load a table in Oracle:

        INSERT INTO my_oracle_table SELECT * FROM my_hdfs_external_table;

    To speed up the load time, you will want to control the degree of parallelism (i.e. DOP) and add two SQL hints:

        ALTER SESSION FORCE PARALLEL DML PARALLEL 8;
        ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8;
        INSERT /*+ append pq_distribute(my_oracle_table, none) */ INTO my_oracle_table SELECT * FROM my_hdfs_external_table;

    There are various ways of hinting at what level of DOP you want to use. The ALTER SESSION statements above force the issue, assuming you (the user of the session) are allowed to assert the DOP (more on that in the next section). Alternatively, you could embed additional parallel hints directly into the INSERT and SELECT clauses respectively:

        /*+ parallel(my_oracle_table,8) */
        /*+ parallel(my_hdfs_external_table,8) */

    Note that the "append" hint lets you load a target table by reserving space above a given "high watermark" in storage and uses Direct Path load. In other words, it doesn't try to fill blocks that are already allocated and partially filled; it uses unallocated blocks. It is an optimized way of loading a table without incurring the typical resource overhead associated with run-of-the-mill inserts. The "pq_distribute" hint in this context unifies the INSERT and SELECT operators to make data flow during a load more efficient. Finally, your target Oracle table should be defined with the "NOLOGGING" and "PARALLEL" attributes. The combination of "NOLOGGING" and the "append" hint disables REDO logging, and its overhead. The "PARALLEL" clause tells Oracle to try to use parallel execution when operating on the target table.

    Determine Your DOP

    It might feel natural to build your datasets in Hadoop and then afterwards figure out how to tune the OSCH external table definition, but you should start backwards. You should focus on the Oracle database, specifically the DOP you want to use when loading (or accessing) HDFS content using external tables. The DOP in Oracle controls how many PQ slaves are launched in parallel when executing an external table. Typically the DOP is something you want Oracle to control transparently, but for loading content from Hadoop with OSCH, it's something that you will want to control. Oracle computes the maximum DOP that can be used by an Oracle user.
    The maximum value that can be assigned is an integer value typically equal to the number of CPUs on your Oracle instances, times the number of cores per CPU, times the number of Oracle instances. For example, suppose you have a RAC environment with 2 Oracle instances, and suppose that each system has 2 CPUs with 32 cores. The maximum DOP would be 128 (i.e. 2*2*32). In point of fact, if you are running on a production system, the maximum DOP you are allowed to use will be restricted by the Oracle DBA. This is because using the system maximum DOP can subsume all system resources on Oracle and starve anything else that is executing. Obviously, on a production system where resources need to be shared 24x7, this can't be allowed to happen. The use cases for being able to run OSCH with a maximum DOP are when you have exclusive access to all the resources on an Oracle system. This can be in situations when you are first seeding tables in a new Oracle database, or when normal activity in the production database can safely be taken off-line for a few hours to free up resources for a big incremental load. Using OSCH on high-end machines (specifically Oracle Exadata and Oracle BDA cabled with Infiniband), this mode of operation can load up to 15TB per hour. The bottom line is that you should first figure out what DOP you will be allowed to run with by talking to the DBAs who manage the production system. You then use that number to derive the number of location files, and (optionally) the number of HDFS data files that you want to generate, assuming that is flexible.

    Rule 1: Find out the maximum DOP you will be allowed to use with OSCH on the target Oracle system.

    Determining the Number of Location Files

    Let's assume that the DBA told you that your maximum DOP is 8. You want the number of location files in your external table to be big enough to utilize all 8 PQ slaves, and you want them to represent equally balanced workloads. Remember, location files in OSCH are metadata lists of HDFS files and are created using OSCH's External Table tool. They also represent the workload size given to an individual Oracle PQ slave (i.e. a PQ slave is given one location file to process at a time, and only it will process the contents of that location file).

    Rule 2: The size of the workload of a single location file (and the PQ slave that processes it) is the sum of the content size of the HDFS files it lists.

    For example, if a location file lists 5 HDFS files which are each 100GB in size, the workload size for that location file is 500GB. The number of location files that you generate is something you control by providing a number as input to OSCH's External Table tool.

    Rule 3: The number of location files chosen should be a small multiple of the DOP.

    Each location file represents one workload for one PQ slave. So the goal is to keep all slaves busy and try to give them equivalent workloads. Obviously, if you run with a DOP of 8 but have 5 location files, only five PQ slaves will have something to do; the other three will have nothing to do and will quietly exit. If you run with 9 location files, then the PQ slaves will pick up the first 8 location files and, assuming they have equal workloads, will finish up at about the same time. But the first PQ slave to finish its job will then be rescheduled to process the ninth location file, potentially doubling the end-to-end processing time. So for this DOP, using 8, 16, or 32 location files would be a good idea.
    Determining the Number of HDFS Files

    Let's start with the next rule and then explain it:

    Rule 4: The number of HDFS files should try to be a multiple of the number of location files, and the files should try to be relatively the same size.

    In our running example, the DOP is 8. This means that the number of location files should be a small multiple of 8. Remember that each location file represents a list of unique HDFS files to load, and that the sum of the files listed in each location file is a workload for one Oracle PQ slave. The OSCH External Table tool will look in an HDFS directory for a set of HDFS files to load. It will generate N location files (where N is the value you gave to the tool). It will then try to divvy up the HDFS files and do its best to make sure the workload across location files is as balanced as possible. (The tool uses a greedy algorithm that grabs the biggest HDFS file and delegates it to a particular location file. It then looks for the next biggest file and puts it in some other location file, and so on. A toy sketch of this greedy balancing appears at the end of this section.) The tool's ability to balance is reduced if HDFS file sizes are grossly out of balance or are too few.

    For example, suppose my DOP is 8 and the number of location files is 8. Suppose I have only 8 HDFS files, where one file is 900GB and the others are 100GB. When the tool tries to balance the load, it will be forced to put the singleton 900GB file into one location file, and put each of the 100GB files in the 7 remaining location files. The load-balance skew is 9 to 1. One PQ slave will be working overtime, while the slacker PQ slaves are off enjoying happy hour. If, however, the total payload (1600GB) were broken up into smaller HDFS files, the OSCH External Table tool would have an easier time generating a list where each workload for each location file is relatively the same. Applying Rule 4 above to our DOP of 8, we could divide the workload into 160 files that were approximately 10GB in size. For this scenario, the OSCH External Table tool would populate each location file with 20 HDFS file references, and all location files would have similar workloads (approximately 200GB per location file).

    As a rule, when the OSCH External Table tool has to deal with more and smaller files, it will be able to create more balanced loads. How small should HDFS files get? Not so small that the HDFS open and close file overhead starts having a substantial impact. For our performance test system (Exadata/BDA with Infiniband), I compared three OSCH loads of 1 TiB. One load had 128 HDFS files living in 64 location files, where each HDFS file was about 8GB. I then did the same load with 12800 files, where each HDFS file was about 80MB in size. The end-to-end load time was virtually the same. However, when I got ridiculously small (i.e. 128000 files at about 8MB per file), it started to make an impact and slow down the load time.

    What happens if you break rules 3 or 4 above? Nothing draconian; everything will still function. You just won't be taking full advantage of the generous DOP that was allocated to you by your friendly DBA. The key point of the rules articulated above is this: if you know that HDFS content is ultimately going to be loaded into Oracle using OSCH, it makes sense to chop it up into the right number of files, roughly the same size, derived from the DOP that you expect to use for loading.
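
    As an aside, here is a toy Python sketch (not OSCH's actual code; the file sizes in GB are made up) of the greedy balancing just described: repeatedly hand the biggest remaining HDFS file to the location file with the lightest workload so far.

        import heapq

        def balance(file_sizes_gb, n_location_files):
            # min-heap of (current workload, location file index)
            heap = [(0, i) for i in range(n_location_files)]
            buckets = [[] for _ in range(n_location_files)]
            for size in sorted(file_sizes_gb, reverse=True):
                load, i = heapq.heappop(heap)     # lightest location file so far
                buckets[i].append(size)
                heapq.heappush(heap, (load + size, i))
            return buckets

        # The skewed example from the text: one 900GB file and seven 100GB
        # files across 8 location files. No assignment avoids the 9-to-1 skew.
        print(balance([900] + [100] * 7, 8))
        # Carving the same 1600GB into 160 10GB files balances perfectly.
        print([sum(b) for b in balance([10] * 160, 8)])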
    Next Steps

    So far we have talked about OLH and OSCH as alternative models for loading. That's not quite the whole story. They can be used together in a way that provides for more efficient OSCH loads and allows one to be more flexible about scheduling on a Hadoop cluster and an Oracle Database to perform load operations. The next lesson will talk about Oracle Data Pump files generated by OLH and loaded using OSCH. It will also outline the pros and cons of using various load methods. This will be followed up with a final tutorial lesson focusing on how to optimize OLH and OSCH for use on Oracle's engineered systems: specifically Exadata and the BDA.

    Read the article

  • What's the correct way to stop a background process on Mac OS X?

    - by mcsheffrey
    I have an application with 2 components: a desktop application that users interact with, and a background process that can be enabled from the desktop application. Once the background process is enabled, it will run as a user launch agent independently of the desktop app. However, what I'm wondering is what to do when the user disables the background process. At this point I want to stop the background process but I'm not sure what the best approach is. The 3 options that I see are:

        1. Use the 'kill' command. Direct, but not reliable and just seems somewhat "wrong".
        2. Use an NSMachPort to send an exit request from the desktop app to the background process. This is the best approach I've thought of but I've run into an implementation problem (I'll be posting this in a separate query) and I'd like to be sure that the approach is right before going much further.
        3. Something else???

    Thank you in advance for any help/insight that you can offer.

    Read the article

  • setting up a private network using linksys router

    - by user287745
    Scenario:

        * a database server running SQL Server 2005 and SQL Server Management Studio 2005 (Express editions)
        * a web server running IIS 5.0 on Windows XP Pro
        * two other computers running Windows XP and Windows 98

    I have a Linksys router which I use as an access point for wireless (a laptop). There are 5 sockets behind it: four for clients and one for the internet. I would like to set up a LAN - something like a private hosting area with two clients. What should I do? Where do I connect what, and what would the changes in settings be? Right now it uses DHCP or something to assign IPs. Will the web server be attached to the internet socket? Where will the DB server be attached? Any guides, links, or help appreciated. Thank you

    Read the article

  • What is the benefit of using int instead of bigint in this case?

    - by Yeti
    (MySQL n00b) I have 3 tables: id = int(10), photo_id = bigint(20). PHOTO records are limited to 3 million.

    PHOTO:

        +-------+-----------------+
        | id    | photo_num       |
        +-------+-----------------+
        | 1     | 123456789123    |
        | 2     | 987654321987    |
        | 3     | 5432167894321   |
        +-------+-----------------+

    COLOR:

        +-------+-----------------+---------+
        | id    | photo_num       | color   |
        +-------+-----------------+---------+
        | 1     | 123456789123    | red     |
        | 2     | 987654321987    | blue    |
        | 3     | 5432167894321   | green   |
        +-------+-----------------+---------+

    SIZE:

        +-------+-----------------+---------+
        | id    | photo_num       | size    |
        +-------+-----------------+---------+
        | 1     | 123456789123    | large   |
        | 2     | 987654321987    | small   |
        | 3     | 5432167894321   | medium  |
        +-------+-----------------+---------+

    Both the COLOR and SIZE tables will have several million records.

    Q1: Is it better to change photo_num in COLOR and SIZE to int(10) and point it to PHOTO's id? Right now I use these (PHOTO is nowhere in the picture):

        SELECT * from COLOR WHERE photo_num='xxx';
        SELECT * from SIZE WHERE photo_num='xxx';

    Q2: How will the SELECT query look if PHOTO's id were used in COLOR and SIZE?
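
    A hedged sketch of Q2, using Python's sqlite3 as a stand-in for MySQL (the photo_id foreign-key column is the proposed change; data from the post): once COLOR stores PHOTO's int id, a lookup by photo number goes through a join.

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
            CREATE TABLE photo (id INTEGER PRIMARY KEY, photo_num INTEGER);
            CREATE TABLE color (id INTEGER PRIMARY KEY,
                                photo_id INTEGER REFERENCES photo(id),
                                color TEXT);
            INSERT INTO photo VALUES (1, 123456789123);
            INSERT INTO color VALUES (1, 1, 'red');
        """)
        # Replaces: SELECT * FROM COLOR WHERE photo_num = 'xxx'
        row = db.execute("""
            SELECT c.color
            FROM color c JOIN photo p ON p.id = c.photo_id
            WHERE p.photo_num = ?""", (123456789123,)).fetchone()
        print(row)   # ('red',)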

    Read the article

  • Loop colours from variables for graphics.py [Python 3.2]

    - by user1056548
    I am creating a graphics program that draws 100 x 100 squares next to each other, depending on the user-specified grid size. The user also inputs 4 colours for the squares to be coloured with (e.g. if they enter red,green,blue,yellow the squares will be coloured in that order, repeating the colours). Is it possible to loop the colours from the variables the user has given? Here is what I have so far:

        def main():
            print ("Please enter four comma separated colours e.g.: 'red,green,blue,yellow'\nAllowed colours are: red, green, blue, yellow and cyan")
            col1, col2, col3, col4 = input("Enter your four colours: ").split(',')
            win = GraphWin ("Squares", 500, 500)
            colours = [col1, col2, col3, col4]
            drawSquare (win, col1, col2, col3, col4, colours)
            win.getMouse()
            win.close()

        def drawSquare(win, col1, col2, col3, col4, colours):
            for i in range (4):
                for j in range (len(colours)):
                    colour = colours[j]
                    x = 50 + (i * 50)
                    circle = Circle (Point (x,50), 20)
                    circle.setFill(colour)
                    circle.draw(win)

    I think I should be using a list in some way, but can't work out exactly how to do it. Can anybody help?
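
    For what it's worth, a minimal sketch of the colour cycling, assuming this is Zelle's graphics.py and sticking to squares rather than circles: wrap each square's index in drawing order onto the palette with the modulo operator, so the four user-chosen colours repeat for any grid size.

        from graphics import GraphWin, Rectangle, Point

        def draw_squares(win, colours, grid_size, side=100):
            for row in range(grid_size):
                for col in range(grid_size):
                    square = Rectangle(Point(col * side, row * side),
                                       Point((col + 1) * side, (row + 1) * side))
                    # index of this square in drawing order, wrapped onto the palette
                    square.setFill(colours[(row * grid_size + col) % len(colours)])
                    square.draw(win)

        win = GraphWin("Squares", 500, 500)
        draw_squares(win, ["red", "green", "blue", "yellow"], 5)
        win.getMouse()
        win.close()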

    Read the article

  • Why does my program occasionally segfault when out of memory rather than throwing std::bad_alloc?

    - by Bradford Larsen
    I have a program that implements several heuristic search algorithms and several domains, designed to experimentally evaluate the various algorithms. The program is written in C++, built using the GNU toolchain, and run on a 64-bit Ubuntu system. When I run my experiments, I use bash's ulimit command to limit the amount of virtual memory the process can use, so that my test system does not start swapping. Certain algorithm/test instance combinations hit the memory limit I have defined. Most of the time, the program throws an std::bad_alloc exception, which is printed by the default handler, at which point the program terminates. Occasionally, rather than this happening, the program simply segfaults. Why does my program occasionally segfault when out of memory, rather than reporting an unhandled std::bad_alloc and terminating?

    Read the article

  • Java + GWT + GSON on server side

    - by Jan
    Hi everybody. I already read that there is no possibility of running GSON in GWT client code, but that it is possible to run it in server code. The latter is what I'm trying to achieve, but I am not getting it to work. I thought any class within the com.whatever.server package had access to the whole JRE namespace, including reflection. It seems that that is not the case. So how did all those developers manage to use GSON in GWT server code? (I'm new to GWT, so the answer may be really easy.) Thanks.

    Read the article

  • SQLite Transaction fills a table BEFORE the transaction is committed

    - by user1500403
    Hello. I have code that creates a DataTable (in memory) from a SELECT SQL statement. However, I realised that this DataTable is being filled during the procedure rather than as a result of the transaction commit statement. It does the job, but it's slow. What am I doing wrong?

        Inalready.Clear() 'clears a dictionary
        Using connection As New SQLite.SQLiteConnection(conectionString)
            connection.Open()
            Dim sqliteTran As SQLite.SQLiteTransaction = connection.BeginTransaction()
            Try
                oMainQueryR = "SELECT * FROM detailstable Where name= :name AND Breed= :Breed"
                Dim cmdSQLite As SQLite.SQLiteCommand = connection.CreateCommand()
                Dim oAdapter As New SQLite.SQLiteDataAdapter(cmdSQLite)
                With cmdSQLite
                    .CommandType = CommandType.Text
                    .CommandText = oMainQueryR
                    .Parameters.Add(":name", SqlDbType.VarChar)
                    .Parameters.Add(":Breed", SqlDbType.VarChar)
                End With
                Dim c As Long = 0
                For Each row As DataRow In list.Rows 'this is the list with 500 names
                    If Inalready.ContainsKey(row.Item("name")) Then
                    Else
                        c = c + 1
                        Form1.TextBox1.Text = " Fill .... " & c
                        Application.DoEvents()
                        Inalready.Add(row.Item("name"), row.Item("Breed"))
                        cmdSQLite.Parameters(":name").Value = row.Item("name")
                        cmdSQLite.Parameters(":Breed").Value = row.Item("Breed")
                        oAdapter.Fill(newdetailstable)
                    End If
                Next
                oAdapter.FillSchema(newdetailstable, SchemaType.Source)
                Dim z = newdetailstable.Rows.Count 'at this point newdetailstable is already filled and I haven't even committed the transaction
                ' sqliteTran.Commit()
            Catch ex As Exception
            End Try
        End Using

    Read the article

  • IPP linker errors on cygwin

    - by Jason Sundram
    I've built a program that uses MKL and IPP that runs on Mac and Linux. I'm now building that program for Windows using Cygwin and gcc, and can't get it to link. The errors I'm getting are:

        Warning: .drectve -defaultlib:"uuid.lib" ' unrecognized
        ../../../bin/libMath.a(VectorUtility.cxx.o):VectorUtility.cxx:(.text+0x95): undefined reference to _ippGetLibVersion'
        ../../../bin/libMath.a(VectorUtility.cxx.o):VectorUtility.cxx:(.text+0x157): undefined reference to `_ippsWinHann_32f_I'

    (and many more like that). I'm using link path /opt/intel/IPP/6.1.2.041/ia32/lib and linking to the following: ippiemerged, ippimerged, ippmemerged, ippmmerged, ippsemerged, ippsmerged and ippcorel. Can someone point me to what I'm doing wrong?

    Read the article

  • How does one advance in programming?

    - by Joe Barr
    I really have the feeling as if I'm stuck in my own craft. I've been developing and learning for a while now, and I keep having the feeling that I should have advanced more, or be more knowledgeable. I've started projects I didn't finish because I thought I lacked the knowledge and the skill to make that feature work just right, or to make that code magically appear on my screen. I've read books I didn't finish, thinking I wasn't advanced enough for the subjects they covered. I've been around long enough to know that everything comes with experience, hard work and dedication. Having said this, I just want to be able to work without getting stuck on a particular problem that involves my cluelessness about a language or a tool feature. My question to you would be: how does one advance in programming? What are the secrets (if any) to advancing to the point of fluency in a particular language or task? Thank you!

    Read the article

  • Where should I store user config data? Specifically, the path to the data file?

    - by jamone
    I have an app using a SQLite db, and I need the ability for the user to move the data file and point the app to where it moved to. I used the Entity Framework to create the model, and by default it puts the connection string in the App.Config file. From what I've read, if I make changes to the connection string there, they won't take effect until the app is restarted. That seems a bit clunky for my use. I see how I can init my model and pass in a custom string, but I'm unsure what the best practice is on where to store basic user preferences such as this: INI file, registry, somewhere else? I don't want the user to have to "Open" the file each time - just when it relocates - and then the app will try to auto-open from then on.

    Read the article

  • Failed to convert parameter value from a Guid to a String

    - by user320460
    Hello, I am at the end of my knowledge, and I googled for the answer too, but no luck :/ A week ago everything worked well. I did a revert on the repository, recreated the TableAdapter, etc. - nothing helped. When I try to save in my application, I get a System.InvalidCastException at this point in PersonListDataSet.cs:

        partial class P_GroupTableAdapter
        {
            public int Update(PersonListDataSet.P_GroupDataTable dataTable, string userId)
            {
                this.Adapter.InsertCommand.Parameters["@userId"].Value = userId;
                this.Adapter.DeleteCommand.Parameters["@userId"].Value = userId;
                this.Adapter.UpdateCommand.Parameters["@userId"].Value = userId;
                return this.Update(dataTable); // <-- exception occurs here
            }
        }

    Everything is stuck here because a Guid cannot be converted to a string - and I checked the DataTable preview with the magnifier tool; it really is a true Guid in the column of the DataTable. How can that happen?

    Read the article

  • Dynamically/recursively building hashes in Perl?

    - by Gaurav Dadhania
    I'm quite new to Perl and I'm trying to build a hash recursively, and getting nowhere. I tried searching for tutorials on dynamically building hashes, but all I could find were introductory articles about hashes. I would be grateful if you could point me in the right direction or suggest a nice article/tutorial. I'm trying to read from a file which has paths in the form

        one/two/three
        four
        five/six/seven/eight

    and I want to build a hash like

        VAR = {
            one : { two : { three : "" } }
            four : ""
            five : { six : { seven : { eight : "" } } }
        }

    The script I'm using currently is:

        my $finalhash = {};
        my @input = <>;

        sub constructHash {
            my ($hashrf, $line) = @_;
            @elements = split(/\//, $line);
            if (@elements > 1) {
                $hashrf->{shift @elements} = constructHash($hashrf->{$elements[0]}, @elements);
            } else {
                $hashrf->{shift @elements} = "";
            }
            return $hashrf;
        }

        foreach $lines (@input) {
            $finalhash = constructHash($finalhash, $lines);
        }
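
    Not a Perl fix, but a short sketch of the nesting idea in Python: walk each path left to right, create one nested level per component, and mark the final component as a leaf.

        def build(paths):
            tree = {}
            for path in paths:
                parts = path.strip().split("/")
                node = tree
                for part in parts[:-1]:
                    # descend, creating an empty dict for a new component
                    node = node.setdefault(part, {})
                node[parts[-1]] = ""   # leaf marker, as in the desired output
            return tree

        print(build(["one/two/three", "four", "five/six/seven/eight"]))
        # {'one': {'two': {'three': ''}}, 'four': '', 'five': {'six': {'seven': {'eight': ''}}}}

    The same walk works in Perl with hash references, where autovivification even creates the intermediate levels for you.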

    Read the article
