Search Results

Search found 3282 results on 132 pages for 'individual'.


  • Transitioning from Domain Authentication to SQL Server Authentication

    - by Albert Perrien
    Greetings all, I've run into a problem that has me stumped. I've put together a database in SQL Server Express, and I'm having a strange permissions problem. The database is on my development machine with a domain user: DOMAIN\albertp. My development database server is set for "SQL Server and Windows Authentication" mode. I can edit and query my database without any problems when I log in using Windows Authentication. However, when I log in as any user that uses SQL Server authentication (including sa), I get this message when I run queries against my database. For SELECT * FROM [Testing].[dbo].[AuditingReport] I get: Msg 18456, Level 14, State 1, Line 1 Login failed for user 'auditor'. I'm logged into the server from SQL Server Management Studio as 'auditor' and I don't see anything in the error log about the login failure. I've already run: Use Testing; Grant All to auditor; Go And I still get the same error. What permissions do I have to set for the database to be usable by others outside of my personal domain login? Or am I looking at the wrong problem? My ultimate goal is to have the database be accessible from a set of PHP pages, using either a common login (hence 'auditor') or a login specific to a set of individual users.
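
    A minimal T-SQL sketch of the usual setup for a SQL-authenticated login (the names and password below are illustrative, not the asker's actual configuration): the login must exist at the server level, be mapped to a user inside the target database, and be granted rights there, for example through a fixed database role.

        -- server level: the SQL login itself (hypothetical password)
        CREATE LOGIN auditor WITH PASSWORD = 'ChangeMe_1!';
        GO
        -- database level: map the login to a user and grant read access
        USE Testing;
        CREATE USER auditor FOR LOGIN auditor;
        EXEC sp_addrolemember 'db_datareader', 'auditor';
        GO

    If any of these pieces is missing, GRANT statements issued inside the database alone will not make the login usable, which is consistent with the 18456 login failure the SQL-authenticated users are hitting.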

    Read the article

  • How do I integrate GUI elements from different classes at runtime?

    - by thefourthdoctor
    I'm trying to "future-proof" an application that I'm writing by splitting out those elements that are likely to change over time. In my application I need to be able to adapt to changes in the output format (e.g. today I output to a CSV file, in the future I may need to output directly to a SQL Server database, or to a web service, etc.). I'm handling this by defining an abstract class ("OutputProvider") that I will subclass for each individual case. The one aspect of this that has me stumped is how to provide a configuration GUI that is specific to each concrete class. I have a settings dialog with a tab for output configuration. On that tab I intend to provide a drop-down to select the provider and a JPanel beneath it to hold the contents of the provider-specific GUI. How do I get the right GUI in that panel at runtime and have it behave correctly with respect to events? Also, a bonus would be if there was a way to do this such that in order to add support for new providers I could simply provide a new jar or class file to be dropped in a specific folder and the application could pick that up at startup. I'm using NetBeans and Swing.
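
    A rough Java sketch along the lines the question already describes (method names such as getConfigPanel and the write signature are assumptions, not an established API): each provider supplies its own Swing panel, and ServiceLoader discovers implementations from any jar dropped onto the classpath, which also covers the bonus requirement.

        public abstract class OutputProvider {
            public abstract String getName();
            public abstract javax.swing.JPanel getConfigPanel();   // provider-specific GUI, wires its own listeners
            public abstract void write(java.util.List<String[]> rows);
        }

        // in the settings dialog
        javax.swing.JPanel providerPanel = new javax.swing.JPanel(new java.awt.BorderLayout());
        javax.swing.JComboBox<OutputProvider> combo = new javax.swing.JComboBox<>();
        for (OutputProvider p : java.util.ServiceLoader.load(OutputProvider.class)) {
            combo.addItem(p);
        }
        combo.addActionListener(e -> {
            providerPanel.removeAll();
            providerPanel.add(((OutputProvider) combo.getSelectedItem()).getConfigPanel());
            providerPanel.revalidate();
            providerPanel.repaint();
        });

    Events stay correct because each concrete provider registers listeners on its own panel inside getConfigPanel(); discovery only requires the new jar to carry a META-INF/services entry naming the implementation class.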

    Read the article

  • Using maven to distribute a swing application that can have each dependency individually tracked

    - by tms
    I'm moving my project to Maven and eventually OSGi. I currently distribute the project as a large Zip file with all the dependencies. Although my project's code is only 20% of the total package, I have to redistribute all the dependencies. With smaller independent modules this may be even less. Looking here on Stack Overflow it seems that to keep my current plan the maven-assembly-plugin should do the trick. I was considering having a base installer that would look at an XML manifest, then collect all the libraries that needed to be updated. This would mean that libraries that change occasionally would be downloaded less often. This also makes sense for something like OSGi plugins (which could have independent release schedules). In essence I want my software to track and manage individual libraries, and download on demand (based on the manifest). I was wondering if there is a "maven way" of generating this manifest and publishing all the libraries to a website? I believe the deploy life-cycle would do the second step. As an alternative, is there an open-source Java library that does this type of deployment? I don't want to embed Maven or something larger with the distributed code. The application is not for coders; the simpler the better, and the smaller the installer the better.
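
    A hedged starting point for the "gather everything, then describe it" half of the problem (this is only a sketch; it does not generate the update manifest itself): the maven-dependency-plugin can copy every dependency into one folder at package time, and that folder listing (names plus versions) is a natural input for a hand-rolled manifest generator.

        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-dependency-plugin</artifactId>
          <executions>
            <execution>
              <id>gather-libs</id>
              <phase>package</phase>
              <goals>
                <goal>copy-dependencies</goal>
              </goals>
              <configuration>
                <outputDirectory>${project.build.directory}/libs</outputDirectory>
              </configuration>
            </execution>
          </executions>
        </plugin>

    Publishing the collected jars to a web server can then be scripted in the deploy phase, as the question suspects.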

    Read the article

  • Exception error in Erlang

    - by Jim
    So I've been using Erlang for the last eight hours, and I've spent two of those banging my head against the keyboard trying to figure out the exception error my console keeps returning. I'm writing a dice program to learn Erlang. I want to be able to call it from the console through the Erlang interpreter. The program accepts a number of dice, and is supposed to generate a list of values. Each value is supposed to be between one and six. I won't bore you with the dozens of individual micro-changes I made to try and fix the problem (random engineering) but I'll post my code and the error. The Source: -module(dice2). -export([d6/1]). d6(1) -> random:uniform(6); d6(Numdice) -> Result = [], d6(Numdice, [Result]). d6(0, [Finalresult]) -> {ok, [Finalresult]}; d6(Numdice, [Result]) -> d6(Numdice - 1, [random:uniform(6) | Result]). When I run the program from my console like so... dice2:d6(1). ...I get a random number between one and six as expected. However when I run the same function with any number higher than one as an argument I get the following exception... ** exception error: no function clause matching dice2:d6(1, [4|3]) ... I know I don't have a function with matching arguments, but I don't know how to write a function with variable arguments, and a variable number of arguments. I tried modifying the function in question like so... d6(Numdice, [Result]) -> Newresult = [random:uniform(6) | Result], d6(Numdice - 1, Newresult). ... but I got essentially the same error. Anyone know what is going on here?
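
    A small sketch of one idiomatic fix (not necessarily how the original was meant to be structured): the crash comes from wrapping the accumulator in an extra list and then consing onto something that is no longer a list, which produces an improper list like [4|3] that no clause can match. Keeping the accumulator a plain list avoids that:

        -module(dice2).
        -export([d6/1]).

        %% roll N six-sided dice, collecting results in a flat list
        d6(Numdice) -> d6(Numdice, []).

        d6(0, Acc) -> {ok, Acc};
        d6(N, Acc) -> d6(N - 1, [random:uniform(6) | Acc]).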

    Read the article

  • Rails: creating a custom data type, to use with generator classes and a bunch of questions related t

    - by Shyam
    Hi, After being productive with Rails for some weeks, I learned some tricks and got some experience with the framework. About 10 days ago, I figured out it is possible to build a custom data type for migrations by adding some code in the Table definition. Also, after learning a bit about floating points (and how evil they are) vs integers, the money gem and other possible solutions, I decided I didn't WANT to use the money gem, but instead try to learn more about programming and find a solution myself. Some suggestions said that I should be using integers, one for the whole numbers and one for the cents. When playing in script/console, I discovered how easy it is to work with calculations and arrays. But I am talking too much (the reason being to give some background). Right now, while playing with the scaffold generator (yes, I use it, because I like the way I can quickly set up a prototype while I am still researching my objectives), I like to use a DRY method. In my opinion, I should build a custom "object" that can hold two variables (Fixnum), one for the whole part, one for the cents. In my big dream, I would be able to do the following: script/generate scaffold Cake name:string description:text cost:mycustom Where mycustom should create two integer columns (one for wholes, one for cents). Right now I could do this by doing: script/generate scaffold Cake name:string description:text cost_w:integer cost_c:integer I also had the idea of creating a "cost model", which would hold two integer columns and add a cost_id column to my scaffold. But wouldn't that be an extra table that would cause some kind of performance penalty? And wouldn't that defy the purpose of the Cake model in the first place, because the costs are an attribute of individual Cake entries? The reason I would want such functionality is that I am thinking of having multiple "costs" inside my Rails application. Thank you for your feedback, comments and answers! I hope my message got through as understandable; my apologies for incorrect grammar or weird sentences, as English is not my native language.
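
    For the "two integers, no money gem" part, one hedged Ruby sketch (class, column and attribute names here are illustrative) is to keep the plain cost_w/cost_c integer columns from the second generator call and wrap them in a small value object with ActiveRecord's composed_of, so every model with a cost reuses the same aggregation instead of a separate costs table:

        class Cost
          attr_reader :whole, :cents
          def initialize(whole, cents)
            @whole, @cents = whole, cents
          end
          def to_s
            format('%d.%02d', whole, cents)
          end
        end

        class Cake < ActiveRecord::Base
          composed_of :cost,
                      :class_name => 'Cost',
                      :mapping    => [%w(cost_w whole), %w(cost_c cents)]
        end

    This keeps the columns on the Cake table itself, which sidesteps the extra-join concern raised above; a custom generator type would then only be syntactic sugar for emitting the two columns.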

    Read the article

  • Common Lisp condition system for transfer of control

    - by Ken
    I'll admit right up front that the following is a pretty terrible description of what I want to do. Apologies in advance. Please ask questions to help me explain. :-) I've written ETLs in other languages that consist of individual operations that look something like: // in class CountOperation IEnumerable<Row> Execute(IEnumerable<Row> rows) { var count = 0; foreach (var row in rows) { row["record number"] = count++; yield return row; } } Then you string a number of these operations together, and call The Dispatcher, which is responsible for calling Operations and pushing data between them. I'm trying to do something similar in Common Lisp, and I want to use the same basic structure, i.e., each operation is defined like a normal function that inputs a list and outputs a list, but lazily. I can define-condition a condition (have-value) to use for yield-like behavior, and I can run it in a single loop, and it works great. I'm defining the operations the same way, looping through the inputs: (defun count-records (rows) (loop for count from 0 for row in rows do (signal 'have-value :value `(:count ,count ,@row)))) The trouble is if I want to string together several operations, and run them. My first attempt at writing a dispatcher for these looks something like: (let ((next-op ...)) ;; pick an op from the set of all ops (loop (handler-bind ((have-value (...))) ;; records output from operation (setq next-op ...) ;; pick a new next-op (call next-op))) But restarts have only dynamic extent: each operation will have the same restart names. The restart isn't a Lisp object I can store, to store the state of a function: it's something you call by name (symbol) inside the handler block, not a continuation you can store for later use. Is it possible to do something like I want here? Or am I better off just making each operation function explicitly look at its input queue, and explicitly place values on the output queue?
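
    A rough Common Lisp sketch of the alternative hinted at in the closing question, since handler-bound conditions and restarts only have dynamic extent and cannot be stored as resumable state: represent each operation as a function from one generator to another, where a generator is a zero-argument closure that returns the next row or NIL when exhausted (read-rows below is a hypothetical source, not part of the original code).

        (defun count-records (next)
          "Wrap the generator NEXT, tagging each row with a running count."
          (let ((count -1))
            (lambda ()
              (let ((row (funcall next)))
                (when row
                  (incf count)
                  (list* :count count row))))))

        ;; stringing operations together is then ordinary composition:
        ;;   (let ((pipeline (count-records (read-rows source))))
        ;;     (loop for row = (funcall pipeline) while row do (consume row)))

    The dispatcher just pulls from the outermost closure, so each operation keeps its own state between rows without any non-local transfer of control.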

    Read the article

  • What OOP pattern to use when only adding new methods, not data?

    - by Jonathon Reinhart
    Hello everyone... In my app, I have to deal with several different "parameters", which derive from the IParameter interface and also the ParamBase abstract base class. I currently have two different parameter types, call them FooParameter and BarParameter, which both derive from ParamBase. Obviously, I can treat them both as IParameters when I need to deal with them generically, or detect their specific type when I need to handle their specific functionality. My question lies in specific FooParameters. I currently have a few specific ones with their own classes which derive from FooParameter, we'll call them FP12, FP13, FP14, etc. These all have certain characteristics, which make me treat them differently in the UI. (Most have names associated with the individual bits, or ranges of bits). Note that these specific, derived FP's have no additional data associated with them, only properties (which refer to the same data in different ways) or methods. Now, I'd like to keep all of these parameters in a Dictionary<String, IParameter> for easy generic access. The problem is, if I want to refer to a specific one with the special GUI functions, I can't write: FP12 fp12 = (FP12)paramList["FP12"] because you can't downcast to a derived type (rightfully so). But in my case, I didn't add any data, so the cast would theoretically work. What type of programming model should I be using instead? Thanks!
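
    One hedged C# sketch (helper and names invented for illustration): keep storing IParameter values, but route the cast through a single typed lookup. The cast itself is legal whenever the object placed in the dictionary was constructed as the derived type; it only fails if a plain FooParameter instance was stored, because adding no fields does not let the runtime reinterpret a base-class object as a derived one.

        using System;
        using System.Collections.Generic;

        public static class ParameterDictionaryExtensions
        {
            public static T Get<T>(this IDictionary<string, IParameter> parameters, string key)
                where T : class, IParameter
            {
                T typed = parameters[key] as T;
                if (typed == null)
                    throw new InvalidCastException(key + " is not stored as a " + typeof(T).Name);
                return typed;
            }
        }

        // usage: FP12 fp12 = paramList.Get<FP12>("FP12");

    If the stored objects really are plain FooParameter instances, the usual alternatives are extension methods on FooParameter or a thin wrapper that adds the bit-name behaviour on demand, since the derived classes only add behaviour, not data.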

    Read the article

  • Is there a way to control how pytest-xdist runs tests in parallel?

    - by superselector
    I have the following directory layout: runner.py lib/ tests/ testsuite1/ testsuite1.py testsuite2/ testsuite2.py testsuite3/ testsuite3.py testsuite4/ testsuite4.py The format of testsuite*.py modules is as follows: import pytest class testsomething: def setup_class(self): ''' do some setup ''' # Do some setup stuff here def teardown_class(self): ''' do some teardown''' # Do some teardown stuff here def test1(self): # Do some test1 related stuff def test2(self): # Do some test2 related stuff .... .... .... def test40(self): # Do some test40 related stuff if __name__=='__main__': pytest.main(args=[os.path.abspath(__file__)]) The problem I have is that I would like to execute the 'testsuites' in parallel, i.e. I want testsuite1, testsuite2, testsuite3 and testsuite4 to start execution in parallel, but individual tests within the testsuites need to be executed serially. When I use the 'xdist' plugin from py.test and kick off the tests using 'py.test -n 4', py.test gathers all the tests and randomly load balances the tests among 4 workers. This leads to the 'setup_class' method being executed for every test within a 'testsuitex.py' module (which defeats my purpose. I want setup_class to be executed only once per class and tests executed serially thereafter). Essentially what I want the execution to look like is: worker1: executes all tests in testsuite1.py serially worker2: executes all tests in testsuite2.py serially worker3: executes all tests in testsuite3.py serially worker4: executes all tests in testsuite4.py serially while worker1, worker2, worker3 and worker4 are all executed in parallel. Is there a way to achieve this in the 'pytest-xdist' framework? The only option that I can think of is to kick off different processes to execute each test suite individually within runner.py: def test_execute_func(testsuite_path): subprocess.process('py.test %s' % testsuite_path) if __name__=='__main__': #Gather all the testsuite names for each testsuite: multiprocessing.Process(test_execute_func,(testsuite_path,))
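
    A hedged sketch of the fallback runner described at the end (paths and names are illustrative), using real subprocess/multiprocessing APIs so each suite file runs serially inside its own py.test process while the processes themselves run in parallel:

        import glob
        import subprocess
        from multiprocessing import Pool

        def run_suite(suite_path):
            # one py.test process per suite file; tests inside it stay serial
            return subprocess.call(['py.test', suite_path])

        if __name__ == '__main__':
            suites = glob.glob('tests/testsuite*/testsuite*.py')
            pool = Pool(len(suites))
            exit_codes = pool.map(run_suite, suites)
            pool.close()
            pool.join()

    Newer pytest-xdist releases also grew a --dist loadfile mode that keeps all tests from one file on the same worker, which may make a custom runner unnecessary; check the version in use before building around subprocesses.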

    Read the article

  • fix a column's width when a list is bound to a DataGridView

    - by Andy
    I have a list that I have bound to a DataGridView. I want the first column to be a fixed size. The data is bound to the DataGridView and I can't seem to find a way to access an individual column's properties. If I try myDataGridView.Columns[0] I get an index out of bounds, since it says the column count is 0. private DataGridView setUpDataGrid(List<NVRlineVal> _NVRData) { //setup dataGridView DataGridView NVRDataGridView = new System.Windows.Forms.DataGridView(); NVRDataGridView.ColumnHeadersHeightSizeMode = System.Windows.Forms.DataGridViewColumnHeadersHeightSizeMode.AutoSize; NVRDataGridView.Dock = System.Windows.Forms.DockStyle.Fill; NVRDataGridView.AutoSizeColumnsMode = DataGridViewAutoSizeColumnsMode.Fill; NVRDataGridView.Location = new System.Drawing.Point(3, 3); NVRDataGridView.Name = "NVRDataGridView" + nvrIndex; NVRDataGridView.RowHeadersWidthSizeMode = System.Windows.Forms.DataGridViewRowHeadersWidthSizeMode.AutoSizeToDisplayedHeaders; NVRDataGridView.Size = new System.Drawing.Size(380, 401); NVRDataGridView.TabIndex = 0; NVRDataGridView.DataSource = _NVRData; var test = NVRDataGridView.Columns; NVRDataGridView.DataMember = "devState"; DataGridViewAutoSizeColumnMode.None; } Any ideas on how to have a fixed column width for only one of these columns, while the rest autosize?
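
    A sketch of one common approach (the width value is arbitrary): with a bound grid the columns only exist after data binding has finished, so size them in the DataBindingComplete event rather than right after setting DataSource.

        // attach before (or right after) assigning DataSource
        NVRDataGridView.DataBindingComplete += (s, e) =>
        {
            var grid = (DataGridView)s;
            if (grid.Columns.Count > 0)
            {
                grid.Columns[0].AutoSizeMode = DataGridViewAutoSizeColumnMode.None;
                grid.Columns[0].Width = 80;   // fixed; the other columns keep the Fill mode
            }
        };

    This also explains the index-out-of-bounds: before binding completes, Columns is still empty.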

    Read the article

  • Creating a tar file with checksums included

    - by wazoox
    Here's my problem : I need to archive to tar files a lot ( up to 60 TB) of big files (usually 30 to 40 GB each). I would like to make checksums ( md5, sha1, whatever) of these files before archiving; however not reading every file twice (once for checksumming, twice for tar'ing) is more or less a necessity to achieve a very high archiving performance (LTO-4 wants 120 MB/s sustained, and the backup window is limited). So I'd need some way to read a file, feeding a checksumming tool on one side, and building a tar to tape on the other side, something along : tar cf - files | tee tarfile.tar | md5sum - Except that I don't want the checksum of the whole archive (this sample shell code does just this) but a checksum for each individual file in the archive. I've studied GNU tar, Pax, Star options. I've looked at the source from Archive::Tar. I see no obvious way to achieve this. It looks like I'll have to hand-build something in C or similar to achieve what I need. Perl/Python/etc simply won't cut it performance-wise, and the various tar programs miss the necessary "plugin architecture". Does anyone know of any existing solution to this before I start code-churning ?
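
    For the "read each file only once" requirement, a rough single-pass sketch in Python (offered purely to illustrate the plumbing, not as a claim that it reaches LTO-4 speeds): wrap each file object so every block the tar writer reads is also fed to a hash, then print one checksum line per file.

        import hashlib
        import sys
        import tarfile

        class HashingReader(object):
            """File-like wrapper: every read() also updates the digest."""
            def __init__(self, fileobj, digest):
                self.fileobj = fileobj
                self.digest = digest
            def read(self, size=-1):
                data = self.fileobj.read(size)
                self.digest.update(data)
                return data

        with tarfile.open('backup.tar', 'w') as tar:   # in practice, stream this to tape
            for path in sys.argv[1:]:
                md5 = hashlib.md5()
                info = tar.gettarinfo(path)
                with open(path, 'rb') as f:
                    tar.addfile(info, HashingReader(f, md5))
                print('%s  %s' % (md5.hexdigest(), path))

    The same wrapper idea ports to C if interpreter overhead really is the bottleneck; the point is only that hashing and archiving can share one read pass.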

    Read the article

  • Using NULLs in matchup table

    - by TomWilsonFL
    I am working on the accounting portion of a reservation system (think limo company). In the system there are multiple objects that can either be paid or submit a payment. I am tracking all of these "transactions" in three tables called: tx, tx_cc, and tx_ch. tx generates a new tx_id (for transaction ID) and keeps the information about amount, validity, etc. Tx_cc and tx_ch keep the information about the credit card or check used, respectively, which link to other tables (credit_card and bank_account among others). This seems fairly normalized to me, no? Now here is my problem: The payment transaction can take place for a myriad of reasons. Either a reservation is being paid for, a travel agent that booked a reservation is being paid, a driver is being paid, etc. This results in multiple tables, one for each of the entities: agent_tx, driver_tx, reservation_tx, etc. They look like this: CREATE TABLE IF NOT EXISTS `driver_tx` ( `tx_id` int(10) unsigned zerofill NOT NULL, `driver_id` int(11) NOT NULL, `reservation_id` int(11) default NULL, `reservation_item_id` int(11) default NULL, PRIMARY KEY (`tx_id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; Now this transaction is for a driver, but could be applied to an individual item on the reservation or the entire reservation overall. Therefore I demand either reservation_id OR reservation_item_id to be null. In the future there may be other things which a driver is paid for, which I would also add to this table, defaulting to null. What is the rule on this? Opinion? Obviously I could break this out into MANY three column tables, but the amount of OUTER JOINing needed seems outrageous. Your input is appreciated. Peace, Tom
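
    One hedged way to push the rule into the schema instead of policing it in every code path (with the caveat that MySQL only enforces CHECK constraints from 8.0.16; older servers parse and silently ignore them, in which case a trigger or application-level check is needed):

        ALTER TABLE driver_tx
          ADD CONSTRAINT chk_driver_tx_target
          CHECK ((reservation_id IS NULL) <> (reservation_item_id IS NULL));

    This reads as "exactly one of the two targets is set"; if a driver payment may legitimately reference neither, relax it to NOT (reservation_id IS NOT NULL AND reservation_item_id IS NOT NULL).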

    Read the article

  • Ways to update a dependent table in the same MySQL transaction?

    - by codie
    I need to update two tables inside a single transaction. The individual queries look something like this: 1. INSERT INTO t1 (col1, col2) VALUES (val1, val2) ON DUPLICATE KEY UPDATE col2 = val2; If the above query causes an insert then I need to run the following statement on the second table: 2. INSERT INTO t2 (col1, col2) VALUES (val1, val2) ON DUPLICATE KEY UPDATE col2 = col2 + val2; otherwise, 3. UPDATE t2 SET col2 = col2 - old_val2 + val2 WHERE col1 = val1; -- old_val2 is the value of t1.col2 before it was updated Right now I run a SELECT on t1 first, to determine whether statement 1 will cause an insert or update on t1. Then I run statement 1 and either of 2 and 3 inside a transaction. What are the ways in which I can do all of these inside one transaction itself? The approach I was thinking of is the following: UPDATE t2, t1 set t2.col2 = t2.col2 - t1.col2 WHERE t1.col1 = t2.col2 and t1.col1 = val1; INSERT INTO t1 (col1, col2) VALUES (val1, val2) ON DUPLICATE KEY UPDATE col2 = val2; INSERT INTO t2, t1 (t2.col1, t2.col2) VALUES (t1.col1, t1.col2) ON DUPLICATE KEY UPDATE t2.col2 = t2.col2 + t1.col2 WHERE t1.col1 = t2.col2 and t1.col1 = val1; Unfortunately, there's no multi-table INSERT... ON DUPLICATE KEY UPDATE in MySQL 5.0. What else could I do?
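
    A rough stored-procedure sketch of one way to keep the branch inside a single transaction (parameter and column types are invented, and it assumes t1.col2 is NOT NULL for existing rows): lock the existing t1 row first so old_val2 is stable, then decide which t2 statement to run.

        DELIMITER //
        CREATE PROCEDURE upsert_pair(IN p_col1 INT, IN p_col2 INT)
        BEGIN
          DECLARE old_val2 INT DEFAULT NULL;
          START TRANSACTION;
          SELECT col2 INTO old_val2 FROM t1 WHERE col1 = p_col1 FOR UPDATE;
          INSERT INTO t1 (col1, col2) VALUES (p_col1, p_col2)
            ON DUPLICATE KEY UPDATE col2 = VALUES(col2);
          IF old_val2 IS NULL THEN
            -- t1 row did not exist before: statement 2
            INSERT INTO t2 (col1, col2) VALUES (p_col1, p_col2)
              ON DUPLICATE KEY UPDATE col2 = col2 + VALUES(col2);
          ELSE
            -- t1 row existed: statement 3
            UPDATE t2 SET col2 = col2 - old_val2 + p_col2 WHERE col1 = p_col1;
          END IF;
          COMMIT;
        END //
        DELIMITER ;

    The SELECT ... FOR UPDATE replaces the separate up-front SELECT and keeps old_val2 consistent with what the INSERT ... ON DUPLICATE KEY UPDATE will overwrite.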

    Read the article

  • How can a SVN::Error callback identify the context from which it was called

    - by Colin Fine
    I've written some fairly extensive Perl modules and scripts using the Perl bindings SVN::Client etc. Since the calls to SVN::Client are all deep in a module, I have overridden the default error handling. So far I have done so by setting $SVN::Error::handler = undef as described in [1], but this makes the individual calls a bit messy because you have to remember to make each call to SVN::Client in list context and test the first value for errors. I would like to switch to using an error handler I would write; but $SVN::Error::handler is global, so I can't see any way that my callback can determine where the error came from, and what object to set an error code in. I wondered if I could use a pool for this purpose: so far I have ignored pools as irrelevant to working in Perl, but if I call a SVN::Client method with a pool I have created, will any SVN::Error object be created in the same pool? Has anybody any knowledge or experience which bears on this? [1]: http://search.cpan.org/~mschwern/Alien-SVN-1.4.6.0/src/subversion/subversion/bindings/swig/perl/native/Core.pm#svn_error_t_-_SVN::Error SVN::Core POD

    Read the article

  • Add an event to HTML elements with a specific class.

    - by Juan C. Rois
    Hello everybody, I'm working on a modal window, and I want to make the function as reusable as possible. That said, I want to set a few anchor tags with a class equal to "modal", and when a particular anchor tag is clicked, get its Id and pass it to a function that will execute another function based on the Id that was passed. This is what I have so far: // this gets an array with all the elements that have a class equal to "modal" var anchorTrigger = document.getElementsByClassName('modal'); Then I tried to set the addEventListener for each item in the array by doing this: var anchorTotal = anchorTrigger.length; for(var i = 0; i < anchorTotal ; i++){ anchorTrigger.addEventListener('click', fireModal, false); } and then run the last function "fireModal" that will open the modal, like so: function fireModal(){ //some more code here ... } My problem is that in the "for" loop, I get an error saying that anchorTrigger.addEvent ... is not a function. I can tell that the error might be related to the fact that I'm trying to set up the "addEventListener" on an array as opposed to individual elements, but I don't know what I'm supposed to do. Any help would be greatly appreciated.
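
    A sketch of the fix: getElementsByClassName returns a collection, so the listener has to be attached to each element in it, not to the collection object itself (which is why addEventListener "is not a function" there).

        var anchorTrigger = document.getElementsByClassName('modal');

        for (var i = 0; i < anchorTrigger.length; i++) {
            anchorTrigger[i].addEventListener('click', fireModal, false);
        }

        function fireModal(event) {
            var anchorId = this.id;   // the id of the anchor that was clicked
            // open the modal that corresponds to anchorId ...
        }

    Inside the handler, this (or event.currentTarget) is the clicked anchor, which covers the "get its Id and pass it along" requirement without any extra bookkeeping.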

    Read the article

  • Android listing design problem with cursors

    - by Priyank
    Hi. I have the following situation in my Android app. I have an app that fetches messages from the inbox, sent items and drafts based on search keywords. I used to accomplish this by fetching cursors for each manually, based on the user's selection, populating them into a custom data holder object, filtering those results based on the given keywords, and then manually rendering the view with the respective data. Someone suggested that I should use a custom CursorAdapter to bind the view and my cursor data, so I tried doing that. Now what I am doing is this: fetch individual cursors for inbox, sent items and drafts, merge them into one using MergeCursor, and then pass that back to my CursorAdapter implementation. Now where or how do I filter my cursor data based on keywords, given that binding renders the rows directly to the list view? Also, post-fetch operations like fetching the sender's contact picture are things I do not want to move into the adapter. If I do all this processing in the adapter, it'll be heavy and ugly. How could I design this better so that it performs well and the responsibilities are properly shared and distributed? Any ideas will be helpful.

    Read the article

  • Best approach for Java/Maven/JPA/Hibernate build with multiple database vendor support?

    - by HDave
    I have an enterprise application that uses a single database, but the application needs to support MySQL, Oracle, and SQL Server as installation options. To try to remain portable we are using JPA annotations with Hibernate as the implementation. We also have a test-bed instance of each database running for development. The app is building nicely in Maven, and I've played around with the hibernate3-maven-plugin and can auto-generate DDL for a given database dialect. What is the best way to approach this so that individual developers can easily test against all three databases and our Hudson-based CI server can build things properly? More specifically: 1) I thought the hbm2ddl goal in the hibernate3-maven-plugin would just generate a schema file, but apparently it connects to a live database and attempts to create the schema. Is there a way to have this just create the schema file for each database dialect without connecting to a database? 2) If the hibernate3-maven-plugin insists on actually creating the database schema, is there a way to have it drop the database and recreate it before creating the schema? 3) I am thinking that each developer (and the Hudson build machine) should have their own separate database on each database server. Is this typical? 4) Will developers have to run Maven three times...once for each database vendor? If so, how do I merge the results on the build machine? 5) There is an hbm2doc goal within hibernate3-maven-plugin. It seems overkill to run this three times...I gotta believe it'd be nearly identical for each database.
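
    A hedged sketch for the surrounding workflow rather than the plugin internals (property names are illustrative): one Maven profile per vendor, selected with -P, lets each developer and each Hudson job pick a dialect and connection without touching the POM, and a ${user.name} suffix gives every developer their own schema (question 3).

        <profiles>
          <profile>
            <id>mysql</id>
            <properties>
              <hibernate.dialect>org.hibernate.dialect.MySQL5InnoDBDialect</hibernate.dialect>
              <jdbc.url>jdbc:mysql://devdb:3306/app_${user.name}</jdbc.url>
            </properties>
          </profile>
          <profile>
            <id>oracle</id>
            <properties>
              <hibernate.dialect>org.hibernate.dialect.Oracle10gDialect</hibernate.dialect>
            </properties>
          </profile>
          <profile>
            <id>sqlserver</id>
            <properties>
              <hibernate.dialect>org.hibernate.dialect.SQLServerDialect</hibernate.dialect>
            </properties>
          </profile>
        </profiles>

    For question 4, Hudson can then run the same build once per profile (a matrix-style job) instead of merging three vendor runs into one.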

    Read the article

  • strange memory error when deleting object from Core Data

    - by llloydxmas
    I have an application that downloads an XML file, parses the file, and creates Core Data objects while doing so. In the parse code I have a function called 'emptydatacontext' that removes all items from Core Data before creating replacements from the XML file. This method looks like this: -(void) emptyDataContext { NSMutableArray* mutableFetchResults = [CoreDataHelper getObjectsFromContext:@"Condition" :@"road" :NO :managedObjectContext]; NSFetchRequest * allCon = [[NSFetchRequest alloc] init]; [allCon setEntity:[NSEntityDescription entityForName:@"Condition" inManagedObjectContext:managedObjectContext]]; NSError * error = nil; NSArray * conditions = [managedObjectContext executeFetchRequest:allCon error:&error]; [allCon release]; for (NSManagedObject * condition in conditions) { [managedObjectContext deleteObject:condition]; } } The first time this runs it deletes all objects and functions as it should - creating new objects from the XML file. I created an 'update' button that starts the exact same process, retrieving and parsing the file just as it did the first time. All is well until it's time to delete the Core Data objects again. This 'deleteObject' call creates an "EXC_BAD_ACCESS" error each time. This only happens on the second time through. See this image for the debugger window as it appears when walking through the deletion FOR loop on the second iteration. Conditions is the fetched array of 7 objects with the objects below. Condition should be an individual condition. As you can see 'condition' does not match any of the objects in the 'conditions' array. I'm sure this is why I'm getting the memory access errors. Just not sure why this fetch (or the FOR) is returning a wrong reference. All the code that successfully performs this function on the first iteration is used in the second but with very different results. Thanks in advance for the help!
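
    A hedged Objective-C sketch of the usual hygiene for this delete-then-reload pattern (it is not a confirmed diagnosis of the crash, but stale, never-saved deletions from the first pass are a common cause of EXC_BAD_ACCESS on the second): fetch only faults, delete, and save before re-parsing.

        NSFetchRequest *allCon = [[NSFetchRequest alloc] init];
        [allCon setEntity:[NSEntityDescription entityForName:@"Condition"
                                      inManagedObjectContext:managedObjectContext]];
        [allCon setIncludesPropertyValues:NO];   // faults are enough for deletion

        NSError *error = nil;
        NSArray *conditions = [managedObjectContext executeFetchRequest:allCon error:&error];
        [allCon release];

        for (NSManagedObject *condition in conditions) {
            [managedObjectContext deleteObject:condition];
        }
        if (![managedObjectContext save:&error]) {
            NSLog(@"Delete failed: %@", error);
        }

    The unused mutableFetchResults array from CoreDataHelper at the top of emptyDataContext can simply be dropped.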

    Read the article

  • emacs lisp mapcar doesn't apply function to all elements?

    - by Stephen
    Hi, I have a function that takes a list and replaces some elements. I have constructed it as a closure so that the free variable cannot be modified outside of the function. (defun transform (elems) (lexical-let ( (elems elems) ) (lambda (seq) (let (e) (while (setq e (car elems)) (setf (nth e seq) e) (setq elems (cdr elems))) seq)))) I call this on a list of lists. (defun tester (seq-list) (let ( (elems '(1 3 5)) ) (mapcar (transform elems) seq-list))) => ((10 1 8 3 6 5 4 3 2 1) ("a" "b" "c" "d" "e" "f")) It does not seem to apply the function to the second element of the list provided to tester(). However, if I explicitly apply this function to the individual elements, it works... (defun tester (seq-list) (let ( (elems '(1 3 5)) ) (list (funcall (transform elems) (car seq-list)) (funcall (transform elems) (cadr seq-list))))) => ((10 1 8 3 6 5 4 3 2 1) ("a" 1 "c" 3 "e" 5)) If I write a simple function using the same concepts as above, mapcar seems to work... What could I be doing wrong? (defun transform (x) (lexical-let ( (x x) ) (lambda (y) (+ x y)))) (defun tester (seq) (let ( (x 1) ) (mapcar (transform x) seq))) (tester (list 1 3)) => (2 4) Thanks
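
    A sketch of one fix (it needs the cl package, which the existing lexical-let already implies): the single closure returned by transform shares one elems list across every call mapcar makes, and the while/setq loop consumes it, so the second sequence sees an empty list and comes back unchanged. Iterating without consuming the captured list keeps the closure reusable.

        (require 'cl)

        (defun transform (elems)
          (lexical-let ((elems elems))
            (lambda (seq)
              ;; walk the positions without destroying the shared list
              (dolist (e elems seq)
                (setf (nth e seq) e)))))

    The explicit funcall version in the question works only because it calls (transform elems) twice and therefore gets a fresh closure, with a fresh captured list, for each sequence.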

    Read the article

  • array of structures, or structure of arrays?

    - by Jason S
    Hmmm. I have a table which is an array of structures I need to store in Java. The naive don't-worry-about-memory approach says do this: public class Record { final private int field1; final private int field2; final private long field3; /* constructor & accessors here */ } List<Record> records = new ArrayList<Record>(); If I end up using a large number (10^6) of records, where individual records are accessed occasionally, one at a time, how would I figure out how the preceding approach (an ArrayList) would compare with an optimized approach for storage costs: public class OptimizedRecordStore { final private int[] field1; final private int[] field2; final private long[] field3; Record getRecord(int i) { return new Record(field1[i],field2[i],field3[i]); } /* constructor and other accessors & methods */ } edit: assume the # of records is something that is changed infrequently or never I'm probably not going to use the OptimizedRecordStore approach, but I want to understand the storage cost issue so I can make that decision with confidence. obviously if I add/change the # of records in the OptimizedRecordStore approach above, I either have to replace the whole object with a new one, or remove the "final" keyword. kd304 brings up a good point that was in the back of my mind. In other situations similar to this, I need column access on the records, e.g. if field1 and field2 are "time" and "position", and it's important for me to get those values as an array for use with MATLAB, so I can graph/analyze them efficiently.
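
    A rough back-of-envelope comparison for 10^6 records (the figures are JVM-dependent assumptions: roughly 12-16 bytes of object header, 8-byte alignment, 4-byte compressed references):

        Record objects:  ~16 (header) + 4 + 4 + 8 (fields) = ~32 bytes each,
                         plus a 4-byte reference slot in the ArrayList
                         -> on the order of 36 MB
        Three arrays:    (4 + 4 + 8) * 10^6 = ~16 MB, plus three array headers

    So the structure-of-arrays layout roughly halves the footprint at this size, at the price of allocating a short-lived Record on every getRecord call; for column-style access (the MATLAB export case) it also hands back ready-made primitive arrays.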

    Read the article

  • Matlab matrix translation and rotation multiple times

    - by pinnacler
    I have a map of individual trees from a forest stored as x,y points in a matrix. I call it fixedPositions. It's Cartesian and (0,0) is the origin. I would like 0/360 degrees to be the top of the screen and 90 degrees to be to the right. Given a velocity and a heading, i.e. .5 m/s and 60 degrees (2 o'clock equivalent on a watch), how do I rotate those x,y points, so that the new origin is centered at (.5cos(60),.5sin(60)) and 60 degrees is now at the top of the screen? Then if I were to give you another heading and speed, i.e. 0 degrees and 2m/s, it should calculate it from the last point, not the original fixedPositions origin. I've wasted my day trying to figure this out. I wish I had taken matrix algebra, but I'm at a loss. I tried doing cos(30) and even that wouldn't compute correctly; after an hour I realized the trig functions were working in radians.
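
    A hedged MATLAB sketch of the two steps (sign conventions are easy to get backwards, so verify it against one hand-computed point before trusting it; sind/cosd take degrees directly):

        theta = 60;  v = 0.5;                      % heading (deg, clockwise from "up") and speed
        step  = v * [sind(theta), cosd(theta)];    % observer's displacement in world x,y
        R     = [cosd(theta) -sind(theta);         % rotation that brings the heading to "up"
                 sind(theta)  cosd(theta)];
        n = size(fixedPositions, 1);
        viewPositions = (fixedPositions - repmat(step, n, 1)) * R';

    For the next heading and speed, accumulate in world coordinates: add the new step onto the previous observer position and rebuild R from the new heading, rather than re-deriving everything from the original fixedPositions origin.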

    Read the article

  • table subtraction challenge

    - by Valentin
    I have a challenge that I haven't overcome in the last two days using stored procedures and SQL 2008. I took several approaches but all fell short. One very interesting approach was using a table subtraction; it's really all about table subtraction. I was wondering if you could help me crack this one. Here is the challenge: two tables, 1Testdb and 2Testdb. My first step was to select ID relationships ([2Testdb].Acc_id) on table 2Testdb for one given individual ([2Testdb].Bus_id). Then query table 1Testdb for records not matching my original selection from 2Testdb. But other approaches are welcome. Data and Structures: USE [Challengedb] GO SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO CREATE TABLE [dbo].[1Testdb]( [Acc_id] [uniqueidentifier] NULL, [Name] [varchar](10) NULL ) ON [PRIMARY] GO CREATE TABLE [dbo].[2Testdb]( [Acc_id] [uniqueidentifier] NULL, [Bus_id] [uniqueidentifier] NULL ) ON [PRIMARY] GO Records on 1Testdb: 34455F60-9474-4521-804E-66DB39A579F3, John C23523F6-2309-4F58-BB3F-EF7486C7AF8B, Pete DC711615-3BE4-4B31-9EF2-B1314185CA62, Dave E3AAB073-2398-476D-828B-92829F686A4C, Adam Records on 2Testdb: (Relationship table, ex. Friend relationships) Record #1: DC711615-3BE4-4B31-9EF2-B1314185CA62, 34455F60-9474-4521-804E-66DB39A579F3 Record #2: E3AAB073-2398-476D-828B-92829F686A4C, 34455F60-9474-4521-804E-66DB39A579F3 Record #3: DC711615-3BE4-4B31-9EF2-B1314185CA62, E3AAB073-2398-476D-828B-92829F686A4C Record #4: E3AAB073-2398-476D-828B-92829F686A4C, DC711615-3BE4-4B31-9EF2-B1314185CA62 Challenge: Select from table 1Testdb only those distinct records that do not have a relationship with John [34455F60-9474-4521-804E-66DB39A579F3] on table 2Testdb. Expected result should be (who does John not have a relationship with?): C23523F6-2309-4F58-BB3F-EF7486C7AF8B, Pete Thank you, Valentin
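
    One hedged way to express the subtraction as an anti-join (John's id is written out literally to keep the sketch readable; in a stored procedure it would be a parameter):

        SELECT t1.Acc_id, t1.Name
        FROM [dbo].[1Testdb] AS t1
        WHERE t1.Acc_id <> '34455F60-9474-4521-804E-66DB39A579F3'
          AND NOT EXISTS (
                SELECT 1
                FROM [dbo].[2Testdb] AS t2
                WHERE (t2.Acc_id = t1.Acc_id AND t2.Bus_id = '34455F60-9474-4521-804E-66DB39A579F3')
                   OR (t2.Bus_id = t1.Acc_id AND t2.Acc_id = '34455F60-9474-4521-804E-66DB39A579F3'));

    Against the sample data this leaves only Pete, since Dave and Adam each have a relationship row pointing at John and John himself is excluded explicitly.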

    Read the article

  • Dealing with array of IntPtr

    - by Padu Merloti
    I think I'm close and I bet the solution is something stupid. I have a C++ native DLL where I define the following function: DllExport bool __stdcall Open(const char* filePath, int *numFrames, void** data) { //creates the list of arrays here... don't worry, lifetime is managed somewhere else //foreach item of the list: { BYTE* pByte = GetArray(i); //here's where my problem lives *(data + i * sizeofarray) = pByte; } *numFrames = total number of items in the list return true; } Basically, given a file path, this function creates a list of byte arrays (BYTE*) and should return a list of pointers via the data param, each pointing to a different byte array. I want to pass an array of IntPtr from C# and be able to marshal each individual array in order. Here's the code I'm using: [DllImport("mydll.dll",EntryPoint = "Open")] private static extern bool MyOpen( string filePath, out int numFrames, out IntPtr[] ptr); internal static bool Open( string filePath, out int numFrames, out Bitmap[] images) { var ptrList = new IntPtr[512]; MyOpen(filePath, out numFrames, out ptrList); images = new Bitmap[numFrames]; var len = 100; //for sake of simplicity for (int i=0; i<numFrames;i++) { var buffer = new byte[len]; Marshal.Copy(ptrList[i], buffer, 0, len); images[i] = CreateBitmapFromBuffer(buffer, height, width); } return true; } Problem is in my C++ code. When I assign *(data + i * sizeofarray) = pByte; it corrupts the array of pointers... what am I doing wrong?
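
    A sketch of the native side with the stride arithmetic removed (multiplying by the array size makes every write after the first land far outside the caller's pointer array, which is what corrupts it): data already points at the caller's array of pointers, so slot i is simply data[i].

        // inside Open(), assuming frameCount frames are available
        for (int i = 0; i < frameCount; ++i)
        {
            BYTE* pByte = GetArray(i);   // lifetime managed elsewhere, as noted
            data[i] = pByte;             // store the i-th buffer's address in slot i
        }
        *numFrames = frameCount;
        return true;

    On the managed side the array also has to marshal both ways: declaring the parameter as [In, Out] IntPtr[] ptr instead of out IntPtr[] ptr lets the pre-allocated new IntPtr[512] be filled in place by the DLL.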

    Read the article

  • Is it possible to refer to metadata of the target from within the target implementation in MSBuild?

    - by mark
    Dear ladies and sirs. My msbuild targets file contains the following section: <ItemGroup> <Targets Include="T1"> <Project>A\B.sln</Project> <DependsOnTargets>The targets T1 depends on</DependsOnTargets> </Targets> <Targets Include="T2"> <Project>C\D.csproj</Project> <DependsOnTargets>The targets T2 depends on</DependsOnTargets> </Targets> ... </ItemGroup> <Target Name="T1" DependsOnTargets="The targets T1 depends on"> <MSBuild Projects="A\B.sln" Properties="Configuration=$(Configuration)" /> </Target> <Target Name="T2" DependsOnTargets="The targets T2 depends on"> <MSBuild Projects="C\D.csproj" Properties="Configuration=$(Configuration)" /> </Target> As you can see, A\B.sln appears twice: as Project metadata of T1 in the ItemGroup section, and in the Target statement itself, passed to the MSBuild task. I am wondering whether I can remove the second instance and replace it with a reference to the Project metadata of the item whose name is given to the Target? Exactly the same question applies to the %(Targets.DependsOnTargets) metadata; it is mentioned twice, much like the %(Targets.Project) metadata. Thanks. EDIT: I should probably describe the constraints which must be satisfied by the solution: I want to be able to build individual projects with ease. Today I can simply execute msbuild file.proj /t:T1 to build the T1 target and I wish to keep this ability. I wish to emphasize that some projects depend on others, so the DependsOnTargets attribute is really necessary for them.

    Read the article

  • SQL-Server: Is there an equivalent of a trigger for general stored procedure execution

    - by Arj
    Hi All, Hope you can help. Is there a way to reliably detect when a stored proc is being run on SQL Server without altering the SP itself? Here's the requirement. We need to track users running reports from our enterprise data warehouse as the core product we use doesn't allow for this. Both core product reports and a slew of in-house ones we've added all return their data from individual stored procs. We don't have a practical way of altering the parts of the product webpages where reports are called from. We also can't change the stored procs for the core product reports. (It would be trivial to add a logging line to the start/end of each of our in-house ones). What I'm trying to find, therefore, is whether there's a way in SQL Server (2005 / 2008) to execute a logging stored proc whenever any other stored procedure runs, without altering those stored procedures themselves. We have general control over the SQL Server instance itself as it's local; we just don't want to change the product stored procs themselves. Anyone have any ideas? Is there a kind of "stored proc executing trigger"? Is there an event model for SQL Server that we can hook custom .Net code into? (Just to discount it from the start, we want to try and make a change to SQL Server rather than get into capturing the report being run from the product's webpages, etc.) Thoughts appreciated Thanks
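
    One low-friction option to sketch (SQL Server 2008 and later; it counts executions but cannot attribute them to a user, so it may only cover part of the requirement): poll sys.dm_exec_procedure_stats on a schedule and diff execution_count per report proc. Nothing in the product has to change.

        SELECT OBJECT_NAME(object_id, database_id) AS proc_name,
               execution_count,
               last_execution_time
        FROM   sys.dm_exec_procedure_stats
        WHERE  database_id = DB_ID('DataWarehouse');   -- hypothetical database name

    Counts reset when a plan leaves the cache, so the poll interval has to be reasonably short; if per-user attribution is required, a server-side trace or SQL Audit on the relevant procedures is the heavier but more complete route.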

    Read the article

  • Best practice to create WPF wrapper application displaying screens on demand.

    - by Robbie
    Context: I'm developing a WPF application which will contain a lot of different "screens". Each screen contains a root container, which in turn contains all the visual elements. Some elements trigger events (e.g., checkboxes), a screen has individual resources, etc. The main application is a "wrapper" around these screens: it contains a menubar, toolbar, statusbar and the like (in a DockPanel) and space to display one screen. Through the menubar, the user can choose which screen he wants to display. Goal: I want to dynamically load & display & (event)handle one screen in the space in the main application. I don't want to copy & paste all the "wrapper" stuff into all the different screens. And as I have many complex screens (around 300 - luckily auto-generated), I don't want to load all of them at the start of the application, but only upon request. Question: What do you recommend as the best way to realize this? What kind of things should I use and investigate: Pages, Windows or UserControls for the screens? Does this affect the event handling?
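
    One hedged C# sketch of the on-demand part (class and control names are invented): register a factory per screen so nothing is constructed until its menu item is chosen, then swap the created control into a ContentControl that sits in the free cell of the DockPanel.

        // ScreenHost is a ContentControl declared in the main window's XAML
        private readonly Dictionary<string, Func<UserControl>> screens =
            new Dictionary<string, Func<UserControl>>
            {
                { "Screen001", () => new Screen001() },   // hypothetical screen classes
                { "Screen002", () => new Screen002() },
            };

        private void ShowScreen(string name)
        {
            ScreenHost.Content = screens[name]();   // built only when first requested
        }

    With UserControl-based screens, each screen keeps its own resources and event handlers, and the wrapper (menu, toolbar, status bar) is written exactly once; caching the created instances in a second dictionary is an easy extension if screens must keep state between visits.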

    Read the article
