Search Results

Search found 10242 results on 410 pages for 'stored proc'.

  • Choose an XML node in SQL Server based on the max value of a child element

    - by Jay
    I am trying to select some values from a SQL Server 2005 XML datatype column, based on the max date located in a child node. I have multiple rows with XML similar to the following stored in a field in SQL Server:

    ```xml
    <user>
      <name>Joe</name>
      <token>
        <id>ABC123</id>
        <endDate>2013-06-16 18:48:50.111</endDate>
      </token>
      <token>
        <id>XYZ456</id>
        <endDate>2014-01-01 18:48:50.111</endDate>
      </token>
    </user>
    ```

    I want to perform a select against this XML column that determines the max date within the token elements and returns a row similar to the result below for each record:

    Joe XYZ456 2014-01-01 18:48:50.111

    I tried to find a max function for XPath that would allow me to select the correct token element, but I couldn't find one that worked. I also tried the SQL MAX function, but I wasn't able to get it working with that method either. With a single token it of course works fine, but with more than one I get NULL, most likely because the query doesn't know which date to pull. I was hoping there would be a way to specify a where clause [max(endDate)] on the token element, but I haven't found a way to do that. Here is an example of the query that works when I only have a single token:

    ```sql
    SELECT
        XMLCOL.query('user/name').value('.', 'NVARCHAR(20)')      AS name,
        XMLCOL.query('user/token/id').value('.', 'NVARCHAR(20)')  AS id,
        XMLCOL.query('user/token/endDate').value('.', 'DATETIME') AS endDate
    FROM MYTABLE
    ```
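
    One workable approach (a sketch, not from the original post): shred the token elements with nodes() and keep only the newest one per row with TOP 1 ... ORDER BY inside CROSS APPLY. Table and column names are the question's own; the DATETIME conversion assumes the endDate text converts cleanly - otherwise sort on the raw NVARCHAR, which stays chronological for this fixed-width format.

    ```sql
    SELECT latest.name, latest.id, latest.endDate
    FROM MYTABLE
    CROSS APPLY (
        SELECT TOP 1
            XMLCOL.value('(user/name)[1]', 'NVARCHAR(20)') AS name,
            tok.t.value('(id)[1]',         'NVARCHAR(20)') AS id,
            tok.t.value('(endDate)[1]',    'DATETIME')     AS endDate
        FROM XMLCOL.nodes('user/token') AS tok(t)
        ORDER BY tok.t.value('(endDate)[1]', 'DATETIME') DESC
    ) AS latest;
    ```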

  • Running mysql query using node blocks the whole process and then times out

    - by lobengula3rd
    I have a Node.js script that uses the mysql npm module (Felix's). I have a stored procedure in my DB which I call when the user selects an option to create, in effect, their own instance of the program. The user chooses how long that data should be initialized for, between 1 and 2 years. If they choose 1 year, the query inserts around 20,000 rows into one table. Against a local DB this takes around 30 seconds (which I suppose is reasonable, since it's a big query that should run only once every year or two). For some reason, though, my Node script freezes as if it can't handle any more calls from other users. The even worse problem is that after about 2 minutes my client UI gets an error from the server, at which point not all the data that was supposed to enter the DB has been inserted. Only after waiting another minute or so does all the data finally reach the DB, and only then does the server accept new requests. This is my connection:

    ```js
    this.connection = mysql.createConnection({
        host     : '********rds.amazonaws.com',
        user     : 'admin',
        password : '******',
        database : '*****'
    });
    ```

    and this is my query function:

    ```js
    this.createCourts = function (req, res, next) {
        connection.query('CALL filldates("' + req.body['startDate'] + '","' +
            req.body['endDate'] + '","' + req.body['numOfCourts'] + '","' +
            req.body['duration'] + '","' + req.body['sundayOpen'] + '","' +
            req.body['mondayOpen'] + '","' + req.body['tuesdayOpen'] + '","' +
            req.body['wednesdayOpen'] + '","' + req.body['thursdayOpen'] + '","' +
            req.body['fridayOpen'] + '","' + req.body['saturdayOpen'] + '","' +
            req.body['sundayClose'] + '","' + req.body['mondayClose'] + '","' +
            req.body['tuesdayClose'] + '","' + req.body['wednesdayClose'] + '","' +
            req.body['thursdayClose'] + '","' + req.body['fridayClose'] + '","' +
            req.body['saturdayClose'] + '");',
            function (err) {
                if (err) {
                    console.log(err);
                } else {
                    return res.send(200);
                }
            });
    };
    ```

    What am I missing here? As I understand it, connection.query should be async, so why is it actually blocking my Node script? Thanks.
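
    A hedged sketch of one likely fix (not from the original post): a single mysql connection executes queries serially, so every other request queues behind the long CALL even though the API is async; a connection pool isolates it, and ? placeholders close the injection hole that the string concatenation opens. Connection settings and parameter names are copied from the question.

    ```js
    var mysql = require('mysql');

    var pool = mysql.createPool({
        host            : '********rds.amazonaws.com',
        user            : 'admin',
        password        : '******',
        database        : '*****',
        connectionLimit : 10  // the long CALL ties up one connection, not the app
    });

    this.createCourts = function (req, res, next) {
        var b = req.body;
        pool.query(
            'CALL filldates(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)',
            [b.startDate, b.endDate, b.numOfCourts, b.duration,
             b.sundayOpen, b.mondayOpen, b.tuesdayOpen, b.wednesdayOpen,
             b.thursdayOpen, b.fridayOpen, b.saturdayOpen,
             b.sundayClose, b.mondayClose, b.tuesdayClose, b.wednesdayClose,
             b.thursdayClose, b.fridayClose, b.saturdayClose],
            function (err) {
                if (err) { console.log(err); return res.send(500); }
                return res.send(200);
            });
    };
    ```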

  • How to get MySQL database to appear on index.php

    - by Teddy Truong
    Hi, I have a submission form on my website (index.php), and the user submissions are stored in a MySQL database. Right now the user submits a post and is then sent to update.php, which shows what they inputted. However, I want all of the data in the MySQL database to be shown on index.php. It's a lot like a comment system: the user submits a post and sees their post above the other submitted posts, all on the same page. I think I'm missing AJAX...? Here is the code for index.php:

    ```html
    <div align="center">
      <p>&nbsp;</p>
      <h2 align="center" class="Title"><em><strong>REDACTED</strong></em></h2>
      <form id="form1" name="form1" method="post" action="update.php">
        <hr />
        <label><br />
          <form action="update.php" method="post">
            REDACTED: <input type="text" name="text" />
            <input type="submit" />
          </form>
        </label>
      </form>
    </div>
    ```

    On update.php I have this:

    ```php
    <?php
    $text = $_POST['text'];
    $myString = "REDACTED";
    mysql_connect("db----.net", "-----3", "------------") or die('Error: ' . mysql_error());
    mysql_select_db("-----------");
    $query = "INSERT INTO TextArea (ID, text) VALUES ('NULL', '" . $text . "')";
    mysql_query($query) or die('Error updating database');
    echo " $myString ", " $text ";
    ?>
    ```

    Thanks a lot!
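
    One way to get there without AJAX (a sketch reusing the question's table and placeholder credentials): have update.php finish with header('Location: index.php'); after the INSERT, and have index.php read the table back out below the form. The mysql_* calls mirror the original code, though mysqli or PDO would be the modern choice.

    ```php
    <?php
    // index.php, below the submission form
    mysql_connect("db----.net", "-----3", "------------") or die('Error: ' . mysql_error());
    mysql_select_db("-----------");

    $result = mysql_query("SELECT text FROM TextArea ORDER BY ID DESC")
        or die('Error reading database');
    while ($row = mysql_fetch_assoc($result)) {
        // newest posts first, escaped so submitted HTML can't run
        echo '<p>' . htmlspecialchars($row['text']) . '</p>';
    }
    ?>
    ```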

  • Different EF Property DataType than Storage Layer Possible?

    - by dj_kyron
    Hi, I am putting together a WCF Data Service for PatientEntities using Entity Framework. My solution needs to address these requirements:

    - Property DateOfBirth of entity Patient is stored in SQL Server as a string. It would be ideal if the entity class did not also use the string type, but rather DateTime (I would expect this to be possible, since we're abstracting away from the storage layer). Where could a conversion mechanism be put in place to convert between DateTime and string, so that the entity and SQL Server stay in sync? I cannot change the storage layer's structure, so I have to work around it.
    - WCF Data Services (read-only, so no need for saving changes) need to be used, since clients will consume the service with LINQ expressions. They can generate results for any query scenario they need, rather than being constrained by a single method such as GetPatient(int ID).
    - I've tried DTOs, but ran into the problem of mapping the ObjectContext to a DTO; I don't think that is theoretically possible, or it is too complicated if it is.
    - I've tried self-tracking entities, but they require the metadata from the .edmx file, if I'm correct, and that doesn't allow a different property data type.
    - I also want to add customizations to my entity getters, so that a property MRN of type string has .Replace("MR~", string.Empty) applied before it is returned. I can add this to the getter methods, but Entity Framework will overwrite it the next time it regenerates the entity classes. Is there a permanent place I can put these?

    Should I use POCO instead? How would that work with WCF Data Services? Where would the service grab the metadata?
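
    One hedged option for the getter customizations: EF's generated entities are partial classes, so additions made in a separate file survive regeneration. A sketch (type and property names follow the question; the wrapper properties are hypothetical additions, and WCF Data Services only exposes mapped properties, so these would serve server-side logic unless projected explicitly):

    ```csharp
    public partial class Patient
    {
        // Parsed view of the string-typed DateOfBirth column.
        public DateTime? DateOfBirthValue
        {
            get
            {
                DateTime parsed;
                return DateTime.TryParse(this.DateOfBirth, out parsed)
                    ? (DateTime?)parsed
                    : null;
            }
        }

        // MRN with the storage prefix stripped.
        public string CleanMrn
        {
            get { return this.MRN.Replace("MR~", string.Empty); }
        }
    }
    ```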

  • Access User Meta Data on User Registration in Wordpress

    - by Shadi Almosri
    Hiya, I am attempting to carry out a few functions when a user registers on a WordPress site. I have created a module for this which does the following:

    ```php
    add_action( 'user_register', 'tml_new_user_registered' );
    function tml_new_user_registered( $user_id ) {
        //wp_set_auth_cookie( $user_id, false, is_ssl() );
        //wp_redirect( admin_url( 'profile.php' ) );
        $user_info = get_userdata($user_id);
        $subscription_value = get_user_meta( $user_id, "subscribe_to_newsletter", TRUE);
        if($subscription_value == "Yes") {
            //include("Subscriber.Add.php");
        }
        echo "<pre>: ";
        print_r($user_info);
        print_r($subscription_value);
        echo "</pre>";
        exit;
    }
    ```

    But it seems that I am not able to access any user meta data, as at this stage none of it is stored yet. Any ideas how I can execute a function once WordPress has completed the whole registration process, including adding the meta data to the relevant tables? I attempted to use this:

    ```php
    add_filter('user_register ', 'tml_new_user_registered', 99);
    ```

    But with no luck, unfortunately. Thanks in advance!
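
    Two hedged observations, not from the original post: the second attempt registers the hook name with a trailing space ('user_register '), which by itself would keep the callback from ever firing; and when user_register runs, the submitted form values are normally still available in $_POST even if whatever writes them to usermeta hasn't run yet. A sketch (the subscribe_to_newsletter field name is taken from the question's get_user_meta call and is an assumption about the form):

    ```php
    add_action('user_register', 'tml_new_user_registered');
    function tml_new_user_registered($user_id) {
        // Read the choice straight from the request rather than usermeta,
        // which may not be populated yet at this point in registration.
        $subscribe = isset($_POST['subscribe_to_newsletter'])
            ? sanitize_text_field($_POST['subscribe_to_newsletter'])
            : '';
        if ($subscribe === 'Yes') {
            update_user_meta($user_id, 'subscribe_to_newsletter', 'Yes');
            // include("Subscriber.Add.php"); or queue the signup here
        }
    }
    ```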

  • Changing the indexing on an existing table in SQL Server 2000

    - by Raj
    Guys, here is the scenario: SQL Server 2000 (8.0.2055). The table currently has 478 million rows of data. The primary key column is an INT with IDENTITY, and there is a unique constraint imposed on two other columns with a non-clustered index. This is a vendor application and we are only responsible for maintaining the DB. Now the vendor has recommended doing the following "to improve performance":

    - Drop the PK and clustered index
    - Drop the non-clustered index on the two columns with the UNIQUE CONSTRAINT
    - Recreate the PK with a NON-CLUSTERED index
    - Create a CLUSTERED index on the two columns with the UNIQUE CONSTRAINT

    I am not convinced that this is the right thing to do, and I have a number of concerns. By dropping the PK and indexes, you will be creating a heap with 478 million rows of data, and then creating a CLUSTERED INDEX on two columns would be a really mammoth task. Would creating another table with the same structure and the new indexing scheme, copying the data over, dropping the old table and renaming the new one be a better approach? I am also not sure how the stored procs will react - will they continue using the cached execution plan, considering that they are not being explicitly recompiled? I am simply not able to understand what kind of "performance improvement" this change will provide; I think it will actually have the reverse effect. All thoughts welcome. Thanks in advance, Raj
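
    On the cached-plan concern, a hedged note: dropping or creating indexes on the table invalidates plans that reference it, and you can also force recompilation explicitly. A small sketch (the table name is a stand-in):

    ```sql
    -- SQL Server 2000: mark every stored procedure that references the table
    -- for recompilation on its next execution.
    EXEC sp_recompile 'dbo.BigVendorTable';

    -- After any reindexing, refresh statistics so the optimizer sees the new layout.
    UPDATE STATISTICS dbo.BigVendorTable WITH FULLSCAN;
    ```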

  • How can I provide users with the functionality of the DBUnit DatabaseOperation methods from a web interface?

    - by reckoner
    I am currently updating a Java-based web application which allows database developers to create stored procedure regression test suites for database testing. Currently, for the test setup, execution and clean-up stages, the user is given text boxes where they enter SQL code, which is executed via the isql command. I would like to extend the application to use DbUnit's DatabaseOperation methods, to provide more ways of setting up the state of the database than SQL statements alone. The main reason for using DbUnit rather than plain SQL statements is to be able to create and store XML and XLS DataSets on a server, where they can be associated with their test cases and used for data setup. My question is: how can I provide users with the functionality of the DbUnit DatabaseOperation methods from a web interface? I have considered:

    - Creating a simple programming language and a parser to read some simple syntax involving the DbUnit method names, each accepting as a parameter the file location of an XML or XLS DataSet. I was thinking of allowing the user to register the files they need with the web app, which would catalogue them and give each file an identifier that could be passed as a parameter to the methods in this simple language.
    - Creating an XML DTD which lets the user specify operations and parameters. If I went this route, how would I execute the methods and their parameters that I parse from the XML document?
    - Creating a table in the database which stores the method and an FK relation to a catalogued DataSet file; however, I don't think this would be a good solution, because data entry would be tedious.

    Thanks for your help.
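
    For the XML-driven option, a hedged sketch: DbUnit's standard operations are exposed as public constants on DatabaseOperation, so a small name-to-operation map turns a parsed attribute value into an executable call - no custom language needed. The element shape and the file lookup are hypothetical stand-ins for however files get catalogued.

    ```java
    import org.dbunit.database.IDatabaseConnection;
    import org.dbunit.dataset.IDataSet;
    import org.dbunit.dataset.xml.FlatXmlDataSet;
    import org.dbunit.operation.DatabaseOperation;
    import java.io.File;
    import java.util.HashMap;
    import java.util.Map;

    public class OperationRunner {
        private static final Map<String, DatabaseOperation> OPS = new HashMap<String, DatabaseOperation>();
        static {
            OPS.put("CLEAN_INSERT", DatabaseOperation.CLEAN_INSERT);
            OPS.put("INSERT",       DatabaseOperation.INSERT);
            OPS.put("UPDATE",       DatabaseOperation.UPDATE);
            OPS.put("REFRESH",      DatabaseOperation.REFRESH);
            OPS.put("DELETE",       DatabaseOperation.DELETE);
            OPS.put("DELETE_ALL",   DatabaseOperation.DELETE_ALL);
        }

        /** e.g. parsed from <setup operation="CLEAN_INSERT" dataset="file42"/> */
        public void run(String opName, File dataSetFile, IDatabaseConnection conn) throws Exception {
            DatabaseOperation op = OPS.get(opName);
            if (op == null) {
                throw new IllegalArgumentException("Unknown operation: " + opName);
            }
            IDataSet dataSet = new FlatXmlDataSet(dataSetFile);
            op.execute(conn, dataSet);
        }
    }
    ```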

  • JBoss Clustered Service that sends emails from txt file

    - by michael lucas
    I need a little push in the right direction. Here's my problem: I have to create an ultra-reliable service that sends email messages to clients whose addresses are stored in a txt file on an FTP server. A single txt file may contain an unlimited number of entries; most often it contains about 300,000. The service exposes an interface with just two simple methods:

    ```java
    TaskHandle sendEmails(String ftpFilePath);
    ProcessStatus checkProcessStatus(TaskHandle taskHandle);
    ```

    Method sendEmails() returns a TaskHandle by which we can ask for the ProcessStatus. For such a service to be reliable, clustering is necessary: processing a single txt file might take a long time, and restarting one node in the cluster should have no impact on the sending of emails. We use JBoss AS 4.2.0, which comes with a nice HASingletonController that ensures one instance of the service is running at any given time. But once a fail-over happens, the second service should continue from where the first one stopped. How can I share state between nodes in the cluster in such a way that leaves no possibility of sending some emails twice?
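
    One hedged design, not from the original post: keep the progress in a shared database rather than in the node, and make the unit of work "send a batch, then advance the cursor" in a single transaction; after fail-over the new singleton resumes from the stored cursor instead of restarting the file. Table and column names here are illustrative only.

    ```sql
    CREATE TABLE email_task (
        task_id    VARCHAR(64)  NOT NULL PRIMARY KEY,
        ftp_path   VARCHAR(255) NOT NULL,
        next_line  INT          NOT NULL DEFAULT 0,   -- first unprocessed entry
        status     VARCHAR(16)  NOT NULL              -- PENDING / RUNNING / DONE
    );

    -- For each batch: send the emails for lines [next_line, next_line + N),
    -- then advance the cursor in the same transaction. A node that dies
    -- mid-batch re-sends at most one uncommitted batch; true exactly-once
    -- needs a per-address sent log keyed by (task_id, line_no) instead.
    UPDATE email_task
    SET next_line = next_line + 100
    WHERE task_id = ? AND status = 'RUNNING';
    ```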

  • Django install on a shared host, .htaccess help

    - by redconservatory
    I am trying to install Django on a shared host using the following instructions: docs.google.com/View?docid=dhhpr5xs_463522g. My problem is with the following line in my root .htaccess:

    ```apacheconf
    RewriteRule ^(.*)$ /cgi-bin/wcgi.py/$1 [QSA,L]
    ```

    When I include this line I get a 500 error with almost all of my domains on this account. My cgi-bin directory is home/my-username/public_html/cgi-bin/. The wcgi.py file contains:

    ```python
    #!/usr/local/bin/python
    import os, sys

    sys.path.insert(0, "/home/username/django/")
    sys.path.insert(0, "/home/username/django/projects")
    sys.path.insert(0, "/home/username/django/projects/newprojects")

    import django.core.handlers.wsgi

    os.chdir("/home/username/django/projects/newproject")  # optional
    os.environ['DJANGO_SETTINGS_MODULE'] = "newproject.settings"

    def runcgi():
        environ = dict(os.environ.items())
        environ['wsgi.input'] = sys.stdin
        environ['wsgi.errors'] = sys.stderr
        environ['wsgi.version'] = (1, 0)
        environ['wsgi.multithread'] = False
        environ['wsgi.multiprocess'] = True
        environ['wsgi.run_once'] = True

        application = django.core.handlers.wsgi.WSGIHandler()

        if environ.get('HTTPS', 'off') in ('on', '1'):
            environ['wsgi.url_scheme'] = 'https'
        else:
            environ['wsgi.url_scheme'] = 'http'

        headers_set = []
        headers_sent = []

        def write(data):
            if not headers_set:
                raise AssertionError("write() before start_response()")
            elif not headers_sent:
                # Before the first output, send the stored headers
                status, response_headers = headers_sent[:] = headers_set
                sys.stdout.write('Status: %s\r\n' % status)
                for header in response_headers:
                    sys.stdout.write('%s: %s\r\n' % header)
                sys.stdout.write('\r\n')
            sys.stdout.write(data)
            sys.stdout.flush()

        def start_response(status, response_headers, exc_info=None):
            if exc_info:
                try:
                    if headers_sent:
                        # Re-raise original exception if headers sent
                        raise exc_info[0], exc_info[1], exc_info[2]
                finally:
                    exc_info = None  # avoid dangling circular ref
            elif headers_set:
                raise AssertionError("Headers already set!")
            headers_set[:] = [status, response_headers]
            return write

        result = application(environ, start_response)
        try:
            for data in result:
                if data:  # don't send headers until body appears
                    write(data)
            if not headers_sent:
                write('')  # send headers now if body was empty
        finally:
            if hasattr(result, 'close'):
                result.close()

    runcgi()
    ```

    Only I changed "username" to my username...
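
    A hedged guess at the cause, not from the original instructions: as written, the rule rewrites every request, including requests for /cgi-bin/wcgi.py itself (an internal rewrite loop that Apache answers with a 500), and a root .htaccess is inherited by any add-on domains whose document roots live under public_html. A sketch of a guarded version:

    ```apacheconf
    RewriteEngine On
    RewriteBase /
    # Leave the CGI script itself, and any real file or directory, alone;
    # otherwise the rule matches its own target and loops.
    RewriteCond %{REQUEST_URI} !^/cgi-bin/
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule ^(.*)$ /cgi-bin/wcgi.py/$1 [QSA,L]
    ```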

  • Using StructureMap to create classes by a name?

    - by Bevan
    How can I use StructureMap to resolve to an appropriate implementation of an interface, based on a name stored in an attribute? In my project I have many different kinds of widgets, each descending from IWidget, and each decorated with an attribute specifying the kind of associated element. To illustrate:

    ```csharp
    [Configuration("header")]
    public class HeaderWidget : IWidget { }

    [Configuration("linegraph")]
    public class LineGraphWidget : IWidget { }
    ```

    When processing my (XML) configuration file, I want to obtain an instance of the appropriate concrete class based on the name of the element I'm processing:

    ```csharp
    public IWidget CreateWidget(XElement definition)
    {
        var kind = definition.Name.LocalName;
        var widget = // What goes here?
        widget.Configure(definition);
        return widget;
    }
    ```

    Each definition should result in a different widget being created - I don't need or want the instances to be shared. In the past I've written plenty of code to do this kind of thing manually, including writing a custom "roll-your-own" IoC container for one project. However, one of my goals with this project is to become proficient with StructureMap instead of reinventing the wheel. I think I've already managed to set up automatic scanning of assemblies so that StructureMap knows about all my IWidget implementations:

    ```csharp
    public class WidgetRegistration : Registry
    {
        public WidgetRegistration()
        {
            Scan(scanner =>
            {
                scanner.AssembliesFromApplicationBaseDirectory();
                scanner.AddAllTypesOf<IWidget>();
            });
        }
    }
    ```

    However, this isn't registering the names of my widgets with StructureMap. What do I need to add to make this scenario work? (While I am trying to use StructureMap in this project, an answer showing me how to solve this problem with a different DI/IoC tool would still be valuable.)
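
    A hedged sketch of one answer: StructureMap 2.x scanners can derive an instance name per discovered type, after which the widget can be requested by name. This assumes your version exposes NameBy on the scanner filter, and that ConfigurationAttribute has a Name property matching the attribute shown above.

    ```csharp
    public class WidgetRegistration : Registry
    {
        public WidgetRegistration()
        {
            Scan(scanner =>
            {
                scanner.AssembliesFromApplicationBaseDirectory();
                // Name each IWidget by its [Configuration("...")] value.
                scanner.AddAllTypesOf<IWidget>().NameBy(type =>
                    ((ConfigurationAttribute)type
                        .GetCustomAttributes(typeof(ConfigurationAttribute), false)[0])
                        .Name);
            });
        }
    }

    public IWidget CreateWidget(XElement definition)
    {
        var kind = definition.Name.LocalName;
        // Instances are transient by default, so each call yields a fresh widget.
        var widget = ObjectFactory.GetNamedInstance<IWidget>(kind);
        widget.Configure(definition);
        return widget;
    }
    ```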

  • What arguments to use to explain why a SQL DB is far better than a flat file

    - by jamone
    The higher-ups in my company were told by good friends that flat files are the way to go, and that we should switch from MS SQL Server to them for everything we do. We have over 300 servers and hundreds of different databases. From just the few I'm involved with, we have 10 billion records in quite a few of them, with upwards of 100k new records a day, and who knows how many updates... Me and a couple of others need to come up with a response saying why we shouldn't do this. Most of our stuff is ASP.NET with some legacy ASP. We thought of making a simple console app that tests/times the same interactions against a flat file (stored on the network) and SQL over the network - large inserts, searches, updates, etc. - along with things like random network disconnects. This would show them how bad flat files can be, especially when you are dealing with millions of records. What things should I use in my response? What should I do with my demo code to illustrate this? My short list so far:

    - Security
    - Concurrent access
    - Performance with large amounts of data
    - Amount of time to do such a massive rewrite/switch
    - Lack of transactions
    - PITA to map relational data to flat files

    I fear that this will be a great post on The Daily WTF someday if I can't stop it now.

  • Where does a process' memory space start, and where does it end?

    - by nhaa123
    Hi, I'm trying to dump the memory from my application where the variables lie. Here's the function:

    ```cpp
    void MyDump(const void *m, unsigned int n)
    {
        const unsigned char *p = reinterpret_cast<const unsigned char *>(m);
        char buffer[16];
        unsigned int mod = 0;

        for (unsigned int i = 0; i < n; ++i, ++mod) {
            if (mod % 16 == 0) {
                mod = 0;
                std::cout << " | ";
                for (unsigned short j = 0; j < 16; ++j) {
                    switch (buffer[j]) {
                        case 0xa:
                        case 0xb:
                        case 0xd:
                        case 0xe:
                        case 0xf:
                            std::cout << " ";
                            break;
                        default:
                            std::cout << buffer[j];
                    }
                }
                std::cout << "\n0x" << std::setfill('0') << std::setw(8) << std::hex << (long)i << " | ";
            }
            buffer[i % 16] = p[i];
            std::cout << std::setw(2) << std::hex << static_cast<unsigned int>(p[i]) << " ";
            if (i % 4 == 0 && i != 1)
                std::cout << " ";
        }
    }
    ```

    Now, how can I know from which address my process' memory space starts, where all the variables are stored? And how do I know how long the area is? For instance:

    ```cpp
    MyDump(0x0000 /* <-- Starts from here? */, 0x1000 /* <-- This much? */);
    ```

    Best regards, nhaa123
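
    A process' address space isn't one contiguous run, so there is no single start and length; it has to be walked region by region. A hedged sketch (assuming Win32, which the question doesn't state): VirtualQuery reports each region's base, size, and protection, and the question's MyDump can be pointed at the committed, readable ones.

    ```cpp
    #include <windows.h>

    void DumpProcessMemory()
    {
        SYSTEM_INFO si;
        GetSystemInfo(&si);

        const unsigned char *p =
            static_cast<const unsigned char *>(si.lpMinimumApplicationAddress);
        const unsigned char *end =
            static_cast<const unsigned char *>(si.lpMaximumApplicationAddress);

        MEMORY_BASIC_INFORMATION mbi;
        while (p < end && VirtualQuery(p, &mbi, sizeof(mbi)) == sizeof(mbi)) {
            bool readable = mbi.State == MEM_COMMIT
                && !(mbi.Protect & (PAGE_NOACCESS | PAGE_GUARD));
            if (readable) {
                // One region at a time: base address and size come from the OS.
                MyDump(mbi.BaseAddress, static_cast<unsigned int>(mbi.RegionSize));
            }
            p = static_cast<const unsigned char *>(mbi.BaseAddress) + mbi.RegionSize;
        }
    }
    ```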

  • XML to hashmap - PHP

    - by csU
    Hi, given an XML structure like this:

    ```xml
    <gesmes:Envelope>
      <gesmes:subject>Reference rates</gesmes:subject>
      <gesmes:Sender>
        <gesmes:name>European Central Bank</gesmes:name>
      </gesmes:Sender>
      <Cube>
        <Cube time="2010-03-26">
          <Cube currency="USD" rate="1.3353"/>
          <Cube currency="JPY" rate="124.00"/>
          <Cube currency="BGN" rate="1.9558"/>
          <Cube currency="CZK" rate="25.418"/>
          ...
        </Cube>
      </Cube>
    </gesmes:Envelope>
    ```

    how can I go about getting the values stored into a hashmap or similar structure in PHP? I have been trying to do this for the last few hours but can't manage it. :D It is homework, so I guess no full solutions please (though the actual assignment is to use the web services; I am just stuck on parsing it :D). Maybe someone could show me a brief example for a made-up XML file that I could apply to mine? Thanks
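
    A brief example on a made-up file, as requested (SimpleXML ships with PHP 5, and attribute access is the part that usually trips people up); the element and attribute names here are invented, not the ECB feed's:

    ```php
    <?php
    $xml = simplexml_load_string(
        '<rates>
           <rate code="AAA" value="1.25"/>
           <rate code="BBB" value="42.0"/>
         </rates>'
    );

    $map = array();
    foreach ($xml->rate as $rate) {
        // Attributes come back as SimpleXMLElement objects; cast them.
        $map[(string)$rate['code']] = (float)$rate['value'];
    }

    print_r($map); // Array ( [AAA] => 1.25 [BBB] => 42 )
    ?>
    ```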

  • PHP Resize image down and crop using imagemagick

    - by mr12086
    I'm trying to downsize images uploaded by users at upload time. That is no problem, but I want to get specific with sizes now. I'm struggling to produce an algorithm that will take an image of any shape - square or rectangle, any width/height - and downsize it. The image needs to be downsized to a target box size (this changes, but is stored in a variable), such that both the resulting width and height are larger than the width and height of the target - but only just - while maintaining the aspect ratio. The target size will then be used to crop the image so there is no white space around the edges, etc. I'm looking just for the algorithm behind producing the correct resize for images of different dimensions - I can handle the cropping, and even the resizing in most cases, but it fails in a few, so I'm reaching out. I'm not really asking for PHP code, more pseudocode; either is fine, obviously. Thanks kindly.
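
    A sketch of the usual "cover" computation: take the larger of the two width/height ratios, so the scaled image is guaranteed to exceed the target box in both directions by the smallest possible margin. Variable names are illustrative.

    ```php
    <?php
    // Source and target dimensions (stand-in values).
    $srcW = 1024; $srcH = 683;
    $targetW = 300; $targetH = 400;

    // max() of the ratios: the side that needs more enlargement wins,
    // so BOTH sides end up >= the target while keeping the aspect ratio.
    $scale = max($targetW / $srcW, $targetH / $srcH);

    $newW = (int)ceil($srcW * $scale);
    $newH = (int)ceil($srcH * $scale);

    // Centered crop window for the later crop step.
    $cropX = (int)(($newW - $targetW) / 2);
    $cropY = (int)(($newH - $targetH) / 2);
    ?>
    ```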

  • How can I automatically generate SQL update scripts when data is updated?

    - by Brann
    I'd like to automatically generate an update script each time a value is modified in my database. In other words, if a stored procedure, or a query, or whatever, updates column a with value b in table c (which has a PK composed of columns (i, j, ..., k)), I want to generate this:

    ```sql
    update c set a=b where i=... and j=... and k=...
    ```

    and store it somewhere (for example as a raw string in a table). To complicate things, I want the script to be generated only if the update was made by a specific user. The good news is that I have a primary key defined on all of my tables. I can see how to do this using a trigger, but I would need to generate a specific trigger for each table and update them every time my schema changes. I guess there are built-in ways to do this, as SQL Server sometimes needs to store this kind of thing itself (while using transactional replication, for example), but I couldn't find anything so far... any ideas? I'm also interested in ways to generate the triggers automatically (probably using triggers - meta-triggers, huh? - since I would need to update the triggers automatically when the schema changes).
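
    A per-table sketch of the trigger route (SQL Server syntax; the audit table, user name, and the example table c(i, j, k, a) are all assumptions). A code generator could emit one of these per table from INFORMATION_SCHEMA.KEY_COLUMN_USAGE rather than writing them by hand.

    ```sql
    CREATE TRIGGER trg_c_capture_update ON c
    AFTER UPDATE
    AS
    BEGIN
        -- Only record changes made by the user of interest.
        IF SUSER_SNAME() <> 'specific_user' RETURN;

        INSERT INTO update_scripts (script)
        SELECT 'update c set a=''' + CAST(ins.a AS VARCHAR(100)) + ''''
             + ' where i=' + CAST(ins.i AS VARCHAR(20))
             + ' and j='   + CAST(ins.j AS VARCHAR(20))
             + ' and k='   + CAST(ins.k AS VARCHAR(20))
        FROM inserted ins;   -- one generated script per modified row
    END
    ```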

  • Object addSubview only works in viewDidLoad

    - by DecodingSand
    Hi, I'm new to iPhone development and need some help with adding subviews. I have a reusable object, stored in separate .h, .m and .xib files, that I would like to use in my main project's view controller. I have included the header, and the assignment of the object generates no errors. I am able to load the object into my main project, but I can only do things with it inside my viewDidLoad method. I intend to have a few of these objects on screen, and I am looking for a solution that is more robust than just hard-wiring up multiple copies of the shape object. As soon as I try to access the object outside of viewDidLoad, it produces a "variable unknown - first use in this function" error. Here is my viewDidLoad method:

    ```objc
    shapeViewController *shapeView = [[shapeViewController alloc]
        initWithNibName:@"shapeViewController" bundle:nil];
    [self.view addSubview:shapeView.view]; // This is the problem line

    // This code works - it changes the display on the shape object
    [shapeView updateDisplay:@"123456"];
    ```

    but the same code outside of viewDidLoad generates the error. So to sum up, everything works except when I try to access the shapeView object in the rest of the methods. Thanks in advance
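
    A hedged sketch of the usual fix: shapeView is a local variable, so it only exists inside viewDidLoad; promote it to a property (manual retain/release style, matching the era of the question) and every method can reach it:

    ```objc
    // In the .h file (plus @synthesize shapeView; in the .m):
    @property (nonatomic, retain) shapeViewController *shapeView;

    // In viewDidLoad:
    self.shapeView = [[[shapeViewController alloc]
        initWithNibName:@"shapeViewController" bundle:nil] autorelease];
    [self.view addSubview:self.shapeView.view];

    // Now any other method in the class can use it:
    [self.shapeView updateDisplay:@"123456"];
    ```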

  • Modifying existing object attributes in Core Data after the fact

    - by glorifiedHacker
    In a previous question, I was looking for an alternative to modifying how "no date" was stored in the date attribute of my NSManagedObject subclass. Previously, I had assigned nil to that attribute when a user didn't assign a date. In order to address sorting issues when using NSFetchedResultsController, I have decided to assign [NSDate distantFuture] to the date attribute when a user doesn't assign a date. However, given that this app is already in the wild, I need to update the Core Data store such that any existing nil date values are changed to [NSDate distantFuture]. What is the best way to make this change? The first thing that comes to mind is to fetch all of the objects in the store into an array and change any nil values that are found. This could be limited to a one-time event by checking a user defaults key that indicates whether the upgrade has been performed. Is there a way I can do this with Core Data versioning instead? Or another method that doesn't involve me writing throw-away code?
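
    A sketch of the one-time fixup (entity and attribute names are placeholders; versioned/lightweight migration changes the schema, not stored values, so a small data pass guarded by a defaults key is the common answer):

    ```objc
    if (![[NSUserDefaults standardUserDefaults] boolForKey:@"DidMigrateNilDates"]) {
        NSFetchRequest *request = [[NSFetchRequest alloc] init];
        [request setEntity:[NSEntityDescription entityForName:@"Event"
                                       inManagedObjectContext:context]];
        [request setPredicate:[NSPredicate predicateWithFormat:@"date == nil"]];

        NSError *error = nil;
        NSArray *results = [context executeFetchRequest:request error:&error];
        for (NSManagedObject *object in results) {
            [object setValue:[NSDate distantFuture] forKey:@"date"];
        }
        [context save:&error];
        [request release];

        [[NSUserDefaults standardUserDefaults] setBool:YES forKey:@"DidMigrateNilDates"];
    }
    ```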

  • Copying contents of a MySQL table to a table in another (local) database

    - by Philip Eve
    I have two MySQL databases for my site - one is for a production environment and the other, much smaller, one is for a testing/development environment. Both have identical schemas (except when I am testing something I intend to change, of course). A small number of the tables are for internationalisation purposes:

    - TransLanguage - non-English languages
    - TransModule - modules (bundles of phrases for translation, which can be loaded individually by PHP scripts)
    - TransPhrase - individual phrases, in English, for potential translation
    - TranslatedPhrase - translations of phrases that are submitted by volunteers
    - ChosenTranslatedPhrase - screened translations of phrases

    The volunteers who do the translation all work on the production site, as they are regular users. I wanted to create a stored procedure to synchronise the contents of four of these tables - TransLanguage, TransModule, TransPhrase and ChosenTranslatedPhrase - from the production database to the testing database, so as to keep the test environment up to date and prevent "unknown phrase" errors from getting in the way while testing. My first effort was to create the following procedure in the test database:

    ```sql
    CREATE PROCEDURE `SynchroniseTranslations` ()
    LANGUAGE SQL
    NOT DETERMINISTIC
    MODIFIES SQL DATA
    SQL SECURITY DEFINER
    BEGIN
        DELETE FROM `TransLanguage`;
        DELETE FROM `TransModule`;
        INSERT INTO `TransLanguage` SELECT * FROM `PRODUCTION_DB`.`TransLanguage`;
        INSERT INTO `TransModule` SELECT * FROM `PRODUCTION_DB`.`TransModule`;
        INSERT INTO `TransPhrase` SELECT * FROM `PRODUCTION_DB`.`TransPhrase`;
        INSERT INTO `ChosenTranslatedPhrase` SELECT * FROM `PRODUCTION_DB`.`ChosenTranslatedPhrase`;
    END
    ```

    When I try to run this, I get an error message: "SELECT command denied to user 'username'@'localhost' for table 'TransLanguage'". I also tried to create the procedure the other way around (that is, as part of the data dictionary for the production database rather than the test database). If I do it that way, I get an identical message, except that it tells me I'm denied the DELETE command rather than SELECT. I have made sure that my user has INSERT, DELETE, SELECT, UPDATE and CREATE ROUTINE privileges on both databases. However, it seems as though MySQL is reluctant to let this user exercise its privileges on both databases at the same time. How come, and is there a way around this?
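
    A hedged note: with SQL SECURITY DEFINER, the procedure runs with the privileges of its definer, not the caller, so the definer account needs explicit grants on both schemas - and to the grant system, 'username'@'%' and 'username'@'localhost' are different accounts, a common source of exactly this error. A sketch, assuming 'username'@'localhost' is the definer:

    ```sql
    -- Run as an administrative account:
    GRANT SELECT ON `PRODUCTION_DB`.* TO 'username'@'localhost';
    GRANT SELECT, INSERT, DELETE ON `TEST_DB`.* TO 'username'@'localhost';
    FLUSH PRIVILEGES;
    ```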

  • Help with Hashmaps in Java

    - by Crystal
    I'm not sure how I use get() to get my information. Looking at my book, they pass the key to get(). I thought that get() returns the object associated with that key, looking at the documentation. But I must be doing something wrong here... Any thoughts?

    ```java
    import java.util.*;

    public class OrganizeThis {
        /** Add a person to the organizer
            @param p A person object */
        public void add(Person p) {
            staff.put(p, p.getEmail());
            System.out.println("Person " + p + "added");
        }

        /**
         * Find the person stored in the organizer with the email address.
         * Note, each person will have a unique email address.
         *
         * @param email The person email address you are looking for.
         */
        public Person findByEmail(String email) {
            Person aPerson = staff.get(email);
            return aPerson;
        }

        private Map<Person, String> staff = new HashMap<Person, String>();

        public static void main(String[] args) {
            OrganizeThis testObj = new OrganizeThis();
            Person person1 = new Person("J", "W", "111-222-3333", "[email protected]");
            testObj.add(person1);
            System.out.println(testObj.findByEmail("[email protected]"));
        }
    }
    ```
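
    A hedged reading of the code: the map is keyed in the wrong direction for this lookup. put() stores person → email, but findByEmail() hands get() an email, so the key types don't even line up. One way to make the lookup work - key by email:

    ```java
    private Map<String, Person> staff = new HashMap<String, Person>();

    public void add(Person p) {
        staff.put(p.getEmail(), p);   // email is the key, Person the value
        System.out.println("Person " + p + " added");
    }

    public Person findByEmail(String email) {
        return staff.get(email);      // get() with the same key type
    }
    ```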

  • Persistence of texture parameters

    - by fen
    I use glBindTexture() to bind a previously created texture, and after the glBindTexture() call I use glTexParameteri() to set the MIN and MAG filters. No problem so far. Are the parameters I set using glTexParameteri() bound to the texture itself, or are they lost when I bind another texture? Do I have to set them again?

    ```c
    glGenTextures(1, &tex1);
    glGenTextures(1, &tex2);

    /* bind tex1 and set params */
    glBindTexture(GL_TEXTURE_RECTANGLE_ARB, tex1);
    glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, ...);
    glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    /* do something */

    /* bind tex2 and set params */
    glBindTexture(GL_TEXTURE_RECTANGLE_ARB, tex2);
    glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, ...);
    glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    /* do something */

    /* bind tex1 again */
    glBindTexture(GL_TEXTURE_RECTANGLE_ARB, tex1);
    /* do I have to set the parameters from above again, or are they stored with tex1? */
    ```

  • What arguments to use to explain why SQL Server is far better than a flat file

    - by jamone
    The higher-ups in my company were told by good friends that flat files are the way to go, and that we should switch from SQL Server to them for everything we do. We have over 300 servers and hundreds of different databases. From just the few I'm involved with, we have 10 billion records in quite a few of them, with upwards of 100k new records a day, and who knows how many updates... Me and a couple of others need to come up with a response saying why we shouldn't do this. Most of our stuff is ASP.NET with some legacy ASP. We thought of making a simple console app that tests/times the same interactions against a flat file (stored on the network) and SQL over the network - large inserts, searches, updates, etc. - along with things like random network disconnects. This would show them how bad flat files can be, especially when you are dealing with millions of records. What things should I use in my response? What should I do with my demo code to illustrate this? My short list so far:

    - Security
    - Concurrent access
    - Performance with large amounts of data
    - Amount of time to do such a massive rewrite/switch
    - Lack of transactions
    - PITA to map relational data to flat files
    - NTFS doesn't handle huge numbers of files in a directory well

    I fear that this will be a great post on The Daily WTF someday if I can't stop it now.

  • Reading in bytes produced by PHP script in Java to create a bitmap

    - by Kareem
    I'm having trouble getting the compressed JPEG image (stored as a blob in my database). Here is the snippet of code I use to output the image from my database:

    ```php
    if ($row = mysql_fetch_array($sql)) {
        $size = $row['image_size'];
        $image = $row['image'];
        if ($image == null) {
            echo "no image!";
        } else {
            header('Content-Type: content/data');
            header("Content-length: $size");
            echo $image;
        }
    }
    ```

    Here is the code that I use to read it in from the server:

    ```java
    URL sizeUrl = new URL(MYURL);
    URLConnection sizeConn = sizeUrl.openConnection();

    // Get the response
    BufferedReader sizeRd = new BufferedReader(new InputStreamReader(sizeConn.getInputStream()));
    String line = "";
    while (line.equals("")) {
        line = sizeRd.readLine();
    }
    int image_size = Integer.parseInt(line);
    if (image_size == 0) {
        return null;
    }

    URL imageUrl = new URL(MYIMAGEURL);
    URLConnection imageConn = imageUrl.openConnection();

    // Get the response
    InputStream imageRd = imageConn.getInputStream();
    byte[] bytedata = new byte[image_size];
    int read = imageRd.read(bytedata, 0, image_size);
    Log.e("IMAGEDOWNLOADER", "read " + read + " amount of bytes");
    Log.e("IMAGEDOWNLOADER", "byte data has length " + bytedata.length);
    Bitmap theImage = BitmapFactory.decodeByteArray(bytedata, 0, image_size);
    if (theImage == null) {
        Log.e("IMAGEDOWNLOADER", "the bitmap is null");
    }
    return theImage;
    ```

    My logging shows that everything has the right length, yet theImage is always null. I'm thinking it has to do with my content type. Or maybe the way I'm uploading?
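
    Two hedged suspects, not from the original post: content/data is not a real MIME type (image/jpeg would be correct, though BitmapFactory usually ignores headers), and - more likely the culprit - a single InputStream.read() over the network may return fewer bytes than requested, leaving the tail of the buffer zeroed and the JPEG undecodable. A sketch of a draining loop:

    ```java
    byte[] bytedata = new byte[image_size];
    int offset = 0;
    while (offset < image_size) {
        int n = imageRd.read(bytedata, offset, image_size - offset);
        if (n == -1) {
            break; // server closed the stream early
        }
        offset += n;
    }
    Log.e("IMAGEDOWNLOADER", "read " + offset + " of " + image_size + " bytes");

    // Decode only what actually arrived.
    Bitmap theImage = BitmapFactory.decodeByteArray(bytedata, 0, offset);
    ```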

  • JPA - Entity design problem

    - by Yatendra Goel
    I am developing a Java desktop application and using JPA for persistence. I have the following problem: I have two entities, Country and City. Country has the attribute countryName (PK); City has the attribute cityName. Now, as there can be two cities with the same name in two different countries, the primary key for the City table in the database is a composite key composed of cityName and countryName. My question is how to implement the primary key of City as an entity in Java:

    ```java
    @Entity
    public class Country implements Serializable {
        private String countryName;

        @Id
        public String getCountryName() {
            return this.countryName;
        }
    }

    @Entity
    public class City implements Serializable {
        private CityPK cityPK;
        private Country country;

        @EmbeddedId
        public CityPK getCityPK() {
            return this.cityPK;
        }
    }

    @Embeddable
    public class CityPK implements Serializable {
        public String cityName;
        public String countryName;
    }
    ```

    Now, as we know, the relationship from Country to City is one-to-many, and to show this relationship in the above code I added a country variable to the City class. But then we have duplicate data (countryName) stored in two places in the City class: once in the country object and once in the cityPK object. On the other hand, both are necessary:

    - countryName in the cityPK object is necessary because this is the way composite primary keys are implemented;
    - countryName in the country object is necessary because it is the standard way of showing relationships between objects.

    How do I get around this problem?
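
    One hedged way out (a standard JPA 1.0 idiom): map the relationship over the same column the embedded key already owns, and mark it read-only so the value is written from exactly one place - the key. (On JPA 2.0, derived identities with @MapsId would be the cleaner route.)

    ```java
    @Entity
    public class City implements Serializable {
        private CityPK cityPK;
        private Country country;

        @EmbeddedId
        public CityPK getCityPK() {
            return this.cityPK;
        }

        // Reuses the countryName column of the PK; insertable/updatable = false
        // means JPA reads the association but never writes a duplicate value.
        @ManyToOne
        @JoinColumn(name = "countryName", insertable = false, updatable = false)
        public Country getCountry() {
            return this.country;
        }
    }
    ```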

  • Project management: Implementing custom errors in VS compilation process

    - by David Lively
    Like many architects, I've developed coding standards through years of experience, and I expect my developers to adhere to them. This is especially a problem with the crowd that believes three or four years of experience makes you a senior-level developer. Approaching this as a training and code-review issue has generated limited success, so I was thinking it would be great to be able to add custom compile-time errors to the build process to enforce this and other guidelines more strictly. For instance, we use stored procedures for ALL database access, which provides procedure-level security, DB encapsulation (the table structure is hidden from the app), and other benefits. (Note: I am not interested in starting a debate about this.) Some developers prefer inline SQL or parametrized queries, and that's fine - on their own time and their own projects. I'd like a way to add a compilation check that finds, say, anything that looks like

    ```csharp
    string sql = "insert into some_table (col1,col2) values (@col1, @col2);";
    ```

    and generates an error or, in certain circumstances, a warning, with a message like "Inline SQL and parametrized queries are not permitted." Or, if they use the var keyword

    ```csharp
    var x = new MyClass();
    ```

    "Variable definitions must be explicitly typed." Do Visual Studio and MSBuild provide a way to add this functionality? I'm thinking I could use a regular expression to find unacceptable code and generate the correct error, but I'm not sure what, from a performance standpoint, is the best way to integrate this into the build process. We could add a pre- or post-build step to run a custom EXE, but how can I return line- and file-specific errors? Also, I'd like this to run after the compilation of each file, rather than post-link. Is a regex the best way to perform this type of pattern matching, or should I go crazy and run the code through a C# parser, which would allow node-level validation via the parse tree? I'd appreciate suggestions and tales of prior experience.
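
    On the "line- and file-specific errors" part, a hedged sketch: MSBuild and the VS Error List recognize the canonical "path(line): error CODE: message" format on standard output, so a post-build EXE only has to print in that shape (and return non-zero to fail the build). The regex and error code here are illustrative, not a vetted rule.

    ```csharp
    using System;
    using System.IO;
    using System.Text.RegularExpressions;

    class StandardsCheck
    {
        static int Main(string[] args)
        {
            // Crude inline-SQL detector - a starting point, not a parser.
            var inlineSql = new Regex("\"\\s*(insert\\s+into|delete\\s+from|update\\s+\\w+\\s+set)",
                                      RegexOptions.IgnoreCase);
            int failures = 0;
            foreach (string file in Directory.GetFiles(args[0], "*.cs", SearchOption.AllDirectories))
            {
                string[] lines = File.ReadAllLines(file);
                for (int i = 0; i < lines.Length; i++)
                {
                    if (inlineSql.IsMatch(lines[i]))
                    {
                        // Canonical MSBuild error format: file(line): error CODE: text
                        Console.WriteLine("{0}({1}): error SC0001: Inline SQL is not permitted.",
                                          file, i + 1);
                        failures++;
                    }
                }
            }
            return failures == 0 ? 0 : 1;   // non-zero fails the build step
        }
    }
    ```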

  • How do you store accented characters coming from a web service into a database?

    - by Thierry Lam
    I have the following word that I fetch via a web service: André. From Python, the value looks like "Andr\u00c3\u00a9". The input is then decoded using json.loads:

    ```python
    >>> import json
    >>> json.loads('{"name":"Andr\\u00c3\\u00a9"}')
    {u'name': u'Andr\xc3\xa9'}
    ```

    When I store the above in a utf8 MySQL database, the data is stored like the following using Django:

    ```python
    SomeObject.objects.create(name=u'Andr\xc3\xa9')
    ```

    Querying the name column from a mysql shell or displaying it in a web page gives: André. The web page displays in utf8:

    ```html
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
    ```

    My database is configured in utf8:

    ```
    mysql> SHOW VARIABLES LIKE 'collation%';
    +----------------------+-----------------+
    | Variable_name        | Value           |
    +----------------------+-----------------+
    | collation_connection | utf8_general_ci |
    | collation_database   | utf8_unicode_ci |
    | collation_server     | utf8_unicode_ci |
    +----------------------+-----------------+
    3 rows in set (0.00 sec)

    mysql> SHOW VARIABLES LIKE 'character_set%';
    +--------------------------+----------------------------+
    | Variable_name            | Value                      |
    +--------------------------+----------------------------+
    | character_set_client     | utf8                       |
    | character_set_connection | utf8                       |
    | character_set_database   | utf8                       |
    | character_set_filesystem | binary                     |
    | character_set_results    | utf8                       |
    | character_set_server     | utf8                       |
    | character_set_system     | utf8                       |
    | character_sets_dir       | /usr/share/mysql/charsets/ |
    +--------------------------+----------------------------+
    8 rows in set (0.00 sec)
    ```

    How can I retrieve the word André from a web service, store it properly in a database with no data loss, and display it on a web page in its original form?
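
    A hedged diagnosis, not from the original thread: u'Andr\xc3\xa9' is the word already double-encoded by the service - \xc3\xa9 are the two UTF-8 bytes of é read back as Latin-1 characters - so everything downstream faithfully stores and serves the mojibake. Undoing it before saving (Python 2 shell, matching the question):

    ```python
    >>> raw = u'Andr\xc3\xa9'
    >>> fixed = raw.encode('latin-1').decode('utf-8')
    >>> fixed
    u'Andr\xe9'
    >>> print fixed
    André
    >>> SomeObject.objects.create(name=fixed)  # now one code point, not two bytes
    ```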
