Search Results

Search found 18347 results on 734 pages for 'generate password'.


  • Covariance and Contravariance inference in C# 4.0

    - by devoured elysium
    When we define our interfaces in C# 4.0, we are allowed to mark each of the generic parameters as in or out. If we try to set a generic parameter as out and that'd lead to a problem, the compiler raises an error, not allowing us to do that. Question: If the compiler has ways of inferring which uses are valid for both covariance (out) and contravariance (in), why do we have to mark interfaces as such? Wouldn't it be enough to just let us define the interfaces as we always did, and when we tried to use them in our client code, raise an error if we tried to use them in an unsafe way? Example: interface MyInterface<out T> { T abracadabra(); } //works OK interface MyInterface2<in T> { T abracadabra(); } //compiler raises an error. //This makes me think that the compiler is capable //of understanding what situations might generate //run-time problems and then prohibits them. Also, isn't it what Java does in the same situation? From what I recall, you just do something like IMyInterface<? extends whatever> myInterface; //covariance IMyInterface<? super whatever> myInterface2; //contravariance Or am I mixing things? Thanks
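
    A small sketch of why the annotation sits on the declaration: with declaration-site variance the compiler verifies the interface once, where it is defined, and every call site then gets the conversions implicitly. The interfaces below are illustrative, not from the question.

        // Declaration-site variance: checked once at the definition site.
        interface IProducer<out T> { T Produce(); }           // T only in output positions: OK
        interface IConsumer<in T>  { void Consume(T item); }  // T only in input positions: OK

        class VarianceDemo
        {
            static void Main()
            {
                IProducer<string> strings = null;
                IProducer<object> objects = strings;      // covariance: string -> object
                IConsumer<object> anything = null;
                IConsumer<string> onlyStrings = anything; // contravariance: object -> string
            }
        }

    The Java wildcards quoted above are use-site variance: the same checks happen, but the annotation burden moves from the one declaration to every usage.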

    Read the article

  • ASP.NET NamingContainer naming convention

    - by EOLeary
    The Background Hello! I'm working on a project in which the client has required a lot of things to happen on a single page, and this has resulted in a rather large blob of HTML being rendered out to the client browser. The main issue is with input tags (where the runat="server" attribute is set); these tend to cause a drastic increase in markup size due to validation, updatepanel triggers, viewstate, and the control markup itself. I've done what I can to reduce the amount of triggers I'm using, I'm compressing the viewstate (to something like 8% of the original viewstate size), I've gotten rid of a lot of ASP.NET Validators and rolled my own, and I've been using ClientIDMode to reduce the length of the ID attributes of many ASP.NET elements. All of these combined significantly reduce the amount of HTML being sent to the client (for example going from 2 megabytes for a request down to 500-600 KB - these are HUGE pages, mind you). The Issue One area which I've been having trouble reducing is simply the auto-generated 'name' attribute of input elements. <input name="ctl00$ctl00$ctl00$_main$_main$_bodyMatterPhase$_phaseTree$ctl00$_taskTree$ctl00$_taskDetails$_detailList$ctrl0$_row$_descriptionText" type="text" value="Investigation Week 1" maxlength="100" id="_taskTree_0__taskDetails_0__detailList_0__row_0__descriptionText_0" style="width:170px;"> As you can see above, the name attribute is 139 out of 297 characters, that's almost 50% of the tag markup taken up by that HUGE name. Does anyone have any ideas on how to stick a hook in somewhere in ASP.NET where I can somehow translate these or generate them differently; say instead of ctl00$ctl00$ctl00$_main$_main$_bodyMatterPhase$_phaseTree$ctl00$_taskTree$ctl00$_taskDetails$_detailList$ctrl0$_row$_descriptionText, it could be a GUID like 0x0AEED4B6445A11E08F873606E0D72085, which is 105 characters shorter. Any help would be greatly appreciated!
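
    One mitigation worth sketching (not a supported renaming hook): UniqueID is just the chain of naming-container IDs joined with '$', so shrinking each container's ID shrinks every name built from it. The helper below is hypothetical, and the numbering would need to be stable across postbacks so posted values still reach the right controls.

        using System.Web.UI;

        // Sketch: recursively assign short IDs to naming containers so every
        // UniqueID (and thus every rendered name attribute) built from them
        // shrinks. Run before the controls render, with a deterministic order.
        static void ShortenIds(Control root, ref int counter)
        {
            foreach (Control child in root.Controls)
            {
                if (child is INamingContainer && !string.IsNullOrEmpty(child.ID))
                    child.ID = "c" + counter++;
                ShortenIds(child, ref counter);
            }
        }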

    Read the article

  • Connection reset when calling disconnect() using enterprisedt's ftp java framework

    - by Frederik Wordenskjold
    I'm having trouble disconnecting from an FTP server, using the enterprisedt Java FTP framework. I simply cannot call disconnect() on a FileTransferClient object without getting an error. I do not do anything besides connecting to the server and then disconnecting: // create client log.info("Creating FTP client"); ftp = new FileTransferClient(); // set remote host log.info("Setting remote host"); ftp.setRemoteHost(host); ftp.setUserName(username); ftp.setPassword(password); // connect to the server log.info("Connecting to server " + host); ftp.connect(); log.info("Connected and logged in to server " + host); // Shut down client log.info("Quitting client"); ftp.disconnect(); log.info("Example complete"); When running this, the log reads: INFO [test] 28 maj 2010 16:57:20.216 : Creating FTP client INFO [test] 28 maj 2010 16:57:20.263 : Setting remote host INFO [test] 28 maj 2010 16:57:20.263 : Connecting to server x INFO [test] 28 maj 2010 16:57:20.979 : Connected and logged in to server x INFO [test] 28 maj 2010 16:57:20.979 : Quitting client ERROR [FTPControlSocket] 28 maj 2010 16:57:21.026 : Read failed ('' read so far) And the stack trace: com.enterprisedt.net.ftp.ControlChannelIOException: Connection reset at com.enterprisedt.net.ftp.FTPControlSocket.readLine(FTPControlSocket.java:1029) at com.enterprisedt.net.ftp.FTPControlSocket.readReply(FTPControlSocket.java:1089) at com.enterprisedt.net.ftp.FTPControlSocket.sendCommand(FTPControlSocket.java:988) at com.enterprisedt.net.ftp.FTPClient.quit(FTPClient.java:4044) at com.enterprisedt.net.ftp.FileTransferClient.disconnect(FileTransferClient.java:1034) at test.main(test.java:46) It should be noted that I can connect and do stuff with the server without problems, like getting a list of files in the current working directory. But I can't, for some reason, disconnect! I've tried using both active and passive mode. The above example is, by the way, copy/pasted from their own example. I cannot find ANYTHING related to this by doing a Google search, so I was hoping you have any suggestions or experience with this issue.
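
    Some servers close the control connection as soon as they receive QUIT, before the 221 reply can be read, which matches the "Connection reset" in this trace. A defensive sketch (assuming the same enterprisedt API as in the question) that tolerates that behaviour:

        import com.enterprisedt.net.ftp.ControlChannelIOException;
        import com.enterprisedt.net.ftp.FileTransferClient;

        // Treat a connection reset during QUIT as a successful logout:
        // the server dropped the control channel first, but the session
        // is gone either way.
        void quitQuietly(FileTransferClient ftp) {
            try {
                ftp.disconnect();
            } catch (ControlChannelIOException e) {
                // server closed the socket before sending the reply; ignore
            } catch (Exception e) {
                e.printStackTrace(); // anything else is worth logging
            }
        }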

    Read the article

  • What's the best way to read a UDT from a database with Java?

    - by Lukas Eder
    I thought I knew everything about UDTs and JDBC until someone on SO pointed out some details of the Javadoc of java.sql.SQLInput and java.sql.SQLData to me. The essence of that hint was (from SQLInput): An input stream that contains a stream of values representing an instance of an SQL structured type or an SQL distinct type. This interface, used only for custom mapping, is used by the driver behind the scenes, and a programmer never directly invokes SQLInput methods. This is quite the opposite of what I am used to doing (an approach that is also used and stable in production systems, when used with the Oracle JDBC driver): Implement SQLData and provide this implementation in a custom mapping to ResultSet.getObject(int index, Map mapping) The JDBC driver will then call back on my custom type using the SQLData.readSQL(SQLInput stream, String typeName) method. I implement this method and read each field from the SQLInput stream. In the end, getObject() will return a correctly initialised instance of my SQLData implementation holding all data from the UDT. To me, this seems like the perfect way to implement such a custom mapping. Good reasons for going this way: I can use the standard API, instead of using vendor-specific classes such as oracle.sql.STRUCT, etc. I can generate source code from my UDTs, with appropriate getters/setters and other properties My questions: What do you think about my approach, implementing SQLData? Is it viable, even if the Javadoc states otherwise? What other ways of reading UDTs in Java do you know of? E.g. what does Spring do? What does Hibernate do? What does JPA do? What do you do? Addendum: UDT support and integration with stored procedures is one of the major features of jOOQ. jOOQ aims at hiding the more complex "JDBC facts" from client code, without hiding the underlying database architecture. If you have questions similar to the above, jOOQ might provide an answer to you.
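
    For reference, a minimal sketch of the custom-mapping pattern described above, using a hypothetical UDT named ADDRESS(street VARCHAR, city VARCHAR); everything here is standard JDBC API:

        import java.sql.*;
        import java.util.HashMap;
        import java.util.Map;

        // The driver calls readSQL() behind the scenes; fields must be read
        // in the order they are declared in the SQL type definition.
        public class Address implements SQLData {
            public String street;
            public String city;
            private String typeName;

            @Override public String getSQLTypeName() { return typeName; }

            @Override public void readSQL(SQLInput stream, String typeName) throws SQLException {
                this.typeName = typeName;
                street = stream.readString();
                city = stream.readString();
            }

            @Override public void writeSQL(SQLOutput stream) throws SQLException {
                stream.writeString(street);
                stream.writeString(city);
            }
        }

        // Registering the mapping so getObject() returns Address instances:
        // Map<String, Class<?>> map = new HashMap<>(conn.getTypeMap());
        // map.put("ADDRESS", Address.class);
        // conn.setTypeMap(map);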

    Read the article

  • What is the best way to automatically transpose a LilyPond source file into multiple keys?

    - by Michael Steele
    problem I'm using LilyPond to typeset sheet music for a church choir to perform. Depending on who is available on any given week, songs will be played in various keys. We have an amazing pianist who can play anything we throw at her and the guitarists will typically pencil in alternate chords, but I want to make things easier by having beautifully typeset sheet music available in any key we want. So say we're going to sing our ABCs. First I'll take whatever source transcriptions are available and enter them into a LilyPond script: melody = \relative c' { c c g g a a g2 f f e e d d c2 } I want the ability to transpose this automatically, so if I want the whole thing in 'G' I wrap the song in a \transpose call like so: melody = \transpose c g \relative c' { c c g g a a g2 f f e e d d c2 } What I really want is to substitute something for the 'g' and generate the output for melody multiple times. Simple LilyPond variables don't seem to work here, and so far I've been unsuccessful in defining a Scheme function to do this. What I've resorted to for the moment is taking the above file, call it twinkle.ly, and turning it into an M4 script called twinkle.ly.m4, the contents of which look like this: melody = \transpose c _key \relative c' { c c g g a a g2 f f e e d d c2 } I then compile the whole thing by executing the following line: > m4 -D _key=g twinkle.ly.m4 > twinkle_g.ly && lilypond twinkle_g.ly I've written a Makefile to do this for me, defining rules for every song I have and every key I'm interested in. question There's got to be a better way of going about this. Given that LilyPond supports embedded Scheme, I would prefer not to use a macro preprocessor on it. Has anybody else come up with a solution to this same problem?
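
    A lighter-weight variant of the same preprocessing idea, as a sketch (file names and the @KEY@ placeholder are assumptions, and LilyPond's default Dutch note names are used, so B-flat is "bes"):

        #!/bin/sh
        # Generate one transposed score per key without M4: the template
        # twinkle.ly.in contains "@KEY@" where the \transpose target goes.
        for key in c d e f g a bes; do
            sed "s/@KEY@/$key/" twinkle.ly.in > "twinkle_$key.ly"
            lilypond "twinkle_$key.ly"
        done

    This sidesteps M4 but is still a preprocessor; an embedded-Scheme music function that takes the target pitch as an argument would be the native answer if someone has the syntax worked out.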

    Read the article

  • How to test the expectation on the eventSpy

    - by Lorraine Bernard
    I am trying to test a Backbone model when saving. Here's my piece of code. As you can see from the comment, there is a problem with the toHaveBeenCalledOnce method. P.S.: I am using jasmine 1.2.0 and Sinon.JS 1.3.4 describe('when saving', function () { beforeEach(function () { this.server = sinon.fakeServer.create(); this.responseBody = '{"id":3,"title":"Hello","tags":["garden","weekend"]}'; this.server.respondWith( 'POST', Routing.generate(this.apiName), [ 200, {'Content-Type': 'application/json'}, this.responseBody ] ); this.eventSpy = sinon.spy(); }); afterEach(function() { this.server.restore(); }); it('should not save when title is empty', function() { this.model.bind('error', this.eventSpy); this.model.save({'title': ''}); expect(this.eventSpy).toHaveBeenCalledOnce(); // TypeError: Object [object Object] has no method 'toHaveBeenCalledOnce' expect(this.eventSpy).toHaveBeenCalledWith(this.model, 'cannot have an empty title'); }); }); console.log(expect(this.eventSpy));
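
    toHaveBeenCalledOnce is not a built-in Jasmine matcher; it comes from an adapter such as jasmine-sinon, which the TypeError suggests is not loaded. A sketch of two alternatives, assuming the same spy setup:

        // Option 1: assert through Sinon's own spy API, no extra matchers needed.
        it('should not save when title is empty', function () {
            this.model.bind('error', this.eventSpy);
            this.model.save({ title: '' });
            expect(this.eventSpy.calledOnce).toBeTruthy();
            expect(this.eventSpy.calledWith(this.model, 'cannot have an empty title')).toBeTruthy();
        });

        // Option 2: include the jasmine-sinon adapter script before the specs,
        // and the original expect(spy).toHaveBeenCalledOnce() syntax works as written.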

    Read the article

  • Card emulation via software NFC

    - by user85030
    After reading a lot of questions, I decided to post this one. I read that the stock version of Android does not support APIs for card emulation. Also, we cannot write custom applications to the secure element embedded in NFC controllers due to keys managed by Google/Samsung. I need to emulate a card (MIFARE or DESFire, etc.). The option I can see is doing it via software. I have an ACR122U reader and I've tested that NFC P2P mode works fine with the Nexus S that I have. 1) I came across a site that said that the Nexus S's NFC controller (PN532) can emulate a MIFARE 4K card. If this is true, can I write/read APDU commands to this emulated card? (Probably if I use a modded ROM like CyanogenMod) 2) Can I write an Android application that reads APDU commands sent from the reader and generates appropriate responses (if not fully, then up to some extent only)? To do so, I read that we need to patch the Nexus S with CyanogenMod. Has someone tried emulating a card via this method? I see that this is possible since we have products from access control companies offering mobile applications via which one can open doors, e.g. http://www.assaabloy.com/en/com/Products/seos-mobile-access/

    Read the article

  • PHP Encrypt/Decrypt with TripleDes, PKCS7, and ECB

    - by Brandon Green
    I've got my encryption function working properly; however, I cannot figure out how to get the decrypt function to give proper output. Here is my encrypt function: function Encrypt($data, $secret) { //Generate a key from a hash $key = md5(utf8_encode($secret), true); //Take first 8 bytes of $key and append them to the end of $key. $key .= substr($key, 0, 8); //Pad for PKCS7 $blockSize = mcrypt_get_block_size('tripledes', 'ecb'); $len = strlen($data); $pad = $blockSize - ($len % $blockSize); $data .= str_repeat(chr($pad), $pad); //Encrypt data $encData = mcrypt_encrypt('tripledes', $key, $data, 'ecb'); return base64_encode($encData); } Here is my decrypt function: function Decrypt($data, $secret) { $text = base64_decode($data); $data = mcrypt_decrypt('tripledes', $secret, $text, 'ecb'); $block = mcrypt_get_block_size('tripledes', 'ecb'); $pad = ord($data[($len = strlen($data)) - 1]); return substr($data, 0, strlen($data) - $pad); } Right now I am using a key of test and I'm trying to encrypt 1234567. I get the Base64 output I'm looking for from encryption, but when I go to decrypt I get a blank response. I'm not very well versed in encryption/decryption, so any help is much appreciated!!
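
    The likely cause (an observation from comparing the two functions, not a tested fix): Decrypt passes the raw $secret to mcrypt_decrypt, while Encrypt derives a 24-byte key from an MD5 hash. A sketch of a decrypt that mirrors the encrypt side:

        <?php
        // Sketch: derive the key exactly as Encrypt() does, then strip
        // the PKCS7 padding that Encrypt() added.
        function Decrypt($data, $secret) {
            $key = md5(utf8_encode($secret), true);
            $key .= substr($key, 0, 8);             // extend 16-byte MD5 to 24 bytes

            $text  = base64_decode($data);
            $plain = mcrypt_decrypt('tripledes', $key, $text, 'ecb');

            $pad = ord($plain[strlen($plain) - 1]); // PKCS7: last byte = pad length
            return substr($plain, 0, strlen($plain) - $pad);
        }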

    Read the article

  • Ruby on Rails ActiveRecord/Include/Associations can't get my query to work

    - by Cypher
    I just started learning Rails and I'm just trying to set up a query via associations. All the queries I try to write seem to be doing bizarre things and end up trying to query two tables parsed together with an '_' as one table. I have no clue why this would ever happen. My tables are as follows: schools: id name variables: id name type var_entries: id variable_id entry school_entries: id school_id var_entry_id My Rails model classes are $local = { :adapter => "mysql", :host => "localhost", :port => "3306".to_i, :database => "spy_2", :username =>"root", :password => "vertrigo" } class School < ActiveRecord::Base establish_connection $local has_many :school_entries has_many :var_entries, :through => school_entries end class Variable < ActiveRecord::Base establish_connection $local has_many :var_entries has_many :school_entries, :through => :var_entries end class VarEntry < ActiveRecord::Base establish_connection $local has_many_and_belongs_to :school_entries belongs_to :variables end class SchoolEntry < ActiveRecord::Base establish_connection $local belongs_to :school has_many :var_entries end I want to do this SQL query: SELECT school_id, variable_id,rank FROM school_entries, variables, var_entries, schools WHERE var_entries.variable_id = variables.id AND school_entries.var_entry_id = var_entries.id AND schools.id = school_entries.school_id AND variables.type = 'number'; and put it into Rails notation: here is one of my many failed attempts schools = VarEntry.all(:include => [:school_entries, :variables], :conditions => "variables.type = 'number'") the error: 'const_missing': uninitialized constant VarEntry::Variables (NameError) If I remove variables schools = VarEntry.all(:include => [:school_entries, :variables], :conditions => "type = 'number'") the error is: Mysql::Error: Unknown column 'type' in 'where clause': SELECT * FROM 'var_entries' WHERE (type=number) (ActiveRecord::StatementInvalid) Can anyone tell me where I'm going horribly wrong?
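
    Two details stand out (observations from the error text, not a tested fix): :include takes association names declared on the model, and belongs_to names must be singular; also, type is a column name Rails reserves for single-table inheritance. A sketch of associations and a query that avoid both:

        # Sketch: names passed to :include must match associations declared
        # on VarEntry; 'belongs_to :variables' makes Rails look for a
        # 'Variables' constant, hence the NameError.
        class VarEntry < ActiveRecord::Base
          establish_connection $local
          belongs_to :variable              # singular, matches variable_id
          has_many :school_entries
        end

        # Qualify the column: an unqualified 'type' resolves against
        # var_entries (and clashes with Rails' STI column anyway).
        schools = VarEntry.all(
          :include    => [:school_entries, :variable],
          :conditions => ["variables.type = ?", "number"])

    Renaming the variables.type column (for example to var_type) would sidestep the single-table-inheritance clash entirely.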

    Read the article

  • C#+BDE+DBF problem

    - by Drabuna
    I have a huge problem: I have lots of .dbf files (~50,000) and I need to import them into an Oracle database. I open the connection like this: OleDbConnection oConn = new OleDbConnection(); OleDbCommand oCmd = new OleDbCommand(); oConn.ConnectionString = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + directory + ";Extended Properties=dBASE IV;User ID=Admin;Password="; oCmd.Connection = oConn; oCmd.CommandText = @"SELECT * FROM " + tablename; try { oConn.Open(); resultTable.Load(oCmd.ExecuteReader()); } catch (Exception ex) { MessageBox.Show(ex.Message); } oConn.Close(); oCmd.Dispose(); oConn.Dispose(); I read them in a loop, and then insert into Oracle. Everything's fine. BUT: there are about 1000 files that I can't open. They raise the exception "not a table". So I Google, and install the Borland Database Engine. Now everything works fine... but no. Now, when I'm reading files, on the 1024th file an exception is raised: "System resource exceeded". But I have lots of free resources. When I remove BDE, everything's fine again, no "System resource exceeded" error, but I can't read all the files. Help please. PS: Tried using ODBC but nothing changes.
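
    The 1024 boundary smells like a per-process handle limit, and one candidate leak (an observation, not a confirmed diagnosis) is the OleDbDataReader returned by ExecuteReader, which the snippet never closes. A sketch that disposes everything deterministically per file:

        using System.Data;
        using System.Data.OleDb;

        // Sketch: wrap each resource in using so every one of the ~50,000
        // files releases its handles before the next file is opened.
        // 'directory' and 'tablename' are the same variables as above.
        var resultTable = new DataTable();
        string connStr = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + directory +
                         ";Extended Properties=dBASE IV;User ID=Admin;Password=";
        using (var oConn = new OleDbConnection(connStr))
        using (var oCmd = new OleDbCommand("SELECT * FROM " + tablename, oConn))
        {
            oConn.Open();
            using (OleDbDataReader reader = oCmd.ExecuteReader())
            {
                resultTable.Load(reader);
            }
        }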

    Read the article

  • What's the fastest way to bulk insert a lot of data in SQL Server (C# client)

    - by Andrew
    I am hitting some performance bottlenecks with my C# client inserting bulk data into a SQL Server 2005 database and I'm looking for ways in which to speed up the process. I am already using the SqlClient.SqlBulkCopy (which is based on TDS) to speed up the data transfer across the wire which helped a lot, but I'm still looking for more. I have a simple table that looks like this: CREATE TABLE [BulkData]( [ContainerId] [int] NOT NULL, [BinId] [smallint] NOT NULL, [Sequence] [smallint] NOT NULL, [ItemId] [int] NOT NULL, [Left] [smallint] NOT NULL, [Top] [smallint] NOT NULL, [Right] [smallint] NOT NULL, [Bottom] [smallint] NOT NULL, CONSTRAINT [PKBulkData] PRIMARY KEY CLUSTERED ( [ContainerId] ASC, [BinId] ASC, [Sequence] ASC )) I'm inserting data in chunks that average about 300 rows where ContainerId and BinId are constant in each chunk and the Sequence value is 0-n and the values are pre-sorted based on the primary key. The %Disk time performance counter spends a lot of time at 100% so it is clear that disk IO is the main issue but the speeds I'm getting are several orders of magnitude below a raw file copy. Does it help any if I: Drop the Primary key while I am doing the inserting and recreate it later Do inserts into a temporary table with the same schema and periodically transfer them into the main table to keep the size of the table where insertions are happening small Anything else? -- Based on the responses I have gotten, let me clarify a little bit: Portman: I'm using a clustered index because when the data is all imported I will need to access data sequentially in that order. I don't particularly need the index to be there while importing the data. Is there any advantage to having a nonclustered PK index while doing the inserts as opposed to dropping the constraint entirely for import? Chopeen: The data is being generated remotely on many other machines (my SQL server can only handle about 10 currently, but I would love to be able to add more). It's not practical to run the entire process on the local machine because it would then have to process 50 times as much input data to generate the output. Jason: I am not doing any concurrent queries against the table during the import process, I will try dropping the primary key and see if that helps. ~ Andrew
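
    A sketch of the SqlBulkCopy settings that usually matter most here; TableLock and the batch size are assumptions to benchmark, not guaranteed wins:

        using System.Data;
        using System.Data.SqlClient;

        // Sketch: stream pre-sorted chunks with a table-level lock so the
        // server can use minimally logged inserts where the recovery model
        // allows it.
        static void BulkLoad(string connectionString, DataTable chunk)
        {
            using (var bulk = new SqlBulkCopy(connectionString,
                                              SqlBulkCopyOptions.TableLock))
            {
                bulk.DestinationTableName = "BulkData";
                bulk.BatchSize = 5000;        // tune against the disk subsystem
                bulk.BulkCopyTimeout = 0;     // no timeout for large loads
                bulk.WriteToServer(chunk);
            }
        }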

    Read the article

  • Can't bind string containing @ char with mysqli_stmt_bind_param

    - by Tirithen
    I have a problem with my database class. I have a method that takes one prepared statement and any number of parameters, binds them to the statement, executes the statement and formats the result into a multidimensional array. Everything works fine until I try to include an email address in one of the parameters. The email contains an @ character and that one seems to break everything. When I supply the parameters: $types = "ss" and $parameters = array("[email protected]", "testtest") I get the error: Warning: Parameter 3 to mysqli_stmt_bind_param() expected to be a reference, value given in ...db/Database.class.php on line 63 Here is the method: private function bindAndExecutePreparedStatement(&$statement, $parameters, $types) { if(!empty($parameters)) { call_user_func_array('mysqli_stmt_bind_param', array_merge(array($statement, $types), &$parameters)); /*foreach($parameters as $key => $value) { mysqli_stmt_bind_param($statement, 's', $value); }*/ } $result = array(); $statement->execute() or debugLog("Database error: ".$statement->error); $rows = array(); if($this->stmt_bind_assoc($statement, $row)) { while($statement->fetch()) { $copied_row = array(); foreach($row as $key => $value) { if($value !== null && mb_substr($value, 0, 1, "UTF-8") == NESTED) { // If value has a nested result inside $value = mb_substr($value, 1, mb_strlen($value, "UTF-8") - 1, "UTF-8"); $value = $this->parse_nested_result_value($value); } $copied_row[$key] = $value; } $rows[] = $copied_row; } } // Generate result $result['rows'] = $rows; $result['insert_id'] = $statement->insert_id; $result['affected_rows'] = $statement->affected_rows; $result['error'] = $statement->error; return $result; } I have gotten one suggestion that: the array_merge is casting parameter to string in the merge change it to &$parameters so it remains a reference So I tried that (3rd line of the method), but it did not make any difference. What should I do? Is there a better way to do this without call_user_func_array?
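
    The values, not the @ sign, are the usual culprit here: mysqli_stmt_bind_param requires its parameters by reference, and array_merge copies them by value. A sketch of a reference-safe binding helper:

        <?php
        // Sketch: build an argument array whose parameter slots are
        // references, as mysqli_stmt_bind_param() requires.
        function bindParams($statement, array $parameters, $types) {
            $args = array($statement, $types);
            foreach ($parameters as $key => $value) {
                $args[] = &$parameters[$key];   // bind by reference, not by value
            }
            call_user_func_array('mysqli_stmt_bind_param', $args);
        }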

    Read the article

  • How to efficiently serve massive sitemaps in django

    - by mlissner
    I have a site with about 150K pages in its sitemap. I'm using the sitemap index generator to make the sitemaps, but really, I need a way of caching it, because building the 150 sitemaps of 1,000 links each is brutal on my server.[1] I COULD cache each of these sitemap pages with memcached, which is what I'm using elsewhere on the site...however, this is so many sitemaps that it would completely fill memcached....so that doesn't work. What I think I need is a way to use the database as the cache for these, and to only generate them when there are changes to them (which as a result of the sitemap index means only changing the latest couple of sitemap pages, since the rest are always the same.)[2] But, as near as I can tell, I can only use one cache backend with Django. How can I have these sitemaps ready for when Google comes-a-crawlin' without killing my database or memcached? Any thoughts? [1] I've limited it to 1,000 links per sitemap page because generating the max, 50,000 links, just wasn't happening. [2] for example, if I have sitemap.xml?page=1, page=2...sitemap.xml?page=50, I only really need to change sitemap.xml?page=50 until it is full with 1,000 links, then I can cache it pretty much forever, and focus on page 51 until it's full, cache it forever, etc.
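
    One hedged sketch of the database-as-cache idea (view name, cache timeout and key scheme are assumptions; it presumes a database cache backend is configured as the default):

        # Sketch: cache each sitemap page's XML under its own key in a
        # database-backed cache, leaving memcached free for hot data.
        from django.contrib.sitemaps import views as sitemap_views
        from django.core.cache import cache
        from django.http import HttpResponse

        def cached_sitemap(request, sitemaps, section=None):
            page = request.GET.get('p', '1')
            key = 'sitemap:%s:%s' % (section or 'index', page)
            xml = cache.get(key)
            if xml is None:
                response = sitemap_views.sitemap(request, sitemaps, section=section)
                xml = response.content           # assumes an already-rendered HttpResponse
                cache.set(key, xml, 60 * 60 * 24)
            return HttpResponse(xml, content_type='application/xml')

    Invalidation then matches footnote [2]: when links are added, delete only the key for the last (partially full) page and the index, and leave every full page cached indefinitely.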

    Read the article

  • Problem generating PGP keys?

    - by pavankumar
    I'm using RSACryptoServiceProvider. I've generated a public key and a private key. The keys generated by it are in the following format: Public key: <RSAKeyValue> <Modulus>m9bAoh2...eGNKYs=</Modulus> <Exponent>AQAB</Exponent> </RSAKeyValue> Private key: <RSAKeyValue> <Modulus>m9bAo...ZAIeGNKYs=</Modulus> <Exponent>AQAB</Exponent> <P>xGj/UcXs...R1lmeVQ==</P> <Q>yx6e18aP...GXzXIXw==</Q> <DP>NyxvnJ...1xAsEyQ==</DP> <DQ>La17Jycd...FhApEqwznQ==</DQ> <InverseQ>JrG7WCT...Hp3OWA==</InverseQ> <D>RdWsOFn....KL699Vh6HK0=</D> </RSAKeyValue> But using PGP Desktop I've generated keys like this - Public key: mQCNBEoOlp8BBACi/3EvBZ83ZduvG6YHu5F0P7Z3xOnpIsaPvTk0q+dnjwDUa5sU lEFbUZgDXSz7ZRhyiNqUOy+IG3ghPxpiKGBtldVpi33qaFCCEBiqsxRRpVCLgTUK HP2kH5ysrlFWkxTo =a4t9 Private key: lQHgBEoOlp8BBACi/3EvBZ83ZduvG6YHu5F0P7Z3xOnpIsaPvTk0q+dnjwDUa5sU lEFbUZgDXSz7ZRhyiNqUOy+IG3ghPxpiKGBtldVpi33qaFCCEBiqsxRRpVCLgTUK waBnEitQti3XgUUEZnz/rnXcQVM0QFBe6H5x8fMDUw== =CVPD So when I'm passing the keys generated by PGP Desktop it is able to do encryption and decryption perfectly, but when I'm passing the keys generated by RSACryptoServiceProvider I'm not able to encrypt and decrypt. Can anyone please tell me how to generate keys in the pattern generated by PGP?
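
    The two formats are not interchangeable: RSACryptoServiceProvider emits bare RSA parameters as XML, while PGP Desktop emits OpenPGP (RFC 4880) key packets, which wrap the same RSA material in packets carrying algorithm IDs, a user ID and self-signatures. A sketch of bridging them with the Bouncy Castle C# library; the exact API names below are from that library and should be treated as an assumption to verify against its current release:

        using System;
        using System.IO;
        using System.Security.Cryptography;
        using Org.BouncyCastle.Bcpg;
        using Org.BouncyCastle.Bcpg.OpenPgp;
        using Org.BouncyCastle.Security;

        static class PgpExport
        {
            static void Main()
            {
                // Start from the same .NET RSA key pair, then wrap it in
                // OpenPGP packets so PGP Desktop can read it.
                var rsa = new RSACryptoServiceProvider(2048);
                var pair = DotNetUtilities.GetRsaKeyPair(rsa.ExportParameters(true));

                var secretKey = new PgpSecretKey(
                    PgpSignature.DefaultCertification,
                    PublicKeyAlgorithmTag.RsaGeneral,
                    pair.Public, pair.Private,
                    DateTime.UtcNow,
                    "Example User <user@example.com>",   // illustrative user ID
                    SymmetricKeyAlgorithmTag.Cast5,
                    "passphrase".ToCharArray(), null, null,
                    new SecureRandom());

                using (var armor = new ArmoredOutputStream(File.Create("private.asc")))
                    secretKey.Encode(armor);             // lQHg...-style block
                using (var armor = new ArmoredOutputStream(File.Create("public.asc")))
                    secretKey.PublicKey.Encode(armor);   // mQCN...-style block
            }
        }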

    Read the article

  • What webserver / mod / technique should I use to serve everything from memory?

    - by reinier
    I have lots of lookup tables from which I'll generate my web response. I think IIS with ASP.NET enables me to keep static lookup tables in memory which I can use to serve up my responses very fast. Are there, however, also non-.NET solutions which can do the same? I've looked at FastCGI, but I think this starts X processes, each of which can handle Y requests. But the processes are by definition shielded from each other. I could configure FastCGI to use just 1 process, but does this have scalability implications? Anything using PHP or any other interpreted language won't fly because it is also CGI- or FastCGI-bound, right? I understand memcache could be an option, though this would require another (local) socket connection, which I'd rather avoid since everything in memory would be much faster. The solution can work under Windows or Unix... it doesn't matter too much. The only thing which matters is that there will be a lot of requests (100/sec now and growing to 500/sec in a year), and I want to reduce the amount of webservers needed to process it. The current solution is done using PHP and memcache (and the occasional hit to the SQL server backend). Although it is fast (for PHP anyway), Apache has real problems when 50/sec is passed. I've put a bounty on this question since I've not seen enough responses to make a wise choice. At the moment I'm considering either ASP.NET or FastCGI with C(++).

    Read the article

  • Fatal error: Uncaught exception 'Zend_Gdata_App_HttpException' with message 'Expected response code 200, got 401'

    - by Peak Reconstruction Wavelength
    I am using a PHP script that has been working for years, but suddenly it aborts with Fatal error: Uncaught exception 'Zend_Gdata_App_HttpException' with message 'Expected response code 200, got 401' NoLinkedYouTubeAccount Error 401 It starts like this <?php function anmelden_yt($name,$passwort) { $yt_source = 'known'; $yt_api_key = 'key'; $yt = null; $authenticationURL= 'https://www.google.com/accounts/ClientLogin'; $httpClient = Zend_Gdata_ClientLogin::getHttpClient( $username = $name, $password = $passwort, $service = 'youtube', $client = null, $source = $yt_source, // a short string identifying your application $loginToken = null, $loginCaptcha = null, $authenticationURL); abschnitt("Login"); return new Zend_Gdata_YouTube($httpClient, $yt_source, NULL, $yt_api_key); } require_once("Zend/Gdata/ClientLogin.php"); require_once("Zend/Gdata/HttpClient.php"); require_once("Zend/Gdata/YouTube.php"); require_once("Zend/Gdata/App/MediaFileSource.php"); require_once("Zend/Gdata/App/HttpException.php"); require_once('Zend/Uri/Http.php'); require_once 'Zend/Loader.php'; Zend_Loader::loadClass('Zend_Gdata_YouTube'); Zend_Loader::loadClass('Zend_Gdata_AuthSub'); Zend_Loader::loadClass('Zend_Gdata_ClientLogin'); $yt = anmelden_yt($name,$pass); $videoFeed = $yt->getUserUploads('Google'); sleep(0.5); @ob_flush(); @flush(); ?> What could be the reason for this?

    Read the article

  • Multiple database with NHibernate

    - by Flint
    Hi, I have two databases: one from Oracle 10g, another from MySQL. I have configured my web application with NHibernate for Oracle, and now I need to use the MySQL database as well. So how can I configure hibernate.cfg.xml so that I can use both databases in the same application? My current hibernate.cfg.xml is: <?xml version="1.0" encoding="utf-8" ?> <hibernate-configuration xmlns="urn:nhibernate-configuration-2.2"> <session-factory> <property name="connection.provider">NHibernate.Connection.DriverConnectionProvider</property> <property name="connection.driver_class">NHibernate.Driver.OracleClientDriver</property> <property name="connection.connection_string">Data Source=xe;Persist Security Info=True;User ID=hr;Password=hr;Unicode=True</property> <property name="show_sql">false</property> <property name="dialect">NHibernate.Dialect.Oracle9Dialect</property> <!-- mapping files --> <mapping assembly="DataTransfer" /> </session-factory> </hibernate-configuration>
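
    The usual pattern (a sketch, not the only way) is one configuration file and one ISessionFactory per database, with repositories opening sessions against whichever factory they need; the config file names below are assumptions:

        using NHibernate;
        using NHibernate.Cfg;

        // Sketch: build a separate session factory from each config file.
        public static class SessionFactories
        {
            public static readonly ISessionFactory Oracle =
                new Configuration().Configure("hibernate.oracle.cfg.xml").BuildSessionFactory();

            public static readonly ISessionFactory MySql =
                new Configuration().Configure("hibernate.mysql.cfg.xml").BuildSessionFactory();
        }

        // usage:
        // using (var session = SessionFactories.MySql.OpenSession()) { ... }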

    Read the article

  • PHP Switch and Login

    - by Steve Rivera
    I'm fairly new to PHP and I am messing around with a login/registration system. I set up my sample website using a PHP-SWITCH script I found a while back: <?php switch($_GET['id']) { default: include('home.php'); /* LOGIN PAGES */ break; case "register_form": include ('includes/user_system/register_form.php'); } ?> On the registration page the form links to my "register.php" which checks the validity of the form and checks for any blank fields and so on. "register.php" is supposed to refresh the page and add a reason for what the user did wrong when submitting the form. On my "register_form.php" page, which holds the actual form, this field is hidden until the user makes a mistake. <?php if (isset($reg_error)) { ?> , please try again. My "register.php" checks the form for all the errors. Here's the bit of code that will refresh the page with the reason for the error: // Check if any of the fields are missing if (empty($_POST['username']) || empty($_POST['password']) || empty($_POST['confirmpass'])) { // Reshow the form with an error $reg_error = 'One or more fields missing'; include 'register_form.php'; Now after I submit the form without any fields filled out I get the error code, but it refreshes to the actual "register_form.php". The problem with this is that because of my PHP-SWITCH script (it helps me manage the site a lot more easily) I don't have any formatting on that page. The actual URL to my "register_form.php" would be: "index.php?id=register_form.php". Now I have tried several different things, such as changing it to: include 'index.php?id=register_form.php' And also changing it to: header('location:index.php?id=register_form.php') Unfortunately all this does is refresh the page without the reason for the error. I know this could be easily solved by just adding a JavaScript validator, but I'd like to know if it is possible to refresh the page with the error using either "include" or "header()" while having a PHP-SWITCH script on the website.
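
    One way to keep both the error message and the site formatting (a sketch assuming the same file layout as above): stash the message in the session, redirect back through the front controller, and let the form page read and clear it.

        <?php
        // register.php - sketch: save the error, then bounce back through index.php
        session_start();
        if (empty($_POST['username']) || empty($_POST['password']) || empty($_POST['confirmpass'])) {
            $_SESSION['reg_error'] = 'One or more fields missing';
            header('Location: index.php?id=register_form');
            exit;
        }

        // register_form.php - read and clear the flash message
        session_start();
        if (isset($_SESSION['reg_error'])) {
            echo htmlspecialchars($_SESSION['reg_error']) . ', please try again.';
            unset($_SESSION['reg_error']);
        }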

    Read the article

  • Way to automate setting of MergeOptions

    - by Nix
    I am looking for an automated way to iterate over all ObjectQueries and set the merge option to no tracking (read-only context). Once I find out how to do it I will be able to generate a default read-only context using a T4 template. Is this possible? For example, let's say I have these tables in my object context SampleContext TableA TableB TableC I would have to go through and do the below. SampleContext sc = new SampleContext(); sc.TableA.MergeOption = MergeOption.NoTracking; sc.TableB.MergeOption = MergeOption.NoTracking; sc.TableC.MergeOption = MergeOption.NoTracking; I am trying to find a way to generalize this using the object context. I want to get it down to something like foreach(var objectQuery : sc){ objectQuery.MergeOption = MergeOption.NoTracking; } Preferably I would like to do it using the base class (ObjectContext): ObjectContext baseClass = sc as ObjectContext var objectQueries = sc.MetadataWorkspace.GetItem("Magic Object Query Option") But I am not sure I can even get access to the queries. Any help would be appreciated.
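
    A reflection-based sketch (it assumes the generated context exposes each entity set as a property whose type derives from ObjectQuery, as EF's default code generation does):

        using System.Data.Objects;
        using System.Reflection;

        // Sketch: find every property on the generated context whose type
        // derives from ObjectQuery and flip its MergeOption through the
        // non-generic base class.
        static void MakeReadOnly(ObjectContext context)
        {
            foreach (PropertyInfo prop in context.GetType().GetProperties())
            {
                if (typeof(ObjectQuery).IsAssignableFrom(prop.PropertyType))
                {
                    var query = (ObjectQuery)prop.GetValue(context, null);
                    query.MergeOption = MergeOption.NoTracking;
                }
            }
        }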

    Read the article

  • how to handle multiple profiles per user?

    - by Scott Willman
    I'm doing something that doesn't feel very efficient. From my code below, you can probably see that I'm trying to allow for multiple profiles of different types attached to my custom user object (Person). One of those profiles will be considered a default and should have an accessor from the Person class. Can this be done better? from django.db import models from django.contrib.auth.models import User, UserManager class Person(User): public_name = models.CharField(max_length=24, default="Mr. T") objects = UserManager() def save(self): self.set_password(self.password) super(Person, self).save() def _getDefaultProfile(self): def_teacher = self.teacher_set.filter(is_default=True) if def_teacher: return def_teacher[0] def_student = self.student_set.filter(is_default=True) if def_student: return def_student[0] def_parent = self.parent_set.filter(is_default=True) if def_parent: return def_parent[0] return False profile = property(_getDefaultProfile) def _getProfiles(self): # Inefficient use of QuerySet here. Tolerated because the QuerySets should be very small. profiles = [] if self.teacher_set.count(): profiles.append(list(self.teacher_set.all())) if self.student_set.count(): profiles.append(list(self.student_set.all())) if self.parent_set.count(): profiles.append(list(self.parent_set.all())) return profiles profiles = property(_getProfiles) class BaseProfile(models.Model): person = models.ForeignKey(Person) is_default = models.BooleanField(default=False) class Meta: abstract = True class Teacher(BaseProfile): user_type = models.CharField(max_length=7, default="teacher") class Student(BaseProfile): user_type = models.CharField(max_length=7, default="student") class Parent(BaseProfile): user_type = models.CharField(max_length=7, default="parent")
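
    One common restructuring (a sketch, not the only design): since the three subclasses differ only in a string, a single concrete Profile model with a role field makes the default profile one query instead of three.

        from django.contrib.auth.models import User
        from django.db import models

        class Person(User):
            public_name = models.CharField(max_length=24, default="Mr. T")

            @property
            def default_profile(self):
                try:
                    return self.profile_set.get(is_default=True)
                except Profile.DoesNotExist:  # resolved at call time
                    return None

        # One table replaces Teacher/Student/Parent; role is a discriminator.
        class Profile(models.Model):
            ROLES = (('teacher', 'Teacher'), ('student', 'Student'), ('parent', 'Parent'))
            person = models.ForeignKey(Person)
            role = models.CharField(max_length=7, choices=ROLES)
            is_default = models.BooleanField(default=False)

            class Meta:
                unique_together = (('person', 'role'),)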

    Read the article

  • SQL Express under IIS 7.5

    - by fampinheiro
    I'm developing a web service that accesses a SQL Express database. It works very well in the Visual Studio host, but when I deploy it to IIS 7.5 I get this exception. Please help me. Stack Trace: System.Data.EntityException: The underlying provider failed on Open. ---> System.Data.SqlClient.SqlException: Failed to generate a user instance of SQL Server due to failure in retrieving the user's local application data path. Please make sure the user has a local user profile on the computer. The connection will be closed. at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) at System.Data.SqlClient.SqlInternalConnectionTds.CompleteLogin(Boolean enlistOK) at System.Data.SqlClient.SqlInternalConnectionTds.AttemptOneLogin(ServerInfo serverInfo, String newPassword, Boolean ignoreSniOpenTimeout, Int64 timerExpire, SqlConnection owningObject) at System.Data.SqlClient.SqlInternalConnectionTds.LoginNoFailover(String host, String newPassword, Boolean redirectedUserInstance, SqlConnection owningObject, SqlConnectionString connectionOptions, Int64 timerStart) at System.Data.SqlClient.SqlInternalConnectionTds.OpenLoginEnlist(SqlConnection owningObject, SqlConnectionString connectionOptions, String newPassword, Boolean redirectedUserInstance) at System.Data.SqlClient.SqlInternalConnectionTds..ctor(DbConnectionPoolIdentity identity, SqlConnectionString connectionOptions, Object providerInfo, String newPassword, SqlConnection owningObject, Boolean redirectedUserInstance) at System.Data.SqlClient.SqlConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningConnection) at System.Data.ProviderBase.DbConnectionFactory.CreatePooledConnection(DbConnection owningConnection, DbConnectionPool pool, DbConnectionOptions options) at System.Data.ProviderBase.DbConnectionPool.CreateObject(DbConnection owningObject) at System.Data.ProviderBase.DbConnectionPool.UserCreateRequest(DbConnection owningObject) at System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject) at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection) at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory) at System.Data.SqlClient.SqlConnection.Open() at System.Data.EntityClient.EntityConnection.OpenStoreConnectionIf(Boolean openCondition, DbConnection storeConnectionToOpen, DbConnection originalConnection, String exceptionCode, String attemptedOperation, Boolean& closeStoreConnectionOnFailure) --- End of inner exception stack trace --- at System.Data.EntityClient.EntityConnection.OpenStoreConnectionIf(Boolean openCondition, DbConnection storeConnectionToOpen, DbConnection originalConnection, String exceptionCode, String attemptedOperation, Boolean& closeStoreConnectionOnFailure) at System.Data.EntityClient.EntityConnection.Open() at System.Data.Objects.ObjectContext.EnsureConnection() at System.Data.Objects.ObjectQuery`1.GetResults(Nullable`1 forMergeOption) at System.Data.Objects.ObjectQuery`1.System.Collections.Generic.IEnumerable<T>.GetEnumerator() at System.Linq.Enumerable.FirstOrDefault[TSource](IEnumerable`1 source)
at WSCinema.CinemaService.Movie() in D:\Documents\My Dropbox\Projects\sd.v0910\trab3\code\WSCinema\CinemaService.asmx.cs:line 46
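
    The inner exception points at SQL Express "user instances", which require a loaded Windows user profile that IIS 7.5 application-pool identities do not get by default. Two hedged options: enable Load User Profile on the application pool (the loadUserProfile setting in the pool's advanced settings), or drop the user instance from the connection string. A sketch of the second; names are illustrative, and with Entity Framework these keywords belong in the provider connection string portion of the entity connection string:

        <!-- Sketch: connect to the named SQL Express instance directly instead
             of spawning a per-user instance; the app-pool identity then needs
             a login on that instance rather than a local user profile. -->
        <connectionStrings>
          <add name="CinemaDb"
               connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=Cinema;Integrated Security=True;User Instance=False"
               providerName="System.Data.SqlClient" />
        </connectionStrings>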

    Read the article

  • My jQuery cookies are not resetting, even though I am using the correct code

    - by Adam Libonatti-Roche
    My problem is that I am trying to reset some form cookies so when someone has completed their form, they are reset and it is possible for someone else to complete the form. Simple and obvious. But however many different lines of code I put in, the cookies just do not seem to be disappearing. I am using the remember function from the site below: Komodo Media. So the details stay when they move away from the page; the code I have for the page start is as follows: <script type="text/javascript"> function remember( selector ){ $(selector).each( function(){ //if this item has been cookied, restore it var name = $(this).attr('name'); if( $.cookie( name ) ){ if( $(this).is(':checkbox') ){ $(this).attr('checked',$.cookie( name )); }else{ $(this).val( $.cookie(name) ); } } //assign a change function to the item to cookie it $(this).change( function(){ if( $(this).is(':checkbox') ){ $.cookie(name, $(this).attr('checked'), { path: '/', expires: 1 }); }else{ $.cookie(''+name+'', $(this).val(), { path: '/', expires: 1 }); } }); }); } // JQUERY FOR THIS PAGE $(document).ready( function(){ remember("[name=username]"); remember("[name=firstname]"); remember("[name=lastname]"); remember("[name=email]"); remember("[name=password]"); remember("[name=address1]"); remember("[name=address2]"); remember("[name=postcode]"); remember("[name=country]"); } ); </script> And the code for resetting them is simple enough, as it takes the cookie name and sets it to null. However, this does not work, as on returning to the form, all fields from before are still there. Any help with this would be brilliant.
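
    With the jQuery cookie plugin, a cookie only disappears if it is deleted with the same path it was set with; since remember() sets { path: '/' }, a reset without that option targets a different cookie. A sketch of a reset that mirrors the options used above:

        // Sketch: a null value deletes the cookie, but only when the path
        // matches the one used in $.cookie(name, value, { path: '/', ... }).
        function forget(names) {
            $.each(names, function (i, name) {
                $.cookie(name, null, { path: '/' });
            });
        }

        // after a successful submit:
        forget(['username', 'firstname', 'lastname', 'email',
                'password', 'address1', 'address2', 'postcode', 'country']);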

    Read the article

  • Problem displaying XML in GridView (newbie)

    - by Dean
    I am trying to do something in Visual Web Developer 2008 Express that I thought would be simple, but it is not working. I want to display data from an XML file, so I added the XMLDataSource to my page, pointed it to the XML file, and then added the GridView and connected it to the data source. I am getting the following error: GridView - GridView1: There was an error rendering the control. The data source for GridView with id 'GridView1' did not have any properties or attributes from which to generate columns. Ensure that your data source has content. Could someone please tell me what I might be doing wrong, TIA Dean A snippet from my XML is as follows: 6019 - Renaissance MS - New School Renaissance MS 7155 Hall Road Fairburn, GA 30213 NS-6019200-LA-01 New School Close-out NS-6019200 0.000000000000000e+000 The construction of the new Renaissance MS will be at the intersection of Jones/Hall Road, in the districts 7th & 9F and Land Lots 117, 143 & 146 of Fulton County, GA. The work includes the construction of the 180,500 square foot building that will house 34 standard classrooms, 12 standard science labs, 20 special purpose classrooms, cafeteria and litchen, gymnasium, media center and administrative offices. The site will also have multi-purpose playfields with track, softball field, tennis courts and basketball/volleyball court. Terry O'Brien Parsons Stevens Wilkinson Stang Newdow Barton Malow -84.62242 33.61497
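
    A likely cause (not confirmed without seeing the markup): GridView auto-generates columns only from XML attributes, so element-only XML produces exactly this "did not have any properties or attributes" error. A sketch with hypothetical Schools/School/Name element names; element values can also be reshaped into attributes with an XSLT file via the XmlDataSource TransformFile property.

        <!-- Sketch: point XPath at the repeating element and bind values
             explicitly instead of relying on auto-generated columns. -->
        <asp:XmlDataSource ID="XmlDataSource1" runat="server"
            DataFile="~/App_Data/schools.xml"
            XPath="/Schools/School" />
        <asp:GridView ID="GridView1" runat="server"
            DataSourceID="XmlDataSource1" AutoGenerateColumns="false">
            <Columns>
                <asp:TemplateField HeaderText="Name">
                    <ItemTemplate><%# XPath("Name") %></ItemTemplate>
                </asp:TemplateField>
            </Columns>
        </asp:GridView>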

    Read the article

  • C++ Serialization Clean XML Similar to XSTREAM

    - by disown
    I need to write a Linux C++ app which saves its settings in XML format (for easy hand editing) and also communicates with existing apps through XML messages over sockets and HTTP. Problem is that I haven't been able to find any intelligent libs to help me; I don't particularly feel like writing DOM or SAX code just to write and read some very simple messages. Boost Serialization was almost a match, but it adds a lot of Boost-specific data to the XML it generates. This obviously doesn't work well for interchange formats. I'm wondering if it is possible to make Boost Serialization or some other C++ serialization library generate clean XML. I don't mind if there are some required extra attributes - like a version attribute, but I'd really like to be able to control their naming and also get rid of 'features' that I don't use - tracking_level and class_id, for instance. Ideally I would just like to have something similar to XStream in Java. I am aware of the fact that C++ lacks introspection and that it is therefore necessary to do some manual coding - but it would be nice if there was a clean solution to just read and write simple XML without kludges! If this cannot be done I am also interested in tools where the XML schema is the canonical resource (contract first) - a good JAXB alternative for C++. So far I have only found commercial solutions like CodeSynthesis XSD. I would prefer open source solutions. I have tried gSOAP - but it generates really ugly code and it is also SOAP-specific. In desperation I also started looking at alternative serialization formats for protocol buffers. This exists - but only for Java! It really surprises me that protocol buffers seem to be a better supported data interchange format than XML. I'm going mad just finding libs for this app and I really need some new ideas. Anyone?
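
    One lightweight middle ground worth sketching (a suggestion, not a drop-in XStream): Boost.PropertyTree reads and writes plain XML with no tracking attributes, at the cost of writing the field mapping by hand. The struct and paths below are illustrative.

        #include <boost/property_tree/ptree.hpp>
        #include <boost/property_tree/xml_parser.hpp>
        #include <string>

        // The XML stays clean: <settings><host>...</host><port>...</port></settings>
        struct Settings {
            std::string host;
            int port;
        };

        void save(const Settings& s, const std::string& file) {
            boost::property_tree::ptree pt;
            pt.put("settings.host", s.host);
            pt.put("settings.port", s.port);
            boost::property_tree::write_xml(file, pt);
        }

        Settings load(const std::string& file) {
            boost::property_tree::ptree pt;
            boost::property_tree::read_xml(file, pt);
            Settings s;
            s.host = pt.get<std::string>("settings.host");
            s.port = pt.get<int>("settings.port", 0);  // 0 as fallback default
            return s;
        }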

    Read the article

  • find(:all) and then add data from another table to the object

    - by Koning Baard XIV
    I have two tables: create_table "friendships", :force => true do |t| t.integer "user1_id" t.integer "user2_id" t.boolean "hasaccepted" t.datetime "created_at" t.datetime "updated_at" end and create_table "users", :force => true do |t| t.string "email" t.string "password" t.string "phone" t.boolean "gender" t.datetime "created_at" t.datetime "updated_at" t.string "firstname" t.string "lastname" t.date "birthday" end I need to show the user a list of friend requests, so I use this method in my controller: def getfriendrequests respond_to do |format| case params[:id] when "to_me" @friendrequests = Friendship.find(:all, :conditions => { :user2_id => session[:user], :hasaccepted => false }) when "from_me" @friendrequests = Friendship.find(:all, :conditions => { :user1_id => session[:user], :hasaccepted => false }) end format.xml { render :xml => @friendrequests } format.json { render :json => @friendrequests } end end I do nearly everything using AJAX, so to fetch the first and last name of the user with UID user2_id (the to_me param comes later, don't worry right now), I need a for loop which makes multiple AJAX calls. This sucks and costs a lot of bandwidth. So I'd rather have getfriendrequests also return the first and last name of the corresponding users, so, e.g. the JSON response would not be: [ { "friendship": { "created_at": "2010-02-19T13:51:31Z", "user1_id": 2, "updated_at": "2010-02-19T13:51:31Z", "hasaccepted": false, "id": 11, "user2_id": 3 } }, { "friendship": { "created_at": "2010-02-19T16:31:23Z", "user1_id": 2, "updated_at": "2010-02-19T16:31:23Z", "hasaccepted": false, "id": 12, "user2_id": 4 } } ] but rather: [ { "friendship": { "created_at": "2010-02-19T13:51:31Z", "user1_id": 2, "updated_at": "2010-02-19T13:51:31Z", "hasaccepted": false, "id": 11, "user2_id": 3, "firstname": "Jon", "lastname": "Skeet" } }, { "friendship": { "created_at": "2010-02-19T16:31:23Z", "user1_id": 2, "updated_at": "2010-02-19T16:31:23Z", "hasaccepted": false, "id": 12, "user2_id": 4, "firstname": "Mark", "lastname": "Gravell" } } ] I thought of a for loop in the getfriendrequests method, but I don't know how to implement this, and maybe there is an easier way. It must also work for XML. Can anyone help me? Thanks
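
    A sketch in Rails 2.x style (the belongs_to declaration is an assumption matching the user2_id column): declare the association once, then let the serializer inline the names in a single response.

        # Sketch: point the friendship at the requesting user once...
        class Friendship < ActiveRecord::Base
          belongs_to :user2, :class_name => 'User', :foreign_key => 'user2_id'
        end

        # ...then embed just the names in both output formats:
        format.json do
          render :json => @friendrequests.to_json(
            :include => { :user2 => { :only => [:firstname, :lastname] } })
        end
        format.xml do
          render :xml => @friendrequests.to_xml(
            :include => { :user2 => { :only => [:firstname, :lastname] } })
        end

    This nests the names under a user2 key rather than flattening them into the friendship hash, but it serves both JSON and XML from one query with :include => :user2 added to the find to avoid N+1 loads.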

    Read the article
