Search Results

Search found 26931 results on 1078 pages for 'bit fields'.


  • PHP - Processing Invalid XML

    - by Paul
    I'm using SimpleXML to load in some XML files (which I didn't write/provide and can't really change the format of). Occasionally (e.g. one or two files out of every 50 or so) they don't escape any special characters (mostly &, but sometimes other random invalid things too). This creates an issue because SimpleXML in PHP just fails, and I don't really know of any good way to handle parsing invalid XML. My first idea was to preprocess the XML as a string and put ALL fields in as CDATA so it would work, but for some ungodly reason the XML I need to process puts all of its data in attribute fields. Thus I can't use the CDATA idea. An example of the XML: <Author v="By Someone & Someone" /> What's the best way to process this to replace all the invalid characters in the XML before I load it with SimpleXML?
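
    One workable approach, sketched below in PHP, is to escape bare ampersands (the most common offender) with a regular expression before handing the string to SimpleXML. The helper name and regex are illustrative, not a complete fix for every kind of invalid markup.

      <?php
      // Sketch: escape '&' characters that are not already part of an entity,
      // then parse with SimpleXML. This only handles the most common breakage.
      function load_dirty_xml($xml)
      {
          $fixed = preg_replace(
              '/&(?!(?:amp|lt|gt|quot|apos|#[0-9]+|#x[0-9A-Fa-f]+);)/',
              '&amp;',
              $xml
          );
          return simplexml_load_string($fixed);
      }

      $doc = load_dirty_xml('<Author v="By Someone & Someone" />');
      echo (string) $doc['v']; // prints: By Someone & Someone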

    Read the article

  • How do I use Perl's WWW::Facebook::API to publish to a user's newsfeed?

    - by Russell C.
    We use Facebook Connect on our site in conjunction with the WWW::Facebook::API CPAN module to publish to our users' newsfeeds when requested by the user. So far we've been able to successfully update the user's status using the following code: use WWW::Facebook::API; my $facebook = WWW::Facebook::API->new( desktop => 0, api_key => $fb_api_key, secret => $fb_secret, session_key => $query->cookie($fb_api_key.'_session_key'), session_expires => $query->cookie($fb_api_key.'_expires'), session_uid => $query->cookie($fb_api_key.'_user') ); my $response = $facebook->stream->publish( message => qq|Test status message|, ); However, when we try to extend the code above to publish newsfeed stories that include attachments and action links, as specified in the Facebook API documentation for Stream.Publish, nothing works; we have tried about 100 different ways without success. According to the CPAN documentation all we should have to do is update our code to something like the following and pass the attachments and action links appropriately, which doesn't seem to work: my $response = $facebook->stream->publish( message => qq|Test status message|, attachment => $json, action_links => [@links], ); For example, we are passing the above arguments as follows: $json = qq|{ 'name': 'i\'m bursting with joy', 'href': ' http://bit.ly/187gO1', 'caption': '{*actor*} rated the lolcat 5 stars', 'description': 'a funny looking cat', 'properties': { 'category': { 'text': 'humor', 'href': 'http://bit.ly/KYbaN'}, 'ratings': '5 stars' }, 'media': [{ 'type': 'image', 'src': 'http://icanhascheezburger.files.wordpress.com/2009/03/funny-pictures-your-cat-is-bursting-with-joy1.jpg', 'href': 'http://bit.ly/187gO1'}] }|; @links = ["{'text':'Link 1', 'href':'http://www.link1.com'}","{'text':'Link 2', 'href':'http://www.link2.com'}"]; Neither the above nor any of the other representations we tried seems to work. I'm hoping some other Perl developer out there has this working and can explain how to create the attachment and action_links variables appropriately in Perl for posting to the Facebook news feed through WWW::Facebook::API. Thanks in advance for your help!
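
    One thing worth checking, sketched below, is that the attachment is valid JSON (double-quoted keys and strings) and that action_links is a JSON array rather than a Perl list of fragment strings. Using a JSON encoder avoids hand-quoting mistakes; this assumes stream->publish wants JSON strings for both arguments and reuses the $facebook object from the question.

      use strict;
      use warnings;
      use JSON qw(encode_json);   # build valid JSON instead of hand-quoting it

      # Sketch: encode_json produces double-quoted, properly escaped JSON.
      my $attachment = encode_json({
          name        => "i'm bursting with joy",
          href        => 'http://bit.ly/187gO1',
          caption     => '{*actor*} rated the lolcat 5 stars',
          description => 'a funny looking cat',
          media       => [{
              type => 'image',
              src  => 'http://icanhascheezburger.files.wordpress.com/2009/03/funny-pictures-your-cat-is-bursting-with-joy1.jpg',
              href => 'http://bit.ly/187gO1',
          }],
      });

      my $action_links = encode_json([
          { text => 'Link 1', href => 'http://www.link1.com' },
          { text => 'Link 2', href => 'http://www.link2.com' },
      ]);

      my $response = $facebook->stream->publish(
          message      => 'Test status message',
          attachment   => $attachment,
          action_links => $action_links,
      );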

    Read the article

  • Prevent SQL injection from form-generated SQL.

    - by Markos Fragkakis
    Hi all, I have a search table where the user will be able to filter results with a filter of the type: Field [Name], Value [John], Remove Rule; Field [Surname], Value [Blake], Remove Rule; Field [Has Children], Value [Yes], Remove Rule; Add Rule. So the user will be able to set an arbitrary set of filters, which essentially results in a completely dynamic WHERE clause. In the future I will also have to implement more complicated logical expressions, like WHERE (name=John OR name=Nick) AND (surname=Blake OR surname=Bourne). Of the 10 fields the user may or may not filter by, I don't know how many or which filters the user will set. So I cannot use a prepared statement (which assumes that we at least know the fields in the WHERE clause). This is why prepared statements are unfortunately out of the question; I have to do it with plain old generated SQL. What measures can I take to protect the application from SQL injection (regex-wise or any other way)?
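
    For what it's worth, prepared statements are not entirely ruled out by a dynamic field list: the column names can be validated against a whitelist and only the values need placeholders. A sketch of that idea in PHP with PDO follows; the table name, column whitelist and $filters array are illustrative.

      <?php
      // Sketch: build a dynamic WHERE clause safely. Column names come from a
      // whitelist, values go in as bound parameters. Names here are illustrative.
      $allowed = array('name', 'surname', 'has_children');

      $clauses = array();
      $values  = array();
      foreach ($filters as $field => $value) {      // $filters comes from the form
          if (!in_array($field, $allowed, true)) {
              continue;                             // ignore anything not whitelisted
          }
          $clauses[] = "$field = ?";
          $values[]  = $value;
      }

      $sql = 'SELECT * FROM people';
      if ($clauses) {
          $sql .= ' WHERE ' . implode(' AND ', $clauses);
      }
      $stmt = $pdo->prepare($sql);                  // $pdo is an existing PDO handle
      $stmt->execute($values);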

    Read the article

  • Why does the .NET tab in the 'Add Reference' dialog in Visual Studio not list the contents of the GAC?

    - by abroun
    Duplicate of: Getting assemblies to show in the .NET tab of Add Reference So, I'm using Visual C# 2008 Express Edition and I've just been on a bit of a detour as I found out that my assumption that the .NET tab of the 'Add Reference' dialog lists the contents of the GAC was incorrect. This was a bit of a problem for me as the assembly that I wanted to reference from my project was only available in the GAC. (It was Microsoft.XNA.Framework v2.0 obtained from the XNA 2.0 redistributable and as far as I could see it installed only into the GAC). I worked round the problem by setting the reference to Microsoft.XNA.Framework manually in the .csproj file and then getting a copy of the DLL out of the cache. I was then able to create a directory for the DLL, add it to Visual Studio's list of assembly directories in the registry and then voila! I could see it in the .NET tab. This all seems like a bit of a faff to me and I don't think that my initial assumption (that the .NET tab shows the contents of the GAC) was that unreasonable or would be that uncommon. Can someone who knows more than me tell me why the contents of the GAC aren't shown? The documentation just says that they are not, but is there a good reason? Is there actually a way to get the entire contents of the GAC to be listed? A tick box I've missed somewhere? Any info much appreciated.
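
    For the registry step described above, the key Visual Studio scans is AssemblyFoldersEx under the relevant framework version. A sketch of a .reg file follows; the subkey name, folder path and framework version are illustrative and may differ per machine.

      Windows Registry Editor Version 5.00

      ; Sketch: register a folder so its assemblies appear in the Add Reference
      ; .NET tab. The framework version key may differ on your machine.
      [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v2.0.50727\AssemblyFoldersEx\My XNA Assemblies]
      @="C:\\References\\XNA"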

    Read the article

  • IDL in ATL/COM: Can I publish a const of a complex type?

    - by Ptah- Opener of the Mouth
    I know how to publish a const of a simple type in IDL, for example: const long blah = 37 But I want to publish consts of complex types, with methods, or at least readable struct-like member fields. For example, perhaps a type called CarType, which has accessor fields like "get_Make", "get_Model", "get_Year", "get_BasePrice", et cetera. Then I would like to publish const instances, such as FORD_PINTO_1973. (Please don't read too much into the example, to tell me that this particular example would lend itself better to just regular classes without const instances or something like that). I have no idea how I would define, in IDL, the fact that FORD_PINTO_1973 has a Year field of 1973. Thanks in advance for any help.
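
    IDL itself has no syntax for a const of interface or struct type, so the usual pattern is a read-only interface plus a lookup/factory method that hands back the pre-built instance. A rough MIDL sketch of that shape follows; the interface names, property list and placeholder UUIDs are all illustrative.

      // Sketch only: read-only accessors on the complex type, plus a lookup
      // method that returns the "constant" instance. UUIDs are placeholders.
      [object, uuid(00000000-0000-0000-0000-000000000001), dual]
      interface ICarType : IDispatch
      {
          [propget, id(1)] HRESULT Make ([out, retval] BSTR* pVal);
          [propget, id(2)] HRESULT Model([out, retval] BSTR* pVal);
          [propget, id(3)] HRESULT Year ([out, retval] long* pVal);
          [propget, id(4)] HRESULT BasePrice([out, retval] double* pVal);
      };

      [object, uuid(00000000-0000-0000-0000-000000000002), dual]
      interface ICarCatalog : IDispatch
      {
          // e.g. GetWellKnown(L"FORD_PINTO_1973") returns the prebuilt object
          [id(1)] HRESULT GetWellKnown([in] BSTR name, [out, retval] ICarType** ppCar);
      };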

    Read the article

  • Pseudo code for instruction description

    - by Claus
    Hi, I am just trying to fiddle around with what is the best and shortest way to describe two simple instructions with C-like pseudo code. The extract instruction is defined as follows: extract rd, rs, imm This instruction extracts the appropriate byte from the 32-bit source register rs and right justifies it in the destination register. The byte is specified by imm and thus can take the values 0 (for the least-significant byte) to 3 (for the most-significant byte). rd = 0x0; // zero-extend result, i.e. to make sure bits 31 to 8 are set to zero in the result rd = (rs && (0xff << imm)) >> imm; // this extracts the appropriate byte and stores it in rd The insert instruction can be regarded as the inverse operation and it takes a right justified byte from the source register rs and deposits it in the appropriate byte of the destination register rd; again, this byte is determined by the value of imm tmp = 0x0 XOR (rs << imm)) // shift the byte to the appropriate byte determined by imm rd = (rd && (0x00 << imm)) // set appropriate byte to zero in rd rd = rd XOR tmp // XOR the byte into the destination register This looks all a bit horrible, so I wonder if there is a little more elegant way to describe this behaviour in C-like style ;) Many thanks, Claus
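
    A slightly cleaner C-style description might look like the sketch below. It assumes imm selects a byte, so the shift amount is imm * 8, and it uses bitwise & rather than the logical && that appears in the snippets above.

      #include <stdint.h>

      /* Sketch of the two instructions in plain C. Assumes imm is 0..3 and that
         a byte offset means a shift of imm * 8 bits. */

      /* extract rd, rs, imm : take byte 'imm' of rs, right-justified, zero-extended */
      uint32_t extract(uint32_t rs, unsigned imm)
      {
          return (rs >> (imm * 8)) & 0xFFu;
      }

      /* insert rd, rs, imm : deposit the low byte of rs into byte 'imm' of rd */
      uint32_t insert(uint32_t rd, uint32_t rs, unsigned imm)
      {
          uint32_t mask = 0xFFu << (imm * 8);            /* the byte being replaced */
          return (rd & ~mask) | ((rs & 0xFFu) << (imm * 8));
      }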

    Read the article

  • Django : json serialize a queryset which uses defer() or only()

    - by PlanetUnknown
    I've been using the JSON serializer and it works great. I had to modify my queries when I started using the only() and defer() filters, like so - retObj = OBJModel.objects.defer("create_dt").filter(loged_in_dt__gte=dtStart) After doing the above, suddenly the JSON serializer returns empty fields - {"pk": 19047, "model": "OBJModel_deferred_create_dt", "fields": {}} If I remove the defer(), the serializer gives all the data correctly. import json from django.utils import simplejson from django.core import serializers json_serializer = serializers.get_serializer("json")() retObj = OBJModel.objects.defer("create_dt").filter(loged_in_dt__gte=dtStart) json_serializer.serialize(retObj, ensure_ascii=False) I've scratched my head on this for a while now. Any insight would be great. NOTE: I am using Django 1.1
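
    The "OBJModel_deferred_create_dt" model name in the output is the proxy class Django builds for deferred querysets, which the serializer appears not to handle here. One workaround, sketched below, is to skip the model serializer for these queries and build the JSON from values() instead; the listed field names are illustrative apart from loged_in_dt.

      import json  # or: from django.utils import simplejson as json on older setups

      # Sketch: serialize plain dicts instead of deferred model instances.
      rows = (OBJModel.objects
              .filter(loged_in_dt__gte=dtStart)
              .values('id', 'loged_in_dt'))          # name only the fields you need

      payload = json.dumps(list(rows), default=str)  # default=str copes with datetimes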

    Read the article

  • Hibernate + Spring : cascade deletion ignoring non-nullable constraints

    - by E.Benoît
    Hello, I seem to be having one weird problem with some Hibernate data classes. In a very specific case, deleting an object should fail due to existing, non-nullable relations - however it does not. The strangest part is that a few other classes related to the same definition behave appropriately. I'm using HSQLDB 1.8.0.10, Hibernate 3.5.0 (final) and Spring 3.0.2. The Hibernate properties are set so that batch updates are disabled. The class whose instances are being deleted is: @Entity( name = "users.Credentials" ) @Table( name = "credentials" , schema = "users" ) public class Credentials extends ModelBase { private static final long serialVersionUID = 1L; /* Some basic fields here */ /** Administrator credentials, if any */ @OneToOne( mappedBy = "credentials" , fetch = FetchType.LAZY ) public AdminCredentials adminCredentials; /** Active account data */ @OneToOne( mappedBy = "credentials" , fetch = FetchType.LAZY ) public Account activeAccount; /* Some more reverse relations here */ } (ModelBase is a class that simply declares a Long field named "id" as being automatically generated) The Account class, which is one for which constraints work, looks like this: @Entity( name = "users.Account" ) @Table( name = "accounts" , schema = "users" ) public class Account extends ModelBase { private static final long serialVersionUID = 1L; /** Credentials the account is linked to */ @OneToOne( optional = false ) @JoinColumn( name = "credentials_id" , referencedColumnName = "id" , nullable = false , updatable = false ) public Credentials credentials; /* Some more fields here */ } And here is the AdminCredentials class, for which the constraints are ignored. @Entity( name = "admin.Credentials" ) @Table( name = "admin_credentials" , schema = "admin" ) public class AdminCredentials extends ModelBase { private static final long serialVersionUID = 1L; /** Credentials linked with an administrative account */ @OneToOne( optional = false ) @JoinColumn( name = "credentials_id" , referencedColumnName = "id" , nullable = false , updatable = false ) public Credentials credentials; /* Some more fields here */ } The code that attempts to delete the Credentials instances is: try { if ( account.validationKey != null ) { this.hTemplate.delete( account.validationKey ); } this.hTemplate.delete( account.languageSetting ); this.hTemplate.delete( account ); } catch ( DataIntegrityViolationException e ) { return false; } Where hTemplate is a HibernateTemplate instance provided by Spring, its flush mode having been set to EAGER. In the conditions shown above, the deletion will fail if there is an Account instance that refers to the Credentials instance being deleted, which is the expected behaviour. However, an AdminCredentials instance will be ignored, the deletion will succeed, leaving an invalid AdminCredentials instance behind (trying to refresh that instance causes an error because the Credentials instance no longer exists). I have tried moving the AdminCredentials table from the admin DB schema to the users DB schema. Strangely enough, a deletion-related error is then triggered, but not in the deletion code - it is triggered at the next query involving the table, seemingly ignoring the flush mode setting. I've been trying to understand this for hours and I must admit I'm just as clueless now as I was then.

    Read the article

  • UITextFields losing values after UINavigationController activity

    - by Dan Ray
    This is going to be hard to demonstrate in code, but maybe you can picture it with me. I have a view that contains two UITextFields, "title" and "descr". That same view contains two UIButtons that push another controller onto the navController to get more detail from the user about the object we're assembling and ultimately uploading to my server. It appears that pushing another view on, doing something, and popping it back off results in the two UITextFields keeping their content VISUALLY, but the .text property of those fields becomes NULL. I've confirmed that if I do my two push-pop fields before filling in those UITextFields, I get my data when I upload, and if I do them in the opposite order, I don't. It LOOKS like there's data there, but I get nothing when I NSLog their .text properties. Is this normal? Do I need to just design around this? Or is this as weird as it seems, and I should be looking deeper at causes of this?

    Read the article

  • Using one sqlconnection/sqlcommand through 2 DB-bound methods

    - by dotnetdev
    I have a class with a method which gets data from a database through a SELECT stored proc. This method uses a SqlDataReader by returning ExecuteReader() on a SqlCommand. The connection and everything is made in this method, with fields (such as the connection string) set as class-level fields. I now need to populate an array based on the results of this query (so the columns of each row will go into the array with its own index). However, this query will not select all of the data from one table which is involved. I can write the queries to get what I need, but how can I use one connection throughout the class? If I instantiate the connection object and call Open() in the constructor, I get an exception at runtime. I'm hoping for something like this: // At class level: sqlconn.Open(); // sqlcommand set up Method() { // Fire stored proc // Insert results in a collection } Method2() { // Pass same collection in (use same one), // Add new row columns into same collection } How can I do this with strict performance in mind? Thanks
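
    One way to share a single connection, sketched below, is to open it once in the constructor (after the connection string is definitely known) and let both methods reuse it, disposing it with the class. The repository name, stored proc names and column reads are placeholders.

      using System;
      using System.Collections.Generic;
      using System.Data;
      using System.Data.SqlClient;

      // Sketch: one connection owned by the class, opened once and reused by both
      // methods. The proc names and column reads are placeholders.
      public class MarkRepository : IDisposable
      {
          private readonly SqlConnection _conn;

          public MarkRepository(string connectionString)
          {
              // The connection string must be known before this runs; an exception
              // here usually means the field it comes from was not yet initialised.
              _conn = new SqlConnection(connectionString);
              _conn.Open();
          }

          public List<string[]> LoadFirstSet()
          {
              var rows = new List<string[]>();
              AddRows("dbo.GetFirstSet", rows);
              return rows;
          }

          public void LoadSecondSet(List<string[]> rows)
          {
              AddRows("dbo.GetSecondSet", rows);   // same collection, same connection
          }

          private void AddRows(string procName, List<string[]> rows)
          {
              using (var cmd = new SqlCommand(procName, _conn) { CommandType = CommandType.StoredProcedure })
              using (var reader = cmd.ExecuteReader())
              {
                  while (reader.Read())
                      rows.Add(new[] { reader.GetString(0), reader.GetString(1) });
              }
          }

          public void Dispose()
          {
              _conn.Dispose();
          }
      }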

    Read the article

  • Any simple approaches for managing customer data change requests for global reference files?

    - by Kelly Duke
    For the first time, I am developing in an environment in which there is a central repository for a number of different industry standard reference data tables and many different customers who need to select records from these industry standard reference data tables to fill in foreign key information for their customer specific records. Because these industry standard reference files are utilized by all customers, I want to reserve Create/Update/Delete access to these records for global product administrators. However, I would like to implement a (semi-)automated interface by which specific customers could request record additions, deletions or modifications to any of the industry standard reference files that are shared among all customers. I know I need something like a "data change request" table specifying: user id, user request datetime, request type (insert, modify, delete), a user entered text explanation of the change request, the user request's current status (pending, declined, completed), admin resolution datetime, admin id, an admin entered text description of the resolution, etc. What I can't figure out is how to elegantly handle the fact that these data change requests could apply to dozens of different tables with differing table column definitions. I would like to give the customer users making these data change requests a convenient way to enter their proposed record additions/modifications directly into CRUD screens that look very much like the reference table CRUD screens they don't have write/delete permissions for (with an additional text explanation and perhaps request priority field). I would also like to give the global admins a tool that allows them to view all the outstanding data change requests for the users they oversee sorted by date requested or user/date requested. Upon selecting a data change request record off the list, the admin would be directed to another CRUD screen that would be populated with the fields the customer users requested for the new/modified industry standard reference table record along with customer's text explanation, the request status and the text resolution explanation field. At this point the admin could accept/edit/reject the requested change and if accepted the affected industry standard reference file would be automatically updated with the appropriate fields and the data change request record's status, text resolution explanation and resolution datetime would all also be appropriately updated. However, I want to keep the actual production reference tables as simple as possible and free from these extraneous and typically null customer change request fields. I'd also like the data change request file to aggregate all data change requests across all the reference tables yet somehow "point to" the specific reference table and primary key in question for modification & deletion requests or the specific reference table and associated customer user entered field values in question for record creation requests. Does anybody have any ideas of how to design something like this effectively? Is there a cleaner, simpler way I am missing? Thank you so much for reading.
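
    One common shape for this, sketched below in SQL Server flavour, is a generic request header that points at a reference table by name plus the row's primary key, with the proposed field values stored as name/value pairs in a child table so the production reference tables stay untouched. All names and types here are illustrative.

      -- Sketch: one request header that can point at any reference table, plus a
      -- child table holding the proposed field values as name/value pairs.
      CREATE TABLE data_change_request (
          request_id         INT IDENTITY PRIMARY KEY,
          requested_by       INT          NOT NULL,      -- customer user id
          requested_at       DATETIME     NOT NULL,
          request_type       CHAR(1)      NOT NULL,      -- I = insert, M = modify, D = delete
          target_table       VARCHAR(128) NOT NULL,      -- which reference table
          target_record_id   INT          NULL,          -- PK of the row, NULL for inserts
          request_comment    VARCHAR(MAX) NULL,
          status             CHAR(1)      NOT NULL,      -- P = pending, C = completed, X = declined
          resolved_by        INT          NULL,
          resolved_at        DATETIME     NULL,
          resolution_comment VARCHAR(MAX) NULL
      );

      CREATE TABLE data_change_request_field (
          request_id   INT          NOT NULL REFERENCES data_change_request(request_id),
          field_name   VARCHAR(128) NOT NULL,
          field_value  VARCHAR(MAX) NULL,
          PRIMARY KEY (request_id, field_name)
      );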

    Read the article

  • Authenticated WCF: Getting the Current Security Context

    - by bradhe
    I have the following scenario: I have various user's data stored in my database. This data was entered via a web app. We'd like to expose this data back to the user over a web service so that they can integrate their data with their applications. We would also like to expose some business logic over these services. As such we do not want to use OData. This is a multi-tenant application so I only want to expose their data back to them and not other users. Likewise, the business logic we expose should be relative to the authenticated user. I would like let the user use an OASIS scheme to authenticate with the web service -- WCF already allows for this out of the box as far as I understand -- or perhaps we can issue them certificates to authenticate with. That bit hasn't really been worked out yet. Here is a bit of pseudo-code of how I envision this would work within the service: function GetUsersData(id) var user := Lookup User based on Username from Auth Context var data := Get Data From Repository based on "user" return data end function For the business logic scenario I think it would look something like this: function PerformBusinessLogic(someData) var user := Lookup User based on Username from Auth Context var returnValue := Perform some logic based on supplied data return returnValue end function The hard bit here is getting the current username (or cert info in the cert scenario) that the user authenticated with! Does WCF even enable this scenario? If not would WSE3 enable this? Thanks,
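
    WCF does expose the authenticated caller inside an operation through ServiceSecurityContext. A minimal sketch follows; IDataService, UserData and the repository calls are placeholders, not real types from the question.

      using System.ServiceModel;

      // Sketch: the authenticated caller is available inside an operation via
      // ServiceSecurityContext. The surrounding types are placeholders.
      public class DataService : IDataService
      {
          public UserData GetUsersData(int id)
          {
              string userName = ServiceSecurityContext.Current.PrimaryIdentity.Name;
              // With certificate authentication, PrimaryIdentity is built from the
              // client certificate rather than a username.
              var user = UserRepository.FindByUserName(userName);
              return DataRepository.GetForUser(user, id);
          }
      }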

    Read the article

  • Fast sign in C++ float...are there any platform dependencies in this code?

    - by Patrick Niedzielski
    Searching online, I have found the following routine for calculating the sign of a float in IEEE format. This could easily be extended to a double, too. // returns 1.0f for positive floats, -1.0f for negative floats, 0.0f for zero inline float fast_sign(float f) { if (((int&)f & 0x7FFFFFFF)==0) return 0.f; // test exponent & mantissa bits: is input zero? else { float r = 1.0f; (int&)r |= ((int&)f & 0x80000000); // mask sign bit in f, set it in r if necessary return r; } } (Source: ``Fast sign for 32 bit floats'', Peter Schoffhauzer) I am wary of using this routine, though, because of the bit binary operations. I need my code to work on machines with different byte orders, but I am not sure how much of this the IEEE standard specifies, as I couldn't find the most recent version, published this year. Can someone tell me if this will work, regardless of the byte order of the machine? Thanks, Patrick
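
    As a side note, the byte order of memory rarely matters here because the cast reinterprets the whole 32-bit value, but the (int&) cast does break strict-aliasing rules. Two alternatives that avoid the type-punning concern are sketched below; the first keeps the same behaviour as the original (zero for ±0.0f).

      #include <cstring>

      // Sketch 1: read the bits through memcpy, which is well defined regardless
      // of aliasing rules; the masks operate on the value, not on byte order.
      inline float fast_sign_bits(float f)
      {
          unsigned int bits;
          std::memcpy(&bits, &f, sizeof bits);          // assumes 32-bit unsigned int
          if ((bits & 0x7FFFFFFFu) == 0) return 0.0f;   // +0.0f or -0.0f
          return (bits & 0x80000000u) ? -1.0f : 1.0f;
      }

      // Sketch 2: no bit tricks at all; usually fast enough and fully portable.
      inline float slow_sign(float f)
      {
          if (f > 0.0f) return 1.0f;
          if (f < 0.0f) return -1.0f;
          return 0.0f;   // note: unlike the original, NaN also falls through to 0
      }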

    Read the article

  • Spring @Autowired and WebApplicationContext in Tomcat

    - by EugeneP
    @Autowired works only once. What to do to make it wire the bean every time the Servlet is recreated? My web-app (Tomcat6 container) consists of 2 Servlets. Every servlet has private fields. Their setters are marked with @Autowired In the init method I use WebApplicationContextUtils ... autowireBean(this); It autowires the properties marked with @Autowired once - during the initialization of the Servlet. Any other session will see these fields values, they will not be rewired after the previous session is destroyed. What to do to make them rewire them each time a Servlet constructor is called? a) Put the autowiring into the constructor? Or better 2) get a web app context and extract a bean from there?
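
    If the goal is simply to have Spring perform the injection whenever the container initialises the servlet, SpringBeanAutowiringSupport can do that from init(), as sketched below; ReportServlet and ReportService are placeholders for the real servlet and bean. (Note that a servlet instance is shared across requests and sessions, so wiring it once in init() is normally sufficient.)

      import javax.servlet.ServletConfig;
      import javax.servlet.ServletException;
      import javax.servlet.http.HttpServlet;
      import org.springframework.beans.factory.annotation.Autowired;
      import org.springframework.web.context.support.SpringBeanAutowiringSupport;

      // Sketch: re-run injection whenever the container initialises the servlet,
      // instead of wiring the fields by hand.
      public class ReportServlet extends HttpServlet {

          @Autowired
          private transient ReportService reportService;   // placeholder bean

          @Override
          public void init(ServletConfig config) throws ServletException {
              super.init(config);
              SpringBeanAutowiringSupport.processInjectionBasedOnServletContext(
                      this, config.getServletContext());
          }
      }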

    Read the article

  • Select from multiple tables, remove duplicates

    - by staze
    I have two tables in a SQLite DB, and both have the following fields: idnumber, firstname, middlename, lastname, email, login One table has all of these populated, the other doesn't have the idnumber, or middle name populated. I'd LIKE to be able to do something like: select idnumber, firstname, middlename, lastname, email, login from users1,users2 group by login; But I get an "ambiguous" error. Doing something like: select idnumber, firstname, middlename, lastname, email, login from users1 union select idnumber, firstname, middlename, lastname, email, login from users2; LOOKS like it works, but I see duplicates. my understanding is that union shouldn't allow duplicates, but maybe they're not real duplicates since the second user table doesn't have all the fields populated (e.g. "20, bob, alan, smith, [email protected], bob" is not the same as "NULL, bob, NULL, smith, [email protected], bob"). Any ideas? What am I missing? All I want to do is dedupe based on "login". Thanks!
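
    Since the duplicates differ only in the columns users2 leaves empty, one way to dedupe on login, sketched below, is to take users1 as the authoritative source and pull from users2 only the logins users1 doesn't already have.

      -- Sketch: everything from users1, plus the users2 rows whose login
      -- does not already appear in users1.
      SELECT idnumber, firstname, middlename, lastname, email, login
      FROM   users1
      UNION ALL
      SELECT u2.idnumber, u2.firstname, u2.middlename, u2.lastname, u2.email, u2.login
      FROM   users2 u2
      WHERE  u2.login NOT IN (SELECT login FROM users1);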

    Read the article

  • Global find object references in NHibernate

    - by Miral
    Is it possible to perform a global reversed-find on NHibernate-managed objects? Specifically, I have a persistent class called "Io". There are a huge number of fields across multiple tables which can potentially contain an object of that type. Is there a way (given a specific instance of an Io object), to retrieve a list of objects (of any type) that actually do reference that specific object? (Bonus points if it can identify which specific fields actually contain the reference, but that's not critical.) Since the NHibernate mappings define all the links (and the underlying database has corresponding foreign key links), there ought to be some way to do it.

    Read the article

  • Integration of Photoshop and MySQL

    - by NewToProgrammingWorld
    I’m working to integrate Photoshop (CS4, local) with MySQL (Linux, remote), such that: a) image file information, data and meta data can be exported from PS into MySQL where same can be manipulated both manually and calculated, and then b) the MySQL fields can be referenced in Photoshop scripting for further automated image file manipulation, and c) the MySQL fields can be referenced for other purposes such as dynamic propagation of files and associated file data to a website, for example. I’ve spent many hours running searches for these concepts on StackOverflow, Google, and Adobe Support. I’m coming up empty, which leads me to believe that it’s not natively possible, and that I’ll need some middle language like PHP. Does anyone know if it’s possible for Photoshop and MySQL to bi-directionally share data? If so, how? If that’s not possible, what solution(s) might you recommend, in terms of specific tools/technologies that I can research further. Thank you.

    Read the article

  • Insert rows into MySQL table while changing one value

    - by Jonathan
    Hey all- I have a MySQL table that defines values for a specific customer type. | CustomerType | Key Field 1 | Key Field 2 | Value | Each customer type has 10 values associated with it (based on the other two key fields). I am creating a new customer type (TypeB) that is exactly the same as another customer type (TypeA). I want to insert "TypeB" as the CustomerType but then just copy the values from TypeA's rows for the other three fields. Is there an SQL insert statement to make this happen? Something like: insert into customers(customer_type, key1, key2, value) values "TypeB" union select key1, key2, value from customers where customer_type = "TypeA" Thanks- Jonathan
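
    Assuming the column names from the sketch above, an INSERT ... SELECT along these lines should do it (the value column is backquoted because it can clash with a reserved word in MySQL):

      -- Sketch: copy TypeA's rows, substituting the new customer type as a literal.
      INSERT INTO customers (customer_type, key1, key2, `value`)
      SELECT 'TypeB', key1, key2, `value`
      FROM   customers
      WHERE  customer_type = 'TypeA';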

    Read the article

  • Use SharePoint Search to crawl Project Server project metadata?

    - by Kit Menke
    Our environment consists of Project Server 2007 and MOSS 2007. We have around 750 projects and lots of "Enterprise Custom Fields" set up to track all of the metadata associated with a project. Our main requirement is to be able to search/filter/group/sort all of these projects by metadata in SharePoint. Our current process involves syncing this custom metadata into a SharePoint list (which requires a LOT of maintenance). Question: Is it possible to leverage SharePoint search to crawl/index these metadata fields in Project Server? How would I go about setting this up?

    Read the article

  • Working with mongodb from Java

    - by demas
    I have launch mongodb server: [[email protected]][~]% mongod --dbpatmongod --dbpath /home/demas/temp/ Mon Apr 19 09:44:18 Mongo DB : starting : pid = 4538 port = 27017 dbpath = /home/demas/temp/ master = 0 slave = 0 32-bit ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data ** see http://blog.mongodb.org/post/137788967/32-bit-limitations for more Mon Apr 19 09:44:18 db version v1.4.0, pdfile version 4.5 Mon Apr 19 09:44:18 git version: nogitversion Mon Apr 19 09:44:18 sys info: Linux arch.local.net 2.6.33-ARCH #1 SMP PREEMPT Mon Apr 5 05:57:38 UTC 2010 i686 BOOST_LIB_VERSION=1_41 Mon Apr 19 09:44:18 waiting for connections on port 27017 Mon Apr 19 09:44:18 web admin interface listening on port 28017 I have created documents by console client: [[email protected]][~]% mongo MongoDB shell version: 1.4.0 url: test connecting to: test type "help" for help > db.some.find(); { "_id" : ObjectId("4bcbef3c3be43e9b7e04ef3d"), "name" : "mongo" } { "_id" : ObjectId("4bcbef423be43e9b7e04ef3e"), "x" : 3 } Now I am trying to work with MongoDb from Java: import com.mongodb.*; import java.net.UnknownHostException; public class test1 { public static void main(String[] args) { System.out.println("Start"); try { Mongo m = new Mongo("localhost", 27017); DB db = m.getDB("test"); DBCollection coll = db.getCollection("some"); coll.insert(makeDocument(10, "James", "male")); System.out.println("Finish"); } catch (UnknownHostException ex) { ex.printStackTrace(); } catch (MongoException ex) { ex.printStackTrace(); } } public static BasicDBObject makeDocument(int id, String name, String gender) { BasicDBObject doc = new BasicDBObject(); doc.put("id", id); doc.put("name", name); doc.put("gender", gender); return doc; } } But execution stops on line coll.insert(): [[email protected]][~/dev/study/java/mongodb]% javac test1.java [[email protected]][~/dev/study/java/mongodb]% java test1 Start There are not messages from mogodb server regarding accepted connection. Why?

    Read the article

  • How to show the results in a DataTable while using YUI

    - by udaya
    Hi I am using yui to display a datagrid ... <?php $host = "localhost"; //database location $user = "root"; //database username $pass = ""; //database password $db_name = "cms"; //database name //database connection $link = mysql_connect($host, $user, $pass); mysql_select_db($db_name); //sets encoding to utf8 $result = mysql_query("select dStud_id,dMarkObtained1,dMarkObtained2,dMarkObtained3,dMarkTotal from tbl_internalmarkallot"); //$res = mysql_fetch_array($result); while($res = mysql_fetch_array($result)) { //print_r($res); $JsonVar = json_encode($res); echo "<input type='text' name='json' id='json' value ='$JsonVar'>"; } //print_r (mysql_fetch_array($result)); //echo "<input type='text' name='json' id='json' value ='$JsonVar'>"; ?> <html> <head> <meta http-equiv="content-type" content="text/html; charset=utf-8"> <title>Client-side Pagination</title> <style type="text/css"> body { margin:0; padding:0; } </style> <link rel="stylesheet" type="text/css" href="build/fonts/fonts-min.css" /> <link rel="stylesheet" type="text/css" href="build/paginator/assets/skins/sam/paginator.css" /> <link rel="stylesheet" type="text/css" href="build/datatable/assets/skins/sam/datatable.css" /> <script type="text/javascript" src="build/yahoo-dom-event/yahoo-dom-event.js"></script> <script type="text/javascript" src="build/connection/connection-min.js"></script> <script type="text/javascript" src="build/json/json-min.js"></script> <script type="text/javascript" src="build/element/element-min.js"></script> <script type="text/javascript" src="build/paginator/paginator-min.js"></script> <script type="text/javascript" src="build/datasource/datasource-min.js"></script> <script type="text/javascript" src="build/datatable/datatable-min.js"></script> <script type="text/javascript" src="YuiJs.js"></script> <style type="text/css"> #paginated { text-align: center; } #paginated table { margin-left:auto; margin-right:auto; } #paginated, #paginated .yui-dt-loading { text-align: center; background-color: transparent; } </style> </head> <body class="yui-skin-sam" onload="ProjectDatatable(document.getElementById('json').value);"> <h1>Client-side Pagination</h1> <div class="exampleIntro"> </div> <input type="hidden" id="HfId"/> <div id="paginated"> </div> <script type="text/javascript"> /*YAHOO.util.Event.onDOMReady(function() { YAHOO.example.ClientPagination = function() { var myColumnDefs = [ {key:"dStud_id", label:"ID",sortable:true, resizeable:true, editor: new YAHOO.widget.TextareaCellEditor()}, {key:"dMarkObtained1", label:"Name",sortable:true}, {key:"dMarkObtained2", label:"CycleTest1"}, {key:"dMarkObtained3", label:"CycleTest2"}, {key:"dMarkTotal", label:"CycleTest3"}, ]; var myDataSource = new YAHOO.util.DataSource("assets/php/json_proxy.php?"); myDataSource.responseType = YAHOO.util.DataSource.TYPE_JSON; myDataSource.responseSchema = { resultsList: "records", fields: ["dStud_id","dMarkObtained1","dMarkObtained2","dMarkObtained3","dMarkTotal"] }; var oConfigs = { paginator: new YAHOO.widget.Paginator({ rowsPerPage: 15 }), initialRequest: "results=504" }; var myDataTable = new YAHOO.widget.DataTable("paginated", myColumnDefs, myDataSource, oConfigs); return { oDS: myDataSource, oDT: myDataTable }; }(); });*/ </script> <?php echo "m".$res['dMarkObtained1']; echo "m".$res['dMarkObtained2']; echo "m".$res['dMarkObtained3']; echo "Tm".$res['dMarkTotal']; {?><? 
}?> </body> </html> </body> </html> This is my page where i am fetching the data's from the database function generateDatatable(target, jsonObj, myColumnDefs, hfId) { var root; for (key in jsonObj) { root = key; break; } var rootId = "id"; if (jsonObj[root].length > 0) { for (key in jsonObj[root][0]) { rootId = key; break; } } YAHOO.example.DynamicData = function() { var myPaginator = new YAHOO.widget.Paginator({ rowsPerPage: 10, template: YAHOO.widget.Paginator.TEMPLATE_ROWS_PER_PAGE, rowsPerPageOptions: [5, 25, 50, 100], pageLinks: 10 }); // DataSource instance var myDataSource = new YAHOO.util.DataSource(jsonObj); myDataSource.responseType = YAHOO.util.DataSource.TYPE_JSON; myDataSource.responseSchema = { resultsList: root, fields: new Array() }; myDataSource.responseSchema.fields[0] = rootId; for (var i = 0; i < myColumnDefs.length; i++) { myDataSource.responseSchema.fields[i + 1] = myColumnDefs[i].key; } // DataTable configuration var myConfigs = { sortedBy: { key: myDataSource.responseSchema.fields[1], dir: YAHOO.widget.DataTable.CLASS_ASC }, // Sets UI initial sort arrow paginator: myPaginator }; // DataTable instance var myDataTable = new YAHOO.widget.DataTable(target, myColumnDefs, myDataSource, myConfigs); myDataTable.subscribe("rowMouseoverEvent", myDataTable.onEventHighlightRow); myDataTable.subscribe("rowMouseoutEvent", myDataTable.onEventUnhighlightRow); myDataTable.subscribe("rowClickEvent", myDataTable.onEventSelectRow); myDataTable.subscribe("checkboxClickEvent", function(oArgs) { var hidObj = document.getElementById(hfId); var elCheckbox = oArgs.target; var oRecord = this.getRecord(elCheckbox); var id = oRecord.getData(rootId); if (elCheckbox.checked) { if (hidObj.value == "") { hidObj.value = id; } else { hidObj.value += "," + id; } } else { hidObj.value = removeIdFromArray("" + hfId, id); } }); myPaginator.subscribe("changeRequest", function() { if (document.getElementById(hfId).value != "") { /*if (document.getElementById("ConfirmationPanel").style.display == 'block') { document.getElementById("ConfirmationPanel").style.display = 'none'; }*/ document.getElementById(hfId).value = ""; } return true; }); myDataTable.handleDataReturnPayload = function(oRequest, oResponse, oPayload) { oPayload.totalRecords = oResponse.meta.totalRecords; return oPayload; } return { ds: myDataSource, dt: myDataTable }; } (); } function removeIdFromArray(values, id) { values = document.getElementById(values).value; if (values.indexOf(',') == 0) { values = values.substring(1); } if (values.indexOf(values.length - 1) == ",") { values = values.substring(0, values.length - 1); } var ids = values.split(','); var rtnValue = ""; for (var i = 0; i < ids.length; i++) { if (ids[i] != id) { rtnValue += "," + ids[i]; } } if (rtnValue.indexOf(",") == 0) { rtnValue = rtnValue.substring(1); } return rtnValue; } function edityuitable() { var ErrorDiv = document.getElementById("ErrorDiv"); var editId=document.getElementById("ctl00_ContentPlaceHolder1_HfId").value; if(editId.length == 0) { ErrorDiv.innerHTML = getErrorMsgStyle("Select a row for edit"); //alert("Select a row for edit"); return false; } else { var editarray = editId.split(","); if (editarray.length != 1) { ErrorDiv.innerHTML = getErrorMsgStyle("Select One row for edit"); //alert("Select One row for edit"); return false; } else if (editarray.length == 1) { return true; } } } function Deleteyuitable() { var ErrorDiv = document.getElementById("ErrorDiv"); var editId=document.getElementById("ctl00_ContentPlaceHolder1_HfId").value; if(editId.length == 
0) { ErrorDiv.innerHTML = getErrorMsgStyle("Select a row for Delete"); return false; } else { return true; } } function ProjectDatatable(HfJsonValue){ alert(HfJsonValue); var myColumnDefs = [ {key:"dStud_id", label:"ID", width:150, sortable:true, sortOptions:{defaultDir:YAHOO.widget.DataTable.CLASS_DESC}}, {key:"dMarkObtained1", label:"Marks", width:200, sortable:true, sortOptions:{defaultDir:YAHOO.widget.DataTable.CLASS_DESC}}, {key:"dMarkObtained2", label:"Marks1", width:150, sortable:true, sortOptions:{defaultDir:YAHOO.widget.DataTable.CLASS_DESC}}, {key:"dMarkObtained3", label:"Marks2", width:200, sortable:true, sortOptions:{defaultDir:YAHOO.widget.DataTable.CLASS_DESC}}, {key:"dMarkTotal", label:"Total", width:150, sortable:true, sortOptions:{defaultDir:YAHOO.widget.DataTable.CLASS_DESC}}, {key:"", formatter:"checkbox"} ]; var jsonObj=eval('(' + HfJsonValue + ')'); var target = "paginated"; var hfId = "HfId"; generateDatatable(target,jsonObj,myColumnDefs,hfId) } // JavaScript Document This is my script page when i load the page i do get the first row from the database but the consequtive data's are not displayed in the alert box how can i receieve the data's in the datagrid

    Read the article

  • How to import text file to table with primary key as auto-increment

    - by webworm
    I have some bulk data in a text file that I need to import into a MySQL table. The table consists of two fields .. ID (integer with auto-increment) Name (varchar) The text file is a large collection of names with one name per line ... (example) John Doe Alex Smith Bob Denver I know how to import a text file via phpMyAdmin however, as far as I understand, I need to import data that has the same number of fields as the target table. Is there a way to import the data from my text file into one field and have the ID field auto-increment automatically? Thank you in advance for any help.
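
    If the data can be loaded through MySQL itself rather than phpMyAdmin's import screen, LOAD DATA INFILE can name just the one target column and let ID auto-increment, roughly as sketched below (the file path and table name are illustrative):

      -- Sketch: one name per line goes into Name; ID is filled in automatically.
      LOAD DATA LOCAL INFILE '/path/to/names.txt'
      INTO TABLE people
      LINES TERMINATED BY '\n'
      (Name);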

    Read the article

  • How do I distinguish a SharePoint file update as being from a file upload?

    - by ccomet
    In SharePoint, when an item is first added to a document library, it fires the ItemAdded and ItemAdding events as expected. And if you upload the same filename to update the existing file, it will fire off ItemUpdated and ItemUpdating events, likewise as expected. However, I have been unsuccessful at determining whether this kind of action has actually occurred or not. Or, more specifically, I am entirely unable to differentiate between the following: An item is updated because someone uploaded a new file but did nothing to any of the form fields. An item is updated because someone hit "OK" but did nothing to any of the form fields. Is there actually a way to distinguish these kinds of updates? There appears to be nothing in event properties that contains this information, nor in the version history for both the list item and the file itself. I have even tried comparing the files themselves via OpenBinary(), but in both mentioned cases I still get the same result. Thank you in advance for any help!

    Read the article

  • why unsigned int 0xFFFFFFFF is equal to int -1?

    - by conejoroy
    perhaps it's a very stupid question but I'm having a hard time figuring this out =) in C or C++ it is said that the maximum number a size_t (an unsigned int data type) can hold is the same as casting -1 to that data type. for example see http://stackoverflow.com/questions/1420982/invalid-value-for-sizet Why?? I'm confused.. I mean, (talking about 32 bit ints) AFAIK the most significant bit holds the sign in a signed data type (that is, bit 0x80000000 to form a negative number). then, 1 is 0x00000001.. 0x7FFFFFFF is the greatest positive number an int data type can hold. then, AFAIK the binary representation of -1 int should be 0x80000001 (perhaps I'm wrong). why/how is this binary value converted to something completely different (0xFFFFFFFF) when casting ints to unsigned?? or.. how is it possible to form a binary -1 out of 0xFFFFFFFF? I have no doubt that in C: ((unsigned int)-1) == 0xFFFFFFFF or ((int)0xFFFFFFFF) == -1 is just as true as 1 + 1 == 2, I'm just wondering why. thanks!
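
    The short answer is two's complement: -1 is not 0x80000001 (that would be sign-and-magnitude) but the pattern with all bits set, and converting -1 to unsigned int is defined as adding 2^32, which also gives 0xFFFFFFFF. A small C check, assuming 32-bit int:

      #include <stdio.h>

      /* Sketch: on a two's-complement machine, int -1 has every bit set, and the
         signed-to-unsigned conversion is defined to add 2^32, so both views agree. */
      int main(void)
      {
          int          neg_one  = -1;
          unsigned int max_uint = (unsigned int)-1;

          printf("%08X\n", (unsigned int)neg_one);    /* FFFFFFFF */
          printf("%08X\n", max_uint);                 /* FFFFFFFF */
          printf("%d\n",   max_uint == 0xFFFFFFFFu);  /* 1 */
          return 0;
      }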

    Read the article

  • Ruby on Rails ActiveRecord: eager loading issue with foreign and primary key

    - by Krishnaswamy Subramanian
    Eager loading in Ruby on Rails is not working properly for the following scenario. First we had a model called marks which has the fields id, student, subject, mark. The student column is a string which holds the Active Directory login value. Later on, for reporting functionality, we introduced another table called user which has the fields id, ad_name, full_name. Now on the Mark model we have added the belongs_to association belongs_to :student_details, :class_name => "User", :foreign_key => "student", :primary_key => "ad_name" and when loading with ActiveRecord's find method we are passing in the include condition for eager loading Marks.find(:all, :include => :reserved_user) but when the find is executed, a separate student SELECT query is executed for each and every mark. Is this a known bug in ROR, or am I missing something?

    Read the article
