Search Results

Search found 21336 results on 854 pages for 'db api'.


  • jQuery UI dialog + WebKit + HTML response with script

    - by Anthony Koval'
    Once again I am faced with a great problem! :) So, here is the stuff: on the client side, I have a link. By clicking on it, jQuery makes a request to the server, gets response as HTML content, then popups UI dialog with that content. Here is the code of the request-function: function preview(){ $.ajax({ url: "/api/builder/", type: "post", //dataType: "html", data: {"script_tpl": $("#widget_code").text(), "widgets": $.toJSON(mwidgets), "widx": "0"}, success: function(data){ //console.log(data) $("#previewArea").dialog({ bgiframe: true, autoOpen: false, height: 600, width: 600, modal: true, buttons: { "Cancel": function() { $(this).dialog('destroy'); } } }); //console.log(data.toString()); $('#previewArea').attr("innerHTML", data.toString()); $("#previewArea").dialog("open"); }, error: function(){ console.log("shit happens"); } }) } The response (data) is: <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> <script type="text/javascript">var smakly_widget_sid = 0 ,widgets = [{"cols": "2","rows": "2","div_id": "smakly_widget","wid": "0","smakly_style": "small_image",}, ] </script> <script type="text/javascript" src="/media/js/smak/smakme.js"></script> </head> <body> preview <div id="smakly_widget" style="width:560px;height:550px"> </div> </body> </html> As you see, there is a script to load: smakme.js, somehow it doesn't execute in WebKit-based browsers (I tried in Safari and Chrome), but in Firefox, Internet Explorer and Opera it works as expected! Here is that script: String.prototype.format = function(){ var pattern = /\{\d+\}/g; var args = arguments; return this.replace(pattern, function(capture){ return args[capture.match(/\d+/)]; }); } var turl = "/widget" var widgetCtrl = new(function(){ this.render_widget = function (w, content){ $("#" + w.div_id).append(content); } this.build_widgets = function(){ for (var widx in widgets){ var w = widgets[widx], iurl = '{0}?sid={1}&wid={2}&w={3}&h={4}&referer=http://ya.ru&thrash={5}'.format( turl, smakly_widget_sid, w.wid, w.cols, w.rows, Math.floor(Math.random()*1000).toString()), content = $('<iframe src="{0}" width="100%" height="100%"></iframe>'.format(iurl)); this.render_widget(w, content); } } }) $(document).ready(function(){ widgetCtrl.build_widgets(); }) Is that some security issue, or anything else?

    Read the article

  • how to use window.onload?

    - by Patrick
    I'm refactoring a website using MVC. What was a set of huge pages with javascript, php, html etc etc is becoming a series of controllers and views. I'm trying to do it in a modular way so views are split in 'modules' that I can reuse in other pages when needed eg. "view/searchform displays only one div with the searchform "view/display_events displays a list of events and so on. One of the old pages was supposed to load a google map with a marker on it. Amongst the rest of the code, I can identify the relevant bits as follows <head> <script src="http://maps.google.com/maps?file=api&amp;v=2&amp;key=blablabla" type="text/javascript"></script> <script type="text/javascript"> //<![CDATA[ function load() { if (GBrowserIsCompatible()) { var map = new GMap2(document.getElementById("map")); var point = new GLatLng(<?php echo ($info->lat && $info->lng) ? $info->lat .",". $info->lng : "51.502759,-0.126171"; ?>); map.setCenter(new GLatLng(<?php echo ($info->lat && $info->lng) ? $info->lat .",". $info->lng : "51.502759,-0.126171"; ?>), 15); map.addControl(new GLargeMapControl()); map.addControl(new GScaleControl()); map.addOverlay(new GMarker(point)); var marker = createMarker(point,GIcon(),"CIAO"); map.addOverlay(marker); } } //]]> </script> </head> ...then <body onload="load()" onunload="GUnload()"> ...and finally this div where the map should be displayed <div id="map" style="width: 440px; height: 300px"> </div> Don't know much about js, but my understanding is that a) I have to include the scripts in the view module I'm writing (directly in the HTML? I would prefer to load a separate script) b) I have to trigger that function using the equivalent of body onload... (obviously there's no body tag in my view. In my ignorance I've tried div onload=.... but didn't seem to be working :) What do you suggest I do? I've read about window.onload but don't know what's the correct syntax for that. please keep in mind that other parts of the page include other js functions (eg, google adsense) that are called after the footer.
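    For reference, a minimal sketch of one way to call load() once the page is ready without a body onload attribute (it assumes the load() function from the question is available on the page; this is an illustration, not the asker's final code):

        // attach load() to the page's load event instead of <body onload="...">
        if (window.addEventListener) {
            window.addEventListener("load", load, false);   // standards browsers
        } else if (window.attachEvent) {
            window.attachEvent("onload", load);             // older IE
        }
        // or, if nothing else on the page needs window.onload:
        // window.onload = load;

    Keeping this in a separate .js file included after the Maps script fits the modular views described above; the other scripts called after the footer are unaffected as long as they do not also assign to window.onload.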

    Read the article

  • How to cache queries in EJB and return result efficient (performance POV)

    - by Maxym
    I use the JBoss EJB 3.0 implementation (JBoss 4.2.3 server). At first I created the native query every time, using a construction like Query query = entityManager.createNativeQuery("select * from _table_"); Of course that is not very efficient; I ran some tests and found that it really takes a lot of time... Then I found a better way to deal with it: using an annotation to define native queries: @NamedNativeQuery( name = "fetchData", value = "select * from _table_", resultClass=Entity.class ) and then just using it: Query query = entityManager.createNamedQuery("fetchData"); That line performs twice as well as what I started from, but still not as well as I expected... Then I found that I can switch to the Hibernate annotation for NamedNativeQuery (JBoss's implementation of EJB is based on Hibernate anyway) and add one more thing: @NamedNativeQuery( name = "fetchData2", value = "select * from _table_", resultClass=Entity.class, readOnly=true) readOnly marks whether the results are fetched in read-only mode or not. That sounds good, because at least in my case I don't need to update the data; I just want to fetch it for a report. When I started the server to measure performance, I noticed that the query without readOnly=true (false is the default) returned results faster and faster with each iteration, while the other one (fetchData2) stayed "stable"; the time difference between them kept shrinking, and after 5 iterations both were almost equally fast... The questions are: 1) Is there any other way to speed the query up? It seems that named queries should be prepared once, but I can't confirm that... Creating the query once and then just reusing it would be better from a performance point of view, but caching that object is problematic, because after creating the query I can set parameters (when I use ":variable" in the query), and that changes the query object (doesn't it?). So, is there any way to cache them, or is a named query the best option I have? 2) Are there any other approaches to make retrieving results faster? For instance, I don't need those entities to be attached; I won't update them; all I need is to fetch a collection of data. Maybe readOnly is the only available way, so I can't speed it up further, but who knows :) P.S. I'm not asking about DB performance; all I need is to avoid creating the query every time, to use it efficiently, and to "allow" EJB to do less work while returning the same data.

    Read the article

  • Why are these two sql statements deadlocking? (Deadlock graph + details included).

    - by Pure.Krome
    Hi folks, I've got the following deadlock graph that describes two sql statements that are deadlocking each other. I'm just not sure how to analyse this and then fix up my sql code to prevent this from happening. Main deadlock graph Click here for a bigger image. Left side, details Click here for a bigger image. Right side, details Click here for a bigger image. What is the code doing? I'm reading in a number of files (eg. lets say 3, for this example). Each file contains different data BUT the same type of data. I then insert data into LogEntries table and then (if required) I insert or delete something from the ConnectedClients table. Here's my sql code. using (TransactionScope transactionScope = new TransactionScope()) { _logEntryRepository.InsertOrUpdate(logEntry); // Now, if this log entry was a NewConnection or an LostConnection, then we need to make sure we update the ConnectedClients. if (logEntry.EventType == EventType.NewConnection) { _connectedClientRepository.Insert(new ConnectedClient { LogEntryId = logEntry.LogEntryId }); } // A (PB) BanKick does _NOT_ register a lost connection .. so we need to make sure we handle those scenario's as a LostConnection. if (logEntry.EventType == EventType.LostConnection || logEntry.EventType == EventType.BanKick) { _connectedClientRepository.Delete(logEntry.ClientName, logEntry.ClientIpAndPort); } _unitOfWork.Commit(); transactionScope.Complete(); } Now each file has it's own UnitOfWork instance (which means it has it's own database connection, transaction and repository context). So i'm assuming this means there's 3 different connections to the db all happening at the same time. Finally, this is using Entity Framework as the repository, but please don't let that stop you from having a think about this problem. Using a profiling tool, the Isolation Level is Serializable. I've also tried ReadCommited and ReadUncommited, but they both error :- ReadCommited: same as above. Deadlock. ReadUncommited: different error. EF exception that says it expected some result back, but got nothing. I'm guessing this is the LogEntryId Identity (scope_identity) value that is expected but not retrieve because of the dirty read. Please help! PS. It's Sql Server 2008, btw.
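    Since the profiler shows Serializable (the TransactionScope default), one variation worth noting is passing an explicit isolation level to the scope rather than relying on the default. This is only a sketch of how the level is set; the asker reports that ReadCommitted still deadlocked, so it illustrates the mechanism rather than a confirmed fix:

        // sketch: open the scope with an explicit isolation level instead of the Serializable default
        var options = new TransactionOptions
        {
            IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted,
            Timeout = TransactionManager.DefaultTimeout
        };
        using (var transactionScope = new TransactionScope(TransactionScopeOption.Required, options))
        {
            // ... same repository calls as in the question ...
            transactionScope.Complete();
        }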

    Read the article

  • Why does DebugActiveProcessStop crash my debugging app?

    - by SparkyNZ
    I have a debugging program which I've written to attach to a process and create a crash dump file. That part works fine. The problem I have is that when the debugger program terminates, so does the program that it was debugging. I did some Googling and found the DebugActiveProcessStop() API call. This didn't show up in my older MSDN documentation as it was only introduced in Windows XP so I've tried loading it dynamicall from Kernel32.dll at runtime. Now my problem is that my debugger program crashes as soon as the _DebugActiveProcessStop() call is made. Can somebody please tell me what I'm doing wrong? typedef BOOL (*DEBUGACTIVEPROCESSSTOP)(DWORD); DEBUGACTIVEPROCESSSTOP _DebugActiveProcessStop; HMODULE hK32 = LoadLibrary( "kernel32.dll" ); if( hK32 ) _DebugActiveProcessStop = (DEBUGACTIVEPROCESSSTOP) GetProcAddress( hK32,"DebugActiveProcessStop" ); else { printf( "Can't load Kernel32.dll\n" ); return; } if( ! _DebugActiveProcessStop ) { printf( "Can't find DebugActiveProcessStop\n" ); return; } ... void DebugLoop( void ) { DEBUG_EVENT de; while( 1 ) { WaitForDebugEvent( &de, INFINITE ); switch( de.dwDebugEventCode ) { case CREATE_PROCESS_DEBUG_EVENT: hProcess = de.u.CreateProcessInfo.hProcess; break; case EXCEPTION_DEBUG_EVENT: // PDS: I want a crash dump immediately! dwProcessId = de.dwProcessId; dwThreadId = de.dwThreadId; WriteCrashDump( &de.u.Exception ); return; case CREATE_THREAD_DEBUG_EVENT: case OUTPUT_DEBUG_STRING_EVENT: case EXIT_THREAD_DEBUG_EVENT: case EXIT_PROCESS_DEBUG_EVENT : case LOAD_DLL_DEBUG_EVENT: case UNLOAD_DLL_DEBUG_EVENT: case RIP_EVENT: default: break; } ContinueDebugEvent( de.dwProcessId, de.dwThreadId, DBG_CONTINUE ); } } ... void main( void ) { ... BOOL bo = DebugActiveProcess( dwProcessId ); if( bo == 0 ) printf( "DebugActiveProcess failed, GetLastError: %u \n",GetLastError() ); hProcess = OpenProcess( PROCESS_ALL_ACCESS, TRUE, dwProcessId ); if( hProcess == NULL ) printf( "OpenProcess failed, GetLastError: %u \n",GetLastError() ); DebugLoop(); _DebugActiveProcessStop( dwProcessId ); CloseHandle( hProcess ); }
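    One detail worth checking in code like this: Win32 API functions use the stdcall calling convention, so a function-pointer typedef used with GetProcAddress normally needs WINAPI in it; without it, a 32-bit build calls the function with the wrong convention and can corrupt the stack. A sketch of the declaration with that added (this is an assumption about the cause, not a confirmed diagnosis):

        /* sketch: declare the pointer with the WINAPI (stdcall) calling convention */
        typedef BOOL (WINAPI *DEBUGACTIVEPROCESSSTOP)(DWORD);

        DEBUGACTIVEPROCESSSTOP _DebugActiveProcessStop =
            (DEBUGACTIVEPROCESSSTOP)GetProcAddress( LoadLibrary( "kernel32.dll" ),
                                                    "DebugActiveProcessStop" );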

    Read the article

  • get renamed file names of multiple upload form [js array] in codeigniter

    - by artmania
    Hi friends, I use CodeIgniter. I have a multiple image upload form. The code below works well for uploading, but I also need to save the file names to the database. How can I get the names here? I have spent hours on this but could not sort it out. Any help is appreciated! uploadform.php echo form_open_multipart('gallery/upload'); <input type="file" name="photo" size="50" /> <input type="file" name="thumb" size="50" /> <input type="submit" value="Upload" /> </form> I have a controller between the form view and the model that loads the model (of course :)), but I didn't post it here because it isn't needed. gallery_model.php function multiple_upload($upload_dir = 'uploads/', $config = array()) { /* Upload */ $CI =& get_instance(); $files = array(); if(empty($config)) { $config['upload_path'] = realpath($upload_dir); $config['allowed_types'] = 'gif|jpg|jpeg|jpe|png'; $config['max_size'] = '2048'; } $CI->load->library('upload', $config); $errors = FALSE; foreach($_FILES as $key => $value) { if( ! empty($value['name'])) { if( ! $CI->upload->do_upload($key)) { $data['upload_message'] = $CI->upload->display_errors(ERR_OPEN, ERR_CLOSE); // ERR_OPEN and ERR_CLOSE are error delimiters defined in a config file $CI->load->vars($data); $errors = TRUE; } else { // Build a file array from all uploaded files $files[] = $CI->upload->data(); } } } // There were errors, we have to delete the uploaded files if($errors) { foreach($files as $key => $file) { @unlink($file['full_path']); } } elseif(empty($files) AND empty($data['upload_message'])) { $CI->lang->load('upload'); $data['upload_message'] = ERR_OPEN.$CI->lang->line('upload_no_file_selected').ERR_CLOSE; $CI->load->vars($data); } else { return $files; } /* ------------------------------- Insert to database */ // The problem is here: I need the file names to add to the db. // If a file with the same name already exists in the folder, the upload library renames the file itself, so in that case I need the renamed file name. } }
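    For what it's worth, the array returned by $CI->upload->data() (already collected into $files above) contains the name the Upload library actually stored each file under, so the renamed names should be retrievable from there. A sketch of the insert step under that assumption; the table and column names are made up for illustration, and it assumes the database library is loaded:

        // sketch: $files holds one upload-data array per successfully uploaded file
        foreach ($files as $file) {
            $row = array(
                'file_name' => $file['file_name'],   // name as saved on disk, after any auto-renaming
                'orig_name' => $file['orig_name'],   // name as originally uploaded
            );
            $CI->db->insert('gallery_files', $row);  // hypothetical table
        }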

    Read the article

  • Connecting form to database errors

    - by Russell Ehrnsberger
    Hello I am trying to connect a page to a MySQL database for newsletter signup. I have the database with 3 fields, id, name, email. The database is named newsletter and the table is named newsletter. Everything seems to be fine but I am getting this error Notice: Undefined index: Name in C:\wamp\www\insert.php on line 12 Notice: Undefined index: Name in C:\wamp\www\insert.php on line 13 Here is my form code. <form action="insert.php" method="post"> <input type="text" value="Name" name="Name" id="Name" class="txtfield" onblur="javascript:if(this.value==''){this.value=this.defaultValue;}" onfocus="javascript:if(this.value==this.defaultValue){this.value='';}" /> <input type="text" value="Enter Email Address" name="Email" id="Email" class="txtfield" onblur="javascript:if(this.value==''){this.value=this.defaultValue;}" onfocus="javascript:if(this.value==this.defaultValue){this.value='';}" /> <input type="submit" value="" class="button" /> </form> Here is my insert.php file. <?php $host="localhost"; // Host name $username="root"; // Mysql username $password=""; // Mysql password $db_name="newsletter"; // Database name $tbl_name="newsletter"; // Table name // Connect to server and select database. mysql_connect("$host", "$username", "$password")or die("cannot connect"); mysql_select_db("$db_name")or die("cannot select DB"); // Get values from form $name=$_POST['Name']; $email=$_POST['Email']; // Insert data into mysql $sql="INSERT INTO $tbl_name(name, email)VALUES('$name', '$email')"; $result=mysql_query($sql); // if successfully insert data into database, displays message "Successful". if($result){ echo "Successful"; echo "<BR>"; echo "<a href='index.html'>Back to main page</a>"; } else { echo "ERROR"; } ?> <?php // close connection mysql_close(); ?>
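    The "Undefined index: Name" notices usually mean insert.php was requested without those POST fields (for example by opening it directly), so a guarded read is a common way to avoid them. A small sketch for the top of insert.php, keeping the rest of the script as it is:

        // sketch: only read the fields when they were actually posted
        $name  = isset($_POST['Name'])  ? $_POST['Name']  : '';
        $email = isset($_POST['Email']) ? $_POST['Email'] : '';

        if ($name === '' || $email === '') {
            die('Please fill in both fields and submit the form.');
        }

    (Escaping the values, e.g. with mysql_real_escape_string(), before building the INSERT string would also be worth doing.)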

    Read the article

  • how to set a polyline on google maps every time I click twice (make two markers) on the map

    - by zjm1126
    this is my code : thanks <!DOCTYPE html PUBLIC "-//WAPFORUM//DTD XHTML Mobile 1.0//EN" "http://www.wapforum.org/DTD/xhtml-mobile10.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" > <head> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> <meta name="viewport" content="width=device-width,minimum-scale=1.0,maximum-scale=5.0,user-scalable=yes"> </head> <body onload="initialize()" onunload="GUnload()"> <style type="text/css"> </style> <div id="map_canvas" style="width: 500px; height: 300px;float:left;"></div> <script src="jquery-1.4.2.js" type="text/javascript"></script> <script src="jquery-ui-1.8rc3.custom.min.js" type="text/javascript"></script> <script src="http://maps.google.com/maps?file=api&amp;v=2&amp;key=ABQIAAAA-7cuV3vqp7w6zUNiN_F4uBRi_j0U6kJrkFvY4-OX2XYmEAa76BSNz0ifabgugotzJgrxyodPDmheRA&sensor=false"type="text/javascript"></script> <script type="text/javascript"> //********** function initialize() { if (GBrowserIsCompatible()){ //var map = new GMap2(document.getElementById("map_canvas")); //map.setCenter(new GLatLng(39.9493, 116.3975), 13); var map = new GMap2(document.getElementById("map_canvas")); var center=new GLatLng(39.917,116.397); map.setCenter(center, 13); map.addOverlay(new GMarker(new GLatLng(39.917,116.397))); map.enableDrawing() //GEvent.addListener(map, "mouseover", function() { //alert("???????"); //}); var one; aFn=function(y_scale,x_scale){ //************ //function p(){ var bounds = map.getBounds(); var southWest = bounds.getSouthWest(); var northEast = bounds.getNorthEast(); var lngSpan = northEast.lng() - southWest.lng(); var latSpan = northEast.lat() - southWest.lat(); var point = new GLatLng(southWest.lat() + latSpan * (1-y_scale), southWest.lng() + lngSpan * x_scale); if(!one){ map.addOverlay(new GMarker(point)); one=point; }else{ var polyline = new GPolyline([one,point], "#ff0000", 5); map.addOverlay(polyline); one=0; } } //********** //************* } } </script> </body> </html>
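    A minimal sketch of the "two clicks make a line" behaviour using the Maps API v2 click event, placed inside initialize() after the map is created (this is an illustration built from the question's own GMarker/GPolyline calls, not a drop-in replacement for the code above):

        // sketch: add a marker on every click; draw a polyline after every second click
        var firstPoint = null;
        GEvent.addListener(map, "click", function(overlay, latlng) {
            if (!latlng) return;                      // ignore clicks on existing overlays
            map.addOverlay(new GMarker(latlng));
            if (firstPoint === null) {
                firstPoint = latlng;                  // remember the first of the pair
            } else {
                map.addOverlay(new GPolyline([firstPoint, latlng], "#ff0000", 5));
                firstPoint = null;                    // start a new pair on the next click
            }
        });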

    Read the article

  • Is this postgres function cost efficient, or does it still need cleanup?

    - by kiranking
    There are two tables in a postgres db: english_all and english_glob. The first table contains words like international,confidential,booting,cooler ...etc. I have written a function to get the words from english_all and then perform a for loop for each word to get the word list of prefixes that are not yet inserted in the english_glob table. The word list looks like I In Int Inte Inter .. b bo boo boot .. c co coo cool etc.. For some reason a zwnj (zero-width non-joiner) is added during insertion to the english_all table, but in the function I remove that character with regexp_replace. The postgres function for_loop_test takes two parameters, min and max, and based on those I select words from the english_all table. The function code is like DECLARE inMinLength ALIAS FOR $1; inMaxLength ALIAS FOR $2; mviews RECORD; outenglishListRow english_word_list;--custom data type eng_id,english_text BEGIN FOR mviews IN SELECT id,english_all_text FROM english_all where wlength between inMinLength and inMaxLength ORDER BY english_all_text limit 30 LOOP FOR i IN 1..char_length(regexp_replace(mviews.english_all_text,'(?)$','')) LOOP FOR outenglishListRow IN SELECT distinct on (regexp_replace((substring(mviews.english_all_text from 1 for i)),'(?)$','')) mviews.id, regexp_replace((substring(mviews.english_all_text from 1 for i)),'(?)$','') where regexp_replace((substring(mviews.english_all_text from 1 for i)),'(?)$','') not in(select english_glob.english_text from english_glob where i=english_glob.wlength) order by regexp_replace((substring(mviews.english_all_text from 1 for i)),'(?)$','') LOOP RETURN NEXT outenglishListRow; END LOOP; END LOOP; END LOOP; END; Once I get the word list I will insert it into the other table, english_glob. My question is: is there anything I can add to or remove from the function to make it more efficient? edit: Let's assume the english_all table has words like footer,settle,question,overflow,database,kingdom. If inMinLength = 5 and inMaxLength = 7, then in the outer loop footer, settle, kingdom will be selected. For those 3 words the inner two loops will apply to get words like f, fo, foo, foot, foote, footer, s, se, set, sett, settl .... etc. In the final step, words which are bold will be entered into english_glob with a parameter like 1 to denote a proper word, stored in another field of the english_glob table. The remaining words will be stored with parameter 0, because on the next call, words already saved in the database should not be fetched again. edit 2: This is the complete code CREATE TABLE english_all ( id serial NOT NULL, english_all_text text NOT NULL, wlength integer NOT NULL, CONSTRAINT english_all PRIMARY KEY (id), CONSTRAINT english_all_kan_text_uq_id UNIQUE (english_all_text) ) CREATE TABLE english_glob ( id serial NOT NULL, english_text text NOT NULL, is_prop integer default 1, CONSTRAINT english_glob PRIMARY KEY (id), CONSTRAINT english_glob_kan_text_uq_id UNIQUE (english_text) ) insert into english_all(english_text) values ('ant'),('forget'),('forgive'); On a function call with parameters 3 and 6 the following rows should be fetched: a an ant f fo for forg forge forget. Next is an insert into the other table based on the rows above: insert into english_glob(english_text,is_prop) values ('a',1),('an',1), ('ant',1),('f',0), ('fo',0),('for',1), ('forg',0),('forge',1), ('forget',1), On the next function call with parameters 3 and 7 the following rows should be fetched (because f, fo, for, forg have all been entered in the english_glob table): forgi forgiv forgive

    Read the article

  • Upgrading Entity Framework 1.0 to 4.0 to include foreign keys

    - by duthiega
    Currently I've been working with Entity Framework 1.0 which is located under a service façade. Below is one of the save methods I've created to either update or insert the device in question. This currently works but, I can't help feel that its a bit of a hack having to set the referenced properties to null then re-attach them just to get an insert to work. The changedDevice already holds these values, so why do I need to assign them again. So, I thought I'll update the model to EF4. That way I can just directly access the foreign keys. However, on doing this I've found that there doesn't seem to be an easy way to add the foreign keys except by removing the entity from the diagram and re-adding it. I don't want to do this as I've already been through all the entity properties renaming them from the DB column names. Can anyone help? /// <summary> /// Saves the non network device. /// </summary> /// <param name="nonNetworkDeviceDto">The non network device dto.</param> public void SaveNonNetworkDevice(NonNetworkDeviceDto nonNetworkDeviceDto) { using (var context = new AssetNetworkEntities2()) { var changedDevice = TransformationHelper.ConvertNonNetworkDeviceDtoToEntity(nonNetworkDeviceDto); if (!nonNetworkDeviceDto.DeviceId.Equals(-1)) { var originalDevice = context.NonNetworkDevices.Include("Status").Include("NonNetworkType").FirstOrDefault( d => d.DeviceId.Equals(nonNetworkDeviceDto.DeviceId)); context.ApplyAllReferencedPropertyChanges(originalDevice, changedDevice); context.ApplyCurrentValues(originalDevice.EntityKey.EntitySetName, changedDevice); } else { var maxNetworkDevice = context.NonNetworkDevices.OrderBy("it.DeviceId DESC").First(); changedDevice.DeviceId = maxNetworkDevice.DeviceId + 1; var status = changedDevice.Status; var nonNetworkType = changedDevice.NonNetworkType; changedDevice.Status = null; changedDevice.NonNetworkType = null; context.AttachTo("DeviceStatuses", status); if (nonNetworkType != null) { context.AttachTo("NonNetworkTypes", nonNetworkType); } changedDevice.Status = status; changedDevice.NonNetworkType = nonNetworkType; context.AddToNonNetworkDevices(changedDevice); } context.SaveChanges(); } }

    Read the article

  • is using a Singleton to pass credentials in a multi-tenant application a code smell?

    - by Hans Gruber
    Currently working on a multi-tenant application that employs Shared DB/Shared Schema approach. IOW, we enforce tenant data segregation by defining a TenantID column on all tables. By convention, all SQL reads/writes must include a Where TenantID = '?' clause. Not an ideal solution, but hindsight is 20/20. Anyway, since virtually every page/workflow in our app must display tenant specific data, I made the (poor) decision at the project's outset to employ a Singleton to encapsulate the current user credentials (i.e. TenantID and UserID). My thinking at the time was that I didn't want to add a TenantID parameter to each and every method signature in my Data layer. Here's what the basic pseudo-code looks like: public class UserIdentity { public UserIdentity(int tenantID, int userID) { TenantID = tenantID; UserID = userID; } public int TenantID { get; private set; } public int UserID { get; private set; } } public class AuthenticationModule : IHttpModule { public void Init(HttpApplication context) { context.AuthenticateRequest += new EventHandler(context_AuthenticateRequest); } private void context_AuthenticateRequest(object sender, EventArgs e) { var userIdentity = _authenticationService.AuthenticateUser(sender); if (userIdentity == null) { //authentication failed, so redirect to login page, etc } else { //put the userIdentity into the HttpContext object so that //its only valid for the lifetime of a single request HttpContext.Current.Items["UserIdentity"] = userIdentity; } } } public static class CurrentUser { public static UserIdentity Instance { get { return HttpContext.Current.Items["UserIdentity"]; } } } public class WidgetRepository: IWidgetRepository{ public IEnumerable<Widget> ListWidgets(){ var tenantId = CurrentUser.Instance.TenantID; //call sproc with tenantId parameter } } As you can see, there are several code smells here. This is a singleton, so it's already not unit test friendly. On top of that you have a very tight-coupling between CurrentUser and the HttpContext object. By extension, this also means that I have a reference to System.Web in my Data layer (shudder). I want to pay down some technical debt this sprint by getting rid of this singleton for the reasons mentioned above. I have a few thoughts on what an better implementation might be, but if anyone has any guidance or lessons learned they could share, I would be much obliged.
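    One direction the question itself points toward is hiding the credentials behind an interface and constructor-injecting it, so the Data layer depends on an abstraction instead of HttpContext. A rough sketch using the question's own types (not a drop-in replacement; the web-specific implementation would live outside the Data layer):

        public interface IUserContext
        {
            UserIdentity Current { get; }
        }

        // web project only; the Data layer never sees HttpContext
        public class HttpUserContext : IUserContext
        {
            public UserIdentity Current
            {
                get { return (UserIdentity)HttpContext.Current.Items["UserIdentity"]; }
            }
        }

        public class WidgetRepository : IWidgetRepository
        {
            private readonly IUserContext _userContext;

            public WidgetRepository(IUserContext userContext)
            {
                _userContext = userContext;
            }

            public IEnumerable<Widget> ListWidgets()
            {
                var tenantId = _userContext.Current.TenantID;
                // call sproc with tenantId parameter, as before
                return new List<Widget>(); // placeholder
            }
        }

    In unit tests IUserContext can then be stubbed with a fixed TenantID, which addresses the testability complaint above.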

    Read the article

  • jar dependencies in android- no class definition found exception

    - by Dave.B
    I'm trying to use the gdata java client library on android and have managed a decent hack to get it working. However because the jar for gdata had some package discrepancies with android I had to import the source into my project. This source is dependent on the JavaMail API and the JavaBeans Activation Framework as specified here. My issue is that the JavaMail jar throws a class definition not found when seeking a class which is in the Activation Framework jar. A stack trace is listed below. I am working in Eclipse and have both jars in a lib folder and added to my build path. I'm not very experienced dealing with jars in a situation like this so any help or insight would be appreciated. 03-29 09:55:26.204: ERROR/AndroidRuntime(331): Uncaught handler: thread AsyncTask #3 exiting due to uncaught exception 03-29 09:55:26.215: ERROR/AndroidRuntime(331): java.lang.RuntimeException: An error occured while executing doInBackground() 03-29 09:55:26.215: ERROR/AndroidRuntime(331): at android.os.AsyncTask$3.done(AsyncTask.java:200) 03-29 09:55:26.215: ERROR/AndroidRuntime(331): at java.util.concurrent.FutureTask$Sync.innerSetException(FutureTask.java:273) 03-29 09:55:26.215: ERROR/AndroidRuntime(331): at java.util.concurrent.FutureTask.setException(FutureTask.java:124) 03-29 09:55:26.215: ERROR/AndroidRuntime(331): at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:307) 03-29 09:55:26.215: ERROR/AndroidRuntime(331): at java.util.concurrent.FutureTask.run(FutureTask.java:137) 03-29 09:55:26.215: ERROR/AndroidRuntime(331): at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1068) 03-29 09:55:26.215: ERROR/AndroidRuntime(331): at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:561) 03-29 09:55:26.215: ERROR/AndroidRuntime(331): at java.lang.Thread.run(Thread.java:1096) 03-29 09:55:26.215: ERROR/AndroidRuntime(331): Caused by: java.lang.NoClassDefFoundError: javax.activation.DataHandler 03-29 09:55:26.215: ERROR/AndroidRuntime(331): at javax.mail.internet.MimeBodyPart.setContent(MimeBodyPart.java:684) 03-29 09:55:26.215: ERROR/AndroidRuntime(331): at com.google.gdata.data.media.MediaBodyPart.<init>(MediaBodyPart.java:95) 03-29 09:55:26.215: ERROR/AndroidRuntime(331): at com.google.gdata.data.media.MediaMultipart.<init>(MediaMultipart.java:126) 03-29 09:55:26.215: ERROR/AndroidRuntime(331): at com.google.gdata.client.media.MediaService.insert(MediaService.java:382) 03-29 09:55:26.215: ERROR/AndroidRuntime(331): at android.os.AsyncTask$2.call(AsyncTask.java:185) 03-29 09:55:26.215: ERROR/AndroidRuntime(331): at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:305)

    Read the article

  • Cannot extend a class located in another file, PHP

    - by NightMICU
    I am trying to set up a class with commonly used tasks, such as preparing strings for input into a database and creating a PDO object. I would like to include this file in other class files and extend those classes to use the common class' code. However, when I place the common class in its own file and include it in the class it will be used in, I receive an error that states the second class cannot be found. For example, if the class name is foo and it is extending bar (the common class, located elsewhere), the error says that foo cannot be found. But if I place the code for class bar in the same file as foo, it works. Here are the classes in question - Common Class abstract class coreFunctions { protected $contentDB; public function __construct() { $this->contentDB = new PDO('mysql:host=localhost;dbname=db', 'username', 'password'); } public function cleanStr($string) { $cleansed = trim($string); $cleansed = stripslashes($cleansed); $cleansed = strip_tags($cleansed); return $cleansed; } } Code from individual class include $_SERVER['DOCUMENT_ROOT'] . '/includes/class.core-functions.php'; $mode = $_POST['mode']; if (isset($mode)) { $gallery = new gallery; switch ($mode) { case 'addAlbum': $gallery->addAlbum($_POST['hash'], $_POST['title'], $_POST['description']); } } class gallery extends coreFunctions { private function directoryPath($string) { $path = trim($string); $path = strtolower($path); $path = preg_replace('/[^ \pL \pN]/', '', $path); $path = preg_replace('[\s+]', '', $path); $path = substr($path, 0, 18); return $path; } public function addAlbum($hash, $title, $description) { $title = $this->cleanStr($title); $description = $this->cleanStr($description); $path = $this->directoryPath($title); if ($title && $description && $hash) { $addAlbum = $this->contentDB->prepare("INSERT INTO gallery_albums (albumHash, albumTitle, albumDescription, albumPath) VALUES (:hash, :title, :description, :path)"); $addAlbum->execute(array('hash' => $hash, 'title' => $title, 'description' => $description, 'path' => $path)); } } } The error when I try it this way is Fatal error: Class 'gallery' not found in /home/opheliad/public_html/admin/photo-gallery/includes/class.admin_photo-gallery.php on line 10
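    One thing that stands out: $gallery = new gallery is executed above the class gallery declaration, and because gallery extends a parent that is only pulled in via include at runtime, PHP defines the gallery class only when execution reaches its declaration, so the instantiation fails first. A sketch of the reordering under that assumption (moving the class into its own included file works the same way):

        // sketch: make sure the subclass is declared (or included) before it is used
        include $_SERVER['DOCUMENT_ROOT'] . '/includes/class.core-functions.php';
        include $_SERVER['DOCUMENT_ROOT'] . '/includes/class.gallery.php'; // hypothetical file holding the gallery class

        $mode = isset($_POST['mode']) ? $_POST['mode'] : null;
        if ($mode === 'addAlbum') {
            $gallery = new gallery();
            $gallery->addAlbum($_POST['hash'], $_POST['title'], $_POST['description']);
        }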

    Read the article

  • BULK INSERT from one table to another all on the server

    - by steve_d
    I have to copy a bunch of data from one database table into another. I can't use SELECT ... INTO because one of the columns is an identity column. Also, I have some changes to make to the schema. I was able to use the export data wizard to create an SSIS package, which I then edited in Visual Studio 2005 to make the changes desired and whatnot. It's certainly faster than an INSERT INTO, but it seems silly to me to download the data to a different computer just to upload it back again. (Assuming that I am correct that that's what the SSIS package is doing). Is there an equivalent to BULK INSERT that runs directly on the server, allows keeping identity values, and pulls data from a table? (as far as I can tell, BULK INSERT can only pull data from a file) Edit: I do know about IDENTITY_INSERT, but because there is a fair amount of data involved, INSERT INTO ... SELECT is kinda of slow. SSIS/BULK INSERT dumps the data into the table without regards to indexes and logging and whatnot, so it's faster. (Of course creating the clustered index on the table once it's populated is not fast, but it's still faster than the INSERT INTO...SELECT that I tried in my first attempt) Edit 2: The schema changes include (but are not limited to) the following: 1. Splitting one table into two new tables. In the future each will have its own IDENTITY column, but for the migration I think it will be simplest to use the identity from the original table as the identity for the both new tables. Once the migration is over one of the tables will have a one-to-many relationship to the other. 2. Moving columns from one table to another. 3. Deleting some cross reference tables that only cross referenced 1-to-1. Instead the reference will be a foreign key in one of the two tables. 4. Some new columns will be created with default values. 5. Some tables aren’t changing at all, but I have to copy them over due to the "put it all in a new DB" request.
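    For comparison, a sketch of the plain server-side copy that keeps identity values; with WITH (TABLOCK) on the target (and the bulk-logged or simple recovery model) SQL Server 2008 can minimally log such an insert into an empty heap, which narrows the gap to SSIS/BULK INSERT. Table and column names are placeholders, and whether minimal logging actually applies depends on the target's indexes and recovery model:

        -- sketch: server-side copy that preserves identity values
        SET IDENTITY_INSERT dbo.TargetTable ON;

        INSERT INTO dbo.TargetTable WITH (TABLOCK) (Id, ColumnA, ColumnB)
        SELECT Id, ColumnA, ColumnB
        FROM dbo.SourceTable;

        SET IDENTITY_INSERT dbo.TargetTable OFF;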

    Read the article

  • populate CoreData data model from JSON files prior to app start

    - by johannes_d
    I am creating an iPad App that displays data I got from an API in JSON format. My Core Data model has several entities(Countries, Events, Talks, ...). For each entity I have one .json file that contains all instances of the entity and its attributes as well as its relationships. I would like to populate my Core Data data model with these entities before the start of the App (otherwise it takes about 15 minutes for the iPad to create all the instances of the entities from the several JSON files using factory methods). I am currently importing the data into CoreData like this: -(void)fetchDataIntoDocument:(UIManagedDocument *)document { dispatch_queue_t dataQ = dispatch_queue_create("Data import", NULL); dispatch_async(dataQ, ^{ //Fetching data from application bundle NSURL *tedxgroupsurl = [[NSBundle mainBundle] URLForResource:@"contries" withExtension:@"json"]; NSURL *tedxeventsurl = [[NSBundle mainBundle] URLForResource:@"events" withExtension:@"json"]; //converting the JSON files to NSDictionaries NSError *error = nil; NSDictionary *countries = [NSJSONSerialization JSONObjectWithData:[NSData dataWithContentsOfURL:countriesurl] options:kNilOptions error:&error]; countries = [countries objectForKey:@"countries"]; NSDictionary *events = [NSJSONSerialization JSONObjectWithData:[NSData dataWithContentsOfURL:eventsurl] options:kNilOptions error:&error]; events = [events objectForKey:@"events"]; //creating entities using factory methods in NSManagedObject Subclasses (Country / Event) [document.managedObjectContext performBlock:^{ NSLog(@"creating countries"); for (NSDictionary *country in countries) { [Country countryWithCountryInfo:country inManagedObjectContext:document.managedObjectContext]; //creating Country entities } NSLog(@"creating events"); for (NSDictionary *event in events) { [Event eventWithEventInfo:event inManagedObjectContext:document.managedObjectContext]; // creating Event entities } NSLog(@"done creating, saving document"); [document saveToURL:document.fileURL forSaveOperation:UIDocumentSaveForOverwriting completionHandler:NULL]; }]; }); dispatch_release(dataQ); } This combines the different JSON files into one UIManagedDocument which i can then perform fetchRequests on to populate tableViews, mapView, etc. I'm looking for a way to create this document outside my application & add it to the mainBundle. Then I could copy it once to the apps DocumentsDirectory and be able I use it (instead of creating the Document within the app from the original JSON files). Any help is appreciated!

    Read the article

  • L-Soft LISTSERV TCPGUI Interface for PHP Creation

    - by poolnoodl
    I'm trying to use LISTSERV's "API" in PHP. L-Soft calls this TCPGUI, and essentially, you can request data like over Telnet. To do this, I'm using PHP's TCP socket functions. I've seen this done in other languages but can't quite convert it to PHP. I can connect, I can change set ASCII or BINARY mode. But I can never quite craft the header packet the way I need to authenticate, so I'm thinking I'm messing up my conversion. C: http://www.lsoft.com/manuals/16.0/htmlhelp/advanced%20topics/TCPGUI.html#2334328 $origin = '[email protected]'; $pwd = 'password'; $host = "example.com"; $port = 2306; $email = "[email protected]"; $list = "mailinglist"; $command = "Query $list FOR $email"; $fp = stream_socket_client("tcp://$host:$port", $errno, $errstr, 30); $cmd = $command . " PW=" . $pwd; $len = strlen($cmd); $orglen = strlen($origin); $n = $len + $orglen + 1; $headerPacket[0] = "1"; $headerPacket[1] = "B"; $headerPacket[2] = "\r"; $headerPacket[3] = "\n"; $headerPacket[4] = ord($n / 256); $headerPacket[5] = ord($n + 255); $headerPacket[6] = ord($orglen); for ($i = 0; $i < $orglen; $i++) { $headerPacket[$i + 7] = ord($origin[$i]); } for ($i = 0; $i < $len; $i++) { $cmdPacket[$i] = ord($cmd[$i]); } fwrite($fp, implode($headerPacket)); while (!feof($fp)) { echo fgets($fp, 1024); } Any thoughts on where I'm going wrong? I'd much appreciate it if anyone could point me toward some code to do this, days of googling and searching here on SO has only lead me to examples in other languages. Of course, if you know C (or Java or Perl as linked below in my comment to bypass the spam filter), PHP, and socket programming fairly well, you could probably rewrite the whole of the code in an hour, maybe a few minutes. You'd have my eternal thanks for that.
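    One thing that looks off in the conversion: ord() maps a character to its byte value, but the header needs the opposite direction, raw bytes produced from integers, so chr()/pack() is probably what is wanted where the C code casts numbers to char. A sketch of the header assembly under that assumption (the byte layout is copied from the code above, with $n + 255 read as the low length byte, i.e. $n % 256; this should be double-checked against the C sample):

        // sketch: build the header as raw bytes; pack('n', ...) gives a 16-bit big-endian value
        $header  = "1B\r\n";
        $header .= pack('n', $len + $orglen + 1);  // total length, high byte then low byte
        $header .= chr($orglen);                   // length of the origin address
        $header .= $origin;

        fwrite($fp, $header);
        fwrite($fp, $cmd);                         // the command bytes can be written as-is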

    Read the article

  • Retain a list of objects and pass it to the create/edit view when validation fails in ASP.NET MVC 2

    - by brainnovative
    I am binding a Foreign key property in my model. I am passing a list of possible values for that property in my model. The model looks something like this: public class UserModel { public bool Email { get; set; } public bool Name { get; set; } public RoleModel Role { get; set; } public IList<RoleModel> Roles { get; set; } } public class RoleModel { public string RoleName { get; set; } } This is what I have in the controller: public ActionResult Create() { IList<RoleModel> roles = RoleModel.FromArray(_userService.GetAllRoles()); UserModel model = new UserModel() { Roles = roles }; return View(model); } In the view I have: <div class="editor-label"> <%= Html.LabelFor(model => model.Role) %> </div> <div class="editor-field"> <%= Html.DropDownListFor(model => model.Role, new SelectList(Model.Roles, "RoleName", "RoleName", Model.Role))%> <%= Html.ValidationMessageFor(model => model.Role)%> </div> What do I need to do to get the list of roles back to my controller to pass it again to the view when validation fails. This is what I need: [HttpPost] public ActionResult Create(UserModel model) { if (ModelState.IsValid) { // insert logic here } //the validation fails so I pass the model again to the view for user to update data but model.Roles is null :( return View(model); } As written in the comments above I need to pass the model with the list of roles again to my view but model.Roles is null. Currently I ask the service again for the roles (model.Roles = RoleModel.FromArray(_userService.GetAllRoles());) but I don't want to add an extra overhead of getting the list from DB when I have already done that.. Anyone knows how to do it?

    Read the article

  • how to find which button has been clicked in app widgets in android?

    - by chrish
    I want to design simple app widget which has two textview and two button for previous/next. I am getting difficult to handle button click in app widget. Actually my desire is,if user click on previous button i want to show previous value and if user click on Next button i want to show next value from database. How to know which button is clicked? here i register button click listener like this public static class UpdateWidgetService extends IntentService { public UpdateWidgetService() { super("UpdateWidgetService"); } @Override protected void onHandleIntent(Intent intent) { AppWidgetManager appWidgetManager = AppWidgetManager .getInstance(this); int incomingAppWidgetId = intent.getIntExtra(EXTRA_APPWIDGET_ID, INVALID_APPWIDGET_ID); if (incomingAppWidgetId != INVALID_APPWIDGET_ID) { updateOneAppWidget(appWidgetManager, incomingAppWidgetId); } } private void updateOneAppWidget(AppWidgetManager appWidgetManager, int appWidgetId) { DatabaseManager dbManager = new DatabaseManager(this); dbManager.open(); String contactNumber, date, status, message; ArrayList<QueuedMessage> listOfQuedMessage = (ArrayList<QueuedMessage>) dbManager .fetchContactNumber(); if (listOfQuedMessage.size() == 0) Log.i("Db", "null"); else{ date = dbManager.fetchDate(); message = dbManager.fetchMessage(); status = dbManager.fetchStatus(); dbManager.closeDatabase(); RemoteViews views = new RemoteViews(this.getPackageName(), R.layout.schdulesms_appwidget_layout); views.setTextViewText(R.id.to_appwidget_saved_data, listOfQuedMessage.get(count).contacNumber); views.setTextViewText(R.id.date_appwidget_saved_data, date); views.setTextViewText(R.id.status_appwidget_saved, status); views.setTextViewText(R.id.message_appwidgset_saved_data, message); **//here i want do** if(button1){ btnNxtClick(views, appWidgetId,listOfQuedMessage.size()); }else{ btPrevClick(views, appWidgetId, listOfQuedMessage.size()); } appWidgetManager.updateAppWidget(appWidgetId, views); } } private void btnNxtClick(RemoteViews views, int appWidgetId,int sizeOfList) { Intent btnNextIntent = new Intent(this, this.getClass()); btnNextIntent.putExtra(EXTRA_APPWIDGET_ID, appWidgetId); PendingIntent btnNextPendingIntent = PendingIntent.getService(this, 0, btnNextIntent, PendingIntent.FLAG_UPDATE_CURRENT); views.setOnClickPendingIntent(R.id.btnNext, btnNextPendingIntent); } private void btPrevClick(RemoteViews views, int appWidgetId,int sizeOfList) { Intent btnNextIntent = new Intent(this, this.getClass()); btnNextIntent.putExtra(EXTRA_APPWIDGET_ID, appWidgetId); PendingIntent btnNextPendingIntent = PendingIntent.getService(this, 0, btnNextIntent, PendingIntent.FLAG_UPDATE_CURRENT); views.setOnClickPendingIntent(R.id.btnPrev, btnNextPendingIntent); } } can anyone help me out from this problem?? thanks

    Read the article

  • remove data layer and put it into its own domain

    - by user334768
    I have a SL4 application that uses EF4 & RIA Services. The DB is SQL 2008. All is working well. Now I want to put the database and web services on one domain (A.com), with the web service exposing the same methods available in my working project (the one described at the top of this message), then put a Silverlight application (the same one as above) on domain B.com and call the web services on A.com. I thought I had a fair understanding of RIA Services, enough to get the above application working. When I say "working" I mean on my local dev machine; I have yet to deploy it as a SL4 & .NET 4 application to my hosting site. But I don't think I understand it well enough. I normally create a new business app, add EF, then create the RIA DomainService, add any [Includes] I need, modify my LINQ queries and run the application. And it works. Now I need to break off my data layer and put it on one hosting site (A.com), and put my UI and business logic on another hosting site (B.com). I think I need to do the following. On the database & web service site (domain A.com): create the application, create EF4, create the RIA Services and deploy. At this point, are the exposed methods available as a "web service" to other applications calling an http://a.com/serviceName.svc address? On the application site (domain B.com): create a business application (which will later need authentication and navigation). How can I create an EF model when I don't have access to the database? (I know I do have access, but I want to know what happens when I do not have access to the database, only data provided by a web service.) If I cannot create an EF model, how do I create my RIA Service? I hope anyone who takes the time to help me understands what I'm asking. Sorry this is so long.

    Read the article

  • mysql: can't set max_allowed_packet to anything greater than 16MB

    - by sas
    I'm not sure if this is the right place to post this kind of question; if it isn't, please (politely) let me know... :-) I need to save files larger than 16MB in a MySQL database from a PHP site... I've already changed c:\xampp\mysql\bin\my.cnf and set max_allowed_packet to 16 MB, and everything worked fine. Then I set it to 32 MB, but there's no way I can handle a file bigger than 16 MB; I get the following error: 'MySQL server has gone away' (the same error I had when max_allowed_packet was set to 1MB). There must be some other setting that doesn't allow me to handle files bigger than 16MB, maybe in the PHP client, I guess, but I don't know where to edit it. This is the code I'm running: when file.txt is smaller than 16.776.192 bytes, it works fine, but if file.txt has 16.777.216 bytes I get the aforementioned error. Oh, and the field download.content is a longblob... $file = 'file.txt'; $file_handle = fopen( $file, 'r' ); $content = fread( $file_handle, filesize( $file ) ); fclose( $file_handle ); db_execute( 'truncate table download', true ); $sql = "insert into download( code, title, name, description, original_name, mime_type, size, content, user_insert_id, date_insert, user_update_id, date_update ) values ( 'new file', 'new file', 'sas.jpg', 'new file', '$file', 'mime', " . filesize( $file ) . ", '" . addslashes( $content ) . "', 0, " . db_char_to_sql( now_char(), 'datetime' ) . ", 0, " . db_char_to_sql( now_char(), 'datetime' ) . " )"; db_execute( $sql, true ); (the db_execute function just opens the connection and executes the sql stuff) Running on Windows XP SP2, server version: 5.0.67-community, PHP Version 4.4.9, mysql client API version: 3.23.49, using ApacheFriends XAMPP (Basispaket) version 1.6.8, which comes with + Apache 2.2.9 + MySQL 5.0.67 (Community Server) + PHP 5.2.6 + PHP 4.4.9 + PEAR + phpMyAdmin 2.11.9.2 ... This is part of the content of c:\xampp\mysql\bin\my.cnf # The MySQL server [mysqld] port= 3306 socket= "C:/xampp/mysql/mysql.sock" basedir="C:/xampp/mysql" tmpdir="C:/xampp/tmp" datadir="C:/xampp/mysql/data" skip-locking key_buffer = 16M # max_allowed_packet = 1M max_allowed_packet = 32M table_cache = 128 sort_buffer_size = 512K net_buffer_length = 8K read_buffer_size = 256K read_rnd_buffer_size = 512K myisam_sort_buffer_size = 8M
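    Two things that may be worth checking alongside my.cnf: the running server's value (to confirm the edited file is the one actually read and the server was restarted), and the client side, since max_allowed_packet is enforced on both ends of the connection and the very old 3.23.x client API reported by phpinfo is a reasonable suspect for a 16MB ceiling. A sketch of the server-side check (values are examples):

        -- sketch: confirm and raise the server-side limit at runtime (requires SUPER privilege)
        SHOW VARIABLES LIKE 'max_allowed_packet';
        SET GLOBAL max_allowed_packet = 33554432;   -- 32 MB; applies to new connections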

    Read the article

  • Is there a way to make PHP's SplHeap recalculate? (aka: add up-heap to SplHeap?)

    - by md2k7
    I am using an SplHeap to hold graph nodes of a tree with directed edges that will be traversed from the leaves to the root. For this, I precalculate the "fan-in" of nodes and put them into the heap so that I can always retrieve the node with the smallest fan-in (0) from it. After visiting a node, I reduce the fan-in of its successor by 1. Then obviously, the heap needs to be recalculated because the successor is now in the wrong place there. I have tried recoverFromCorruption(), but it doesn't do anything and keeps the heap in the wrong order (node with larger fanIn stays in front of smaller fanIn). As a workaround, I'm now creating a new heap after each visit, amounting to a full O(N*log(N)) sort each time. It should be possible, however, to make up-heap operations on the changed heap entry until it's in the right position in O(log(N)). The API for SplHeap doesn't mention an up-heap (or deletion of an arbitrary element - it could then be re-added). Can I somehow derive a class from SplHeap to do this or do I have to create a pure PHP heap from scratch? EDIT: Code example: class VoteGraph { private $nodes = array(); private function calculateFanIn() { /* ... */ } // ... private function calculateWeights() { $this->calculateFanIn(); $fnodes = new GraphNodeHeap(); // heap by fan-in ascending (leaves are first) foreach($this->nodes as $n) { // omitted: filter loops $fnodes->insert($n); } // traversal from leaves to root while($fnodes->valid()) { $node = $fnodes->extract(); // fetch a leaf from the heap $successor = $this->nodes[$node->successor]; // omitted: actual job of traversal $successor->fanIn--; // will need to fix heap (sift up successor) because of this //$fnodes->recoverFromCorruption(); // doesn't work for what I want // workaround: rebuild $fnodes from scratch $fixedHeap = new GraphNodeHeap(); foreach($fnodes as $e) $fixedHeap->insert($e); $fnodes = $fixedHeap; } } } class GraphNodeHeap extends SplHeap { public function compare($a, $b) { if($a->fanIn === $b->fanIn) return 0; else return $a->fanIn < $b->fanIn ? 1 : -1; } }
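    Short of writing a heap from scratch, a common workaround for a missing decrease-key is lazy deletion: after changing a node's fan-in, insert it again with the new value and discard stale copies when extracting. A fragment sketching the idea on top of the question's node objects (not a complete replacement for the code above; a visited flag on the node is assumed so a node is not processed twice):

        // sketch: entries are array(fanIn-at-insert-time, node); stale entries are skipped on extract
        class FanInHeap extends SplHeap
        {
            public function compare($a, $b)
            {
                // smallest fan-in extracted first (inverted comparison, as in GraphNodeHeap above)
                if ($a[0] === $b[0]) return 0;
                return $a[0] < $b[0] ? 1 : -1;
            }
        }

        $heap = new FanInHeap();
        $heap->insert(array($node->fanIn, $node));            // initial insert
        // ... later, after $successor->fanIn--:
        $heap->insert(array($successor->fanIn, $successor));  // cheap O(log N) re-insert

        while ($heap->valid()) {
            list($fanInAtInsert, $n) = $heap->extract();
            if ($fanInAtInsert !== $n->fanIn || !empty($n->visited)) {
                continue;                                     // stale copy or already handled
            }
            $n->visited = true;
            // ... process $n as in calculateWeights() ...
        }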

    Read the article

  • NHibernate and objects with value-semantics

    - by Groo
    Problem: If I pass a class with value semantics (Equals method overridden) to NHibernate, NHibernate tries to save it to db even though it just saved an entity equal by value (but not by reference) to the database. What am I doing wrong? Here is a simplified example model for my problem: Let's say I have a Person entity and a City entity. One thread (producer) is creating new Person objects which belong to a specific existing City, and another thread (consumer) is saving them to a repository (using NHibernate as DAL). Since there is lot of objects being flushed at a time, I am using Guid.Comb id's to ensure that each insert is made using a single SQL command. City is an object with value-type semantics (equal by name only -- for this example purposes only): public class City : IEquatable<City> { public virtual Guid Id { get; private set; } public virtual string Name { get; set; } public virtual bool Equals(City other) { if (other == null) return false; return this.Name == other.Name; } public override bool Equals(object obj) { return Equals(obj as City); } public override int GetHashCode() { return this.Name.GetHashCode(); } } Fluent NH mapping is something like: public class PersonMap : ClassMap<Person> { public PersonMap() { Id(x => x.Id) .GeneratedBy.GuidComb(); References(x => x.City) .Cascade.SaveUpdate(); } } public class CityMap : ClassMap<City> { public CityMap() { Id(x => x.Id) .GeneratedBy.GuidComb(); Map(x => x.Name); } } Right now (with my current NHibernate mapping config), my consumer thread maintains a dictionary of cities and replaces their references in incoming person objects (otherwise NHibernate will see a new, non-cached City object and try to save it as well), and I need to do it for every produced Person object. Since I have implemented City class to behave like a value type, I hoped that NHibernate would compare Cities by value and not try to save them each time -- i.e. I would only need to do a lookup once per session and not care about them anymore. Is this possible, and if yes, what am I doing wrong here?

    Read the article

  • Haskell data serialization of some data implementing a common type class

    - by Evan
    Let's start with the following data A = A String deriving Show data B = B String deriving Show class X a where spooge :: a -> Q [ Some implementations of X for A and B ] Now let's say we have custom implementations of show and read, named show' and read' respectively which utilize Show as a serialization mechanism. I want show' and read' to have types show' :: X a => a -> String read' :: X a => String -> a So I can do things like f :: String -> [Q] f d = map (\x -> spooge $ read' x) d Where data could have been [show' (A "foo"), show' (B "bar")] In summary, I wanna serialize stuff of various types which share a common typeclass so I can call their separate implementations on the deserialized stuff automatically. Now, I realize you could write some template haskell which would generate a wrapper type, like data XWrap = AWrap A | BWrap B deriving (Show) and serialize the wrapped type which would guarantee that the type info would be stored with it, and that we'd be able to get ourselves back at least an XWrap... but is there a better way using haskell ninja-ery? EDIT Okay I need to be more application specific. This is an API. Users will define their As, and Bs and fs as they see fit. I don't ever want them hacking through the rest of the code updating their XWraps, or switches or anything. The most i'm willing to compromise is one list somewhere of all the A, B, etc. in some format. Why? Here's the application. A is "Download a file from an FTP server." B is "convert from flac to mp3". A contains username, password, port, etc. information. B contains file path information. A and B are Xs, and Xs shall be called "Tickets." Q is IO (). Spooge is runTicket. I want to read the tickets off into their relevant data types and then write generic code that will runTicket on the stuff read' from the stuff on disk. At some point I have to jam type information into the serialized data.
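    A sketch of the wrapper idea from the question written out by hand (no Template Haskell). It assumes A and B also derive Read (above they only derive Show) and reuses Q, X and spooge as given; the trade-off is that the signatures become monomorphic in XWrap instead of X a => a:

        -- sketch: tag the concrete type in the serialized form, then dispatch on it
        data XWrap = AWrap A | BWrap B deriving (Show, Read)

        show' :: XWrap -> String
        show' = show

        read' :: String -> XWrap
        read' = read

        f :: [String] -> [Q]
        f = map (dispatch . read')
          where
            dispatch (AWrap a) = spooge a
            dispatch (BWrap b) = spooge b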

    Read the article

  • First record does not show in pagination script

    - by whitstone86
    This is my pagination script which extracts info for my TV guide project that I am working on. Currently I've been experimenting with different PHP/MySQL before it becomes a production site. This is my current script: <?php /*********************************** * PhpMyCoder Paginator * * Created By PhpMyCoder * * 2010 PhpMyCoder * * ------------------------------- * * You may use this code as long * * as this notice stays intact and * * the proper credit is given to * * the author. * ***********************************/ // set the default timezone to use. Available since PHP 5.1 putenv("TZ=US/Eastern") ?> <head> <title> Pagination Test - Created By PhpMyCoder</title> <style type="text/css"> #nav { font: normal 13px/14px Arial, Helvetica, sans-serif; margin: 2px 0; } #nav a { background: #EEE; border: 1px solid #DDD; color: #000080; padding: 1px 7px; text-decoration: none; } #nav strong { background: #000080; border: 1px solid #DDD; color: #FFF; font-weight: normal; padding: 1px 7px; } #nav span { background: #FFF; border: 1px solid #DDD; color: #999; padding: 1px 7px; } </style> </head> <?php //Require the file that contains the required classes include("pmcPagination.php"); //PhpMyCoder Paginator $paginator = new pmcPagination(20, "page"); //Connect to the database mysql_connect("localhost","root","PASSWORD"); //Select DB mysql_select_db("mytvguide"); //Select only results for today and future $result = mysql_query("SELECT programme, channel, airdate, expiration, episode, setreminder FROM lagunabeach where expiration >= now() order by airdate, 3 ASC LIMIT 0, 100;"); //You can also add reuslts to paginate here mysql_data_seek($queryresult,0) ; while($row = mysql_fetch_array($result)) { $paginator->add(new paginationData($row['programme'], $row['channel'], $row['airdate'], $row['expiration'], $row['episode'], $row['setreminder'])); } ?> <?php //Show the paginated results $paginator->paginate (); ?><? include("pca-footer1.php"); ?> <?php //Show the navigation $paginator->navigation(); ?> Despite me having two records for the programmes airing today, it only shows records from the second one onwards - the programme that airs at 8:35pm UK time GMT does not show, but the later 11:25pm UK time GMT one does show. How should I fix this? Above is my code if that is of any use! Thanks

    Read the article

  • Database schema for simple stats project

    - by Bubnoff
    Backdrop: I have a file hierarchy of CSV files for multiple locations, named by the dates they cover, by month specifically. Each CSV file in the folder is named after the location. E.g., folder name: 2010-feb contains: location1.csv location2.csv Each CSV file holds records like this: 2010-06-28, 20:30:00 , 0 2010-06-29, 08:30:00 , 0 2010-06-29, 09:30:00 , 0 2010-06-29, 10:30:00 , 0 2010-06-29, 11:30:00 , 0 meaning of record columns (column names): Date, time, # of sessions. I have a perl script that pulls the data from this mess, and originally I was going to store it as json files, but am thinking a database might be more appropriate long term ...comparing year to year trends ...fun stuff like that. Pt 2 - My question/problem: So I now have a REST service that coughs up json with a test database. My question is [ I suck at db design ], how best to design a database backend for this? I am thinking the following tables would suffice and keep it simple: Location: (PK)location_code, name session: (PK)id, (FK)location_code, month, hour, num_sessions I need to be able to average sessions (plus min and max) for each hour across days of the week, in addition to days of the week in a given month or months. I've been using perl hashes to do this and am trying to decide how best to implement it with a database. Do you think stored procedures should be used? As to the database, depending on info gathered here, it will be PostgreSQL or SQLite. If there is no compelling reason for PostgreSQL I'll stick with SQLite. How and where should I compare the data to hours of operation? I am storing the hours of operation in a yaml file. I currently 'match' the hour in the data to a hash from the yaml to do this. Would a database open up simpler methods? I am thinking I would do this comparison as I do now, then insert the data. It can be recalled with: SELECT hour, num_sessions FROM session WHERE location_code='LOC1' Since only hours of operation are present, I do not need to worry about it. Should I calculate all results as I do now and then store them in a stats table for different 'reports', rather than processing on demand? How would this look? Anyway ...I ramble. Thanks for reading! Bubnoff
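    A sketch of the two tables and the hour-of-day aggregate, in SQL that SQLite and PostgreSQL both accept with only minor tweaks (e.g. SERIAL vs. INTEGER PRIMARY KEY for the id). The question lists a month column on session; a full date is sketched instead, as an assumption, since the day-of-week averages need one:

        -- sketch: schema as described above, with a date column instead of month
        CREATE TABLE location (
            location_code TEXT PRIMARY KEY,
            name          TEXT NOT NULL
        );

        CREATE TABLE session (
            id            INTEGER PRIMARY KEY,
            location_code TEXT    NOT NULL REFERENCES location(location_code),
            visit_date    DATE    NOT NULL,
            hour          INTEGER NOT NULL,   -- 0-23, taken from the CSV time column
            num_sessions  INTEGER NOT NULL
        );

        -- average / min / max sessions per hour of day for one location
        SELECT hour,
               AVG(num_sessions) AS avg_sessions,
               MIN(num_sessions) AS min_sessions,
               MAX(num_sessions) AS max_sessions
        FROM session
        WHERE location_code = 'location1'
        GROUP BY hour
        ORDER BY hour;

    Grouping by day of week can be layered on with strftime('%w', visit_date) in SQLite or extract(dow from visit_date) in PostgreSQL; stored procedures are not required for any of this.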

    Read the article
