Search Results

Search found 8591 results on 344 pages for 'mod cache'.

Page 312/344 | < Previous Page | 308 309 310 311 312 313 314 315 316 317 318 319  | Next Page >

  • MySQL to Excel generation using PHP

    - by pmms
    <?php
    // DB connection here
    mysql_connect("localhost", "root", "");
    mysql_select_db("hitnrunf_db");

    $select = "SELECT * FROM jos_users";
    $export = mysql_query($select) or die("Sql error : " . mysql_error());
    $fields = mysql_num_fields($export);

    // accumulators for the header row and the data rows
    $header = '';
    $data   = '';

    for ($i = 0; $i < $fields; $i++) {
        $header .= mysql_field_name($export, $i) . "\t";
    }

    while ($row = mysql_fetch_row($export)) {
        $line = '';
        foreach ($row as $value) {
            if ((!isset($value)) || ($value == "")) {
                $value = "\t";
            } else {
                $value = str_replace('"', '""', $value);
                $value = '"' . $value . '"' . "\t";
            }
            $line .= $value;
        }
        $data .= trim($line) . "\n";
    }

    $data = str_replace("\r", "", $data);
    if ($data == "") {
        $data = "\n(0) Records Found!\n";
    }

    header("Content-type: application/octet-stream");
    header("Content-Disposition: attachment; filename=your_desired_name.xls");
    header("Pragma: no-cache");
    header("Expires: 0");
    print "$header\n$data";
    ?>

    The above code is used for generating an Excel sheet from MySQL, but we are getting the following error: "The file you are trying to open, 'users.xls', is in a different format than specified by the file extension. Verify that the file is not corrupted and is from a trusted source before opening the file. Do you want to open the file now?"
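
    (For reference: Excel shows that prompt whenever the file's contents, here tab-separated text, don't match the .xls extension. A minimal sketch of the same export written as a real CSV with a matching extension and content type is below; it assumes the same database and table as above and uses mysqli rather than the old mysql_* functions, so treat it as an illustration rather than the poster's code.)

        <?php
        // Sketch: export jos_users as CSV so the extension matches the content.
        $db = new mysqli("localhost", "root", "", "hitnrunf_db");

        header("Content-Type: text/csv; charset=utf-8");
        header("Content-Disposition: attachment; filename=users.csv");
        header("Pragma: no-cache");
        header("Expires: 0");

        $out    = fopen("php://output", "w");
        $result = $db->query("SELECT * FROM jos_users");

        // Header row taken from the result metadata.
        $names = array();
        foreach ($result->fetch_fields() as $field) {
            $names[] = $field->name;
        }
        fputcsv($out, $names);

        // One CSV line per record; fputcsv handles quoting and escaping.
        while ($row = $result->fetch_row()) {
            fputcsv($out, $row);
        }
        fclose($out);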

    Read the article

  • Ruby on Rails: Uploading a modified site

    - by Califer
    I'm having a heck of a time getting a site I modified to work correctly. I didn't set the site up originally, and since the person that set it up no longer works with me, I had to learn Ruby just to make some changes. I made all the changes on the development server and everything worked fine. Then I did a diff on production and development and moved only my changes over. Unfortunately, when I loaded my changes onto the production server I got a lot of errors. I changed all of the permissions to 755, which took care of not being able to access anything at all, but after that I started getting a lot of 500 errors. Nothing showed up in the production.log file. I really have no clue what's going wrong, except that perhaps something is not picking up the new changes. I moved the old site to a backup folder, and the new site crashes whenever it goes to anything that I've changed. In particular, I added a link to a new setup with an extra controller/model/view group. It works fine in development, but in production it gives me a 404. Yes, I did copy all the files up. I even put everything back how it was, but the website is still showing the broken version. I checked the tmp/cache folder but it was empty. Running dispatch.fcgi shows the old site (which I expected), but it still shows the flawed new site when I connect through a browser. I've been tearing my hair out trying to get this to work. Any ideas as to how I can get this to work?

    Read the article

  • Implementing a tagging system with PHP and MySQL: caching help

    - by Hamid Sarfraz
    With reference to this post: http://stackoverflow.com/questions/2122546/how-to-implement-tag-counting I have implemented the suggested three-table tagging system completely. To count the number of articles per tag, I am using another column named tagArticleCount in the tag definition table (the other columns are tagId, tagText, tagUrl, tagArticleCount). If I implement realtime editing of this table, then whenever a user adds another tag to an article or deletes an existing tag, the tag_definition_table is updated to adjust the counter of the added/removed tag. This costs an extra query each time any modification is made (at the same time, the related link entry for the tag and article is deleted from tagLinkTable). An alternative is to not allow any realtime editing of the counter, and instead use cron jobs to update each tag's counter after a specified time period. Here comes the problem I want to discuss. This can be seen as caching the article count in the database. Can you please help me find a way to present the articles in a list when a tag is explored and the article counter for that tag is not up to date? For example:

    1. The counter shows 50 articles, but there are in fact 55 entries in the tag link table (which links tags and articles).
    2. The counter shows 50 articles, but there are in fact 45 entries in the tag link table.

    How should these two scenarios be handled? I am going to use APC to keep a cache of these counters; please consider that too in your solution. Also discuss the performance of realtime versus cron-driven counter updates.
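
    (For reference, one way to handle a stale counter, sketched below purely as an illustration: treat the tag link table as the source of truth, recount on an APC cache miss, and write the fresh value back to both APC and the counter column so the cron and realtime paths converge. The function name, TTL, and mysqli usage are assumptions; only the table and column names come from the question.)

        <?php
        // Hypothetical helper: returns the article count for a tag, refreshing lazily.
        function tag_article_count($db, $tagId, $ttl = 300) {
            $key   = "tag_count_" . $tagId;
            $count = apc_fetch($key, $hit);
            if ($hit) {
                return $count;               // value cached within the last $ttl seconds
            }

            // Recount from the link table, which is always correct.
            $stmt = $db->prepare("SELECT COUNT(*) FROM tagLinkTable WHERE tagId = ?");
            $stmt->bind_param("i", $tagId);
            $stmt->execute();
            $stmt->bind_result($count);
            $stmt->fetch();
            $stmt->close();

            // Persist the fresh value so the tagArticleCount column catches up too.
            $upd = $db->prepare("UPDATE tag_definition_table SET tagArticleCount = ? WHERE tagId = ?");
            $upd->bind_param("ii", $count, $tagId);
            $upd->execute();
            $upd->close();

            apc_store($key, $count, $ttl);
            return $count;
        }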

    Read the article

  • How to replace an auto-implemented C# get body at runtime or compile time?

    - by qstarin
    I've been trying to figure this out all night, but I guess my knowledge of the .NET Framework just isn't that deep and the problem doesn't exactly Google well, but if I can get a nod in the right direction I'm sure I can implement it, one way or another. I'd like to be able to declare a property decorated with a custom attribute as such:

        public class MyClass
        {
            [ReplaceWithExpressionFrom(typeof(SomeOtherClass))]
            public virtual bool MyProperty { get; }
        }

        public class SomeOtherClass : IExpressionHolder<MyClass, bool> { ... }

        public interface IExpressionHolder<TArg, TResult>
        {
            Expression<Func<TArg, TResult>> Expression { get; }
        }

    And then somehow (this is the part I'm having trouble figuring out) replace the automatically generated implementation of that getter with a piece of custom code, something like:

        Type expressionHolderType = LookupAttributeCtorArgTypeInDeclarationOfPropertyWereReplacing();
        return ReplaceWithExpressionFromAttribute.GetCompiledExpressionFrom(expressionHolderType)(this);

    The main thing I'm not sure how to do is replace the automatic implementation of the get. The first thing that came to mind was PostSharp, but that's a more complicated dependency than I care for. I'd much prefer a way to code it without using post-processing attached to the build (I think that's the gist of how PostSharp sinks its hooks in anyway). The other part of this I'm not so sure about is how to retrieve the type parameter passed to the particular instantiation of the ReplaceWithExpressionFrom attribute (where it decorates the property whose body I want to replace; in other words, how do I get typeof(SomeOtherClass) where I'm coding the get body replacement). I plan to cache compiled expressions from concrete instances of IExpressionHolder, as I don't want to do that every time the property gets retrieved. I figure this has just got to be possible. At the very least I figure I should be able to search an assembly for any method decorated with the attribute and somehow proxy the class or just replace the IL or... something? And I'd like to make the integration as smooth as possible, so if this can be done without explicitly calling a registration or initialization method somewhere that'd be super great. Thanks!

    Read the article

  • How can I allocate 8 GB (not 1 GB) of RAM to my JDK on Windows?

    - by Deepak
    The JDK on Windows takes a maximum of around 2 GB of RAM. Even if we allocate more RAM to our JDK, it doesn't take it. If I need to run a process which needs 8 GB of RAM on Windows, how can I achieve it? Is there any JDK from another provider which could support it? Memcached provides additional cache which can be used, but that is not what I am looking for. Suppose I need to run JMeter with 8 GB of RAM on my Windows box; Memcached won't help for sure. Is there any provider which offers this? Previously I thought Terracotta did, but it looks like that is also like Memcached. I am using Windows 7. If needed I can use Windows Server as well; I just need to get it running.

    Read the article

  • forcing a download using PHP / jQuery

    - by Dirty-flow
    I know there are already many questions about forcing a download with PHP, but I can't find what I'm doing wrong or what I should do. I have a list of filenames, and I want to download one of them by clicking a button. My jQuery:

        $(".MappeDownload").on("click", function(e){
            e.stopPropagation();
            fileId = $(this).val();
            $.post("ajax/DownloadFile.php", { id: fileId });
        });

    On the server side I have a table with the file names and the file path:

        $sql = "SELECT vUploadPfad, vUploadOriginname FROM tabUpload WHERE zUploadId='$_POST[id]'";
        $result = mysql_query($sql) or die("");
        $file = mysql_fetch_array($result);
        $localfile = $file["vUploadPfad"];
        $name = $file["vUploadOriginname"];
        $fp = fopen($localfile, 'rb');
        header("Cache-Control: ");
        header("Pragma: ");
        header("Content-Type: application/octet-stream");
        header("Content-Length: " . filesize($localfile));
        header("Content-Disposition: attachment; filename='".$name."';");
        header("Content-Transfer-Encoding: binary\n");
        fpassthru($fp);
        exit;

    The AJAX request is successful, and I'm getting the right headers (file size, file name, etc.), but the download does not start.
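
    (A side note on the symptom described above: data returned to $.post only ever lands in the JavaScript callback, so the browser never shows a save dialog no matter what headers come back; the download typically starts only when the browser itself navigates to the script, e.g. via a plain link or window.location. Below is a minimal sketch of such an endpoint with the query parameterised; the table and column names are taken from the question, but mysqli, the URL, and the rest are assumptions, not the original code.)

        <?php
        // Hypothetical download.php?id=123, opened directly by the browser.
        $db = new mysqli("localhost", "user", "pass", "dbname");

        $id   = (int) $_GET["id"];
        $stmt = $db->prepare("SELECT vUploadPfad, vUploadOriginname FROM tabUpload WHERE zUploadId = ?");
        $stmt->bind_param("i", $id);
        $stmt->execute();
        $stmt->bind_result($path, $name);

        if ($stmt->fetch() && is_readable($path)) {
            header("Content-Type: application/octet-stream");
            header("Content-Length: " . filesize($path));
            header('Content-Disposition: attachment; filename="' . basename($name) . '"');
            readfile($path);
        } else {
            header("HTTP/1.0 404 Not Found");
        }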

    Read the article

  • Magento products will not show in category

    - by Aaron
    I've recently been tasked with the build and deployment of a large e-commerce site. In the past we've had to use the client's legacy X-cart installation for redevelopment (too far integrated with their existing workflow). We'd heard good things about Magento, so I've set up a test install to get to grips with it. After a couple of initial issues, there is a live development site which displays categories on the default theme. The problem we've hit now is that products don't display..! After a lot more in-depth research into this, all I've been able to discover is that about half of developers endorse using other solutions entirely, with the other half saying that after the steep learning curve the platform is as wonderful as we'd initially been led to believe. Now, my test category is showing, so I know this is configured properly. I've set up three test products and associated them with it (all done following the Magento user guide), checked, double-checked and triple-checked that the products are enabled and visible individually, yet still the front end says the category has no products in it. I've cleared the cache repeatedly and reset everything possible many times in index management; no products show up. I have to make a call tomorrow morning on whether we're going ahead with Magento. If I can't even get it to show products, I'm going to have to go with something with a more established track record and more community support available. Can anybody advise what could possibly be wrong here?

    Read the article

  • VS2008 link error using SafeInt3.hpp in 64-bit mode

    - by photo_tom
    I have the below code, which links and runs fine in 32-bit mode:

        #include "safeint3.hpp"

        typedef SafeInt<SIZE_T> SAFE_SIZE_T;

        SAFE_SIZE_T sizeOfCache;
        SAFE_SIZE_T _allocateAmt;

    Here safeint3.hpp is the current version that can be found on CodePlex (SafeInt). For those who are unaware of it, SafeInt is a template class that makes working with different integer types and sizes "safe". To quote a Channel 9 video on the software, "it writes the code that you should". Which is my case: I have a class that manages a large in-memory cache of objects (6 GB) and I am very concerned about making sure that I don't have overflow/underflow issues on my pointers/sizes/other integer variables. In this use, it solves many problems. My problem comes when moving from 32-bit dev mode to 64-bit production mode. When I build the app in this mode, I'm getting the following linker warnings:

        1>cachecontrol.obj : warning LNK4006: "bool __cdecl IntrinsicMultiplyUint64(unsigned __int64 const &,unsigned __int64 const &,unsigned __int64 *)" (?IntrinsicMultiplyUint64@@YA_NAEB_K0PEA_K@Z) already defined in ImageInRamCache.obj; second definition ignored
        1>cachecontrol.obj : warning LNK4006: "bool __cdecl IntrinsicMultiplyInt64(__int64 const &,__int64 const &,__int64 *)" (?IntrinsicMultiplyInt64@@YA_NAEB_J0PEA_J@Z) already defined in ImageInRamCache.obj; second definition ignored

    While I understand I can ignore the warning, I would like to either (a) prevent the warning from occurring or (b) make it disappear so that my QA department doesn't flag it as a problem. And after spending some time researching it, I cannot find a way to do either.

    Read the article

  • byte + byte = int... why?

    - by Robert C. Cartaino
    Looking at this C# code...

        byte x = 1;
        byte y = 2;
        byte z = x + y; // ERROR: Cannot implicitly convert type 'int' to 'byte'

    The result of any math performed on byte (or short) types is implicitly promoted to int. The solution is to explicitly cast the result back to a byte, so...

        byte z = (byte)(x + y); // works

    What I am wondering is why? Is it architectural? Philosophical? We have:

        int + int = int
        long + long = long
        float + float = float
        double + double = double

    So why not the following?

        byte + byte = byte
        short + short = short

    A bit of background: I am performing a long list of calculations on "small numbers" (i.e. < 8) and storing the intermediate results in a large array. Using a byte array (instead of an int array) is faster (because of cache hits). But the extensive byte casts spread through the code make it that much more unreadable.

    Read the article

  • jQuery AJAX search on pressing the return key in Internet Explorer

    - by haribo83
    Hi, I have found the following code to create a search on my site. This works really well; the only problem is that in Internet Explorer the search doesn't work when you press the return key. Does anybody have any ideas? The search code is below; if anything else is needed please let me know.

        $(function() {
            $(".search_button").click(function() {
                var search_word = $("#search_box").val();
                var dataString = 'search_word=' + search_word;

                if (search_word == '') {
                } else {
                    $.ajax({
                        type: "GET",
                        url: "searchdata.php",
                        data: dataString,
                        cache: false,
                        beforeSend: function(html) {
                            document.getElementById("insert_search").innerHTML = '';
                            $("#flash").show();
                            $("#searchword").show();
                            $(".searchword").html(search_word);
                            $("#flash").html('<img src="ajax-loader.gif" /> Loading Results...');
                        },
                        success: function(html) {
                            $("#insert_search").show();
                            $("#insert_search").append(html);
                            $("#flash").hide();
                        }
                    });
                }
                return false;
            });
        });

    Read the article

  • Wrong label for a nodereference in Drupal content-type

    - by MPD
    We have a content type built using CCK. One of the fields is a node reference. The node picker is using a view to build the options. A few days ago, everything was working well. Today, it looks like all node reference fields using views to populate the selection options are displaying the wrong label. Every single label in the options is "A", but the actual node number is correct. The form actually works; just the labels are incorrect. We have tried just about every combination of edit/save, disable/enable, reboot, clear cache, clone the view, rebuild the view, new view, etc., but we still have a big list of As. If we create a brand new content type with a brand new node reference field, we get the problem. Through some backup/restore exercises, we have determined that the problem is actually in the database and not in the code. We can restore our last good backup, but we will lose a decent amount of work we have put into other parts of the database. We enabled MySQL query logging, and the view is actually being called properly, but we cannot track down where the problem creeps in after that (unraveling the CCK / Views / Drupal plumbing is a challenge). The install was built with the latest stable versions as of April. The problem referred to in http://drupal.org/node/624422 is similar, but our code versions include the patches mentioned. Any ideas would be appreciated. Thanks.

    Read the article

  • How do I avoid a race condition in my Rails app?

    - by Cathal
    Hi, I have a really simple Rails application that allows users to register their attendance on a set of courses. The ActiveRecord models are as follows:

        class Course < ActiveRecord::Base
          has_many :scheduled_runs
          ...
        end

        class ScheduledRun < ActiveRecord::Base
          belongs_to :course
          has_many :attendances
          has_many :attendees, :through => :attendances
          ...
        end

        class Attendance < ActiveRecord::Base
          belongs_to :user
          belongs_to :scheduled_run, :counter_cache => true
          ...
        end

        class User < ActiveRecord::Base
          has_many :attendances
          has_many :registered_courses, :through => :attendances, :source => :scheduled_run
        end

    A ScheduledRun instance has a finite number of places available, and once the limit is reached, no more attendances can be accepted:

        def full?
          attendances_count == capacity
        end

    attendances_count is a counter cache column holding the number of attendance associations created for a particular ScheduledRun record. My problem is that I don't fully know the correct way to ensure that a race condition doesn't occur when two or more people attempt to register for the last available place on a course at the same time. My Attendance controller looks like this:

        class AttendancesController < ApplicationController
          before_filter :load_scheduled_run
          before_filter :load_user, :only => :create

          def new
            @user = User.new
          end

          def create
            unless @user.valid?
              render :action => 'new'
            end
            @attendance = @user.attendances.build(:scheduled_run_id => params[:scheduled_run_id])
            if @attendance.save
              flash[:notice] = "Successfully created attendance."
              redirect_to root_url
            else
              render :action => 'new'
            end
          end

          protected

          def load_scheduled_run
            @run = ScheduledRun.find(params[:scheduled_run_id])
          end

          def load_user
            @user = User.create_new_or_load_existing(params[:user])
          end
        end

    As you can see, it doesn't take into account the case where the ScheduledRun instance has already reached capacity. Any help on this would be greatly appreciated.

    Read the article

  • MySQL + Joomla + remote c# access

    - by Jimmy
    Hello, I work on a Joomla web site, installed on a MySQL database and running on IIS7. It's all working fine. I now need to add functionality that lets (Joomla-)registered users change some configuration data. Though I haven't done this yet, it looks straightforward enough to do with Joomla. The data is private, so all external access will be done through HTTPS. I also need an existing C# program, running on another machine, to read that configuration data. Of course, this data access needs to be as fast as possible. The data will be small (and filtered by query), but the latency should be kept to a minimum. A short-term, client-side cache (less than a minute, in case a user updates his configuration data) seems like a good idea. I have done practically zero database/ASP programming so far, so what's the best way of doing that last step? Should the C# program access the database 'directly' (using what? LINQ?) or set up some sort of facade (SOAP?) service? If a service should be used, should it be done through Joomla or with ASP on IIS? Thanks
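
    (If the facade route is taken, a read-only JSON endpoint on the PHP side keeps the C# client trivial: it just issues an HTTPS GET and caches the body for under a minute. The sketch below is an assumption-heavy illustration, not Joomla API: the URL, table, column names, and shared-secret check are made up for the example.)

        <?php
        // Hypothetical endpoint: GET /userconfig.php?user_id=42&token=...
        if (!isset($_GET['token']) || $_GET['token'] !== 'shared-secret') {
            header("HTTP/1.0 403 Forbidden");
            exit;
        }

        $db   = new mysqli("localhost", "user", "pass", "joomla_db");
        $id   = (int) $_GET['user_id'];
        $stmt = $db->prepare("SELECT config_key, config_value FROM user_config WHERE user_id = ?");
        $stmt->bind_param("i", $id);
        $stmt->execute();
        $stmt->bind_result($key, $value);

        $config = array();
        while ($stmt->fetch()) {
            $config[$key] = $value;
        }

        header("Content-Type: application/json");
        header("Cache-Control: max-age=30");   // lets the C# client cache briefly
        echo json_encode($config);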

    Read the article

  • Problem deleting .svn directories on Windows XP

    - by John L
    I don't seem to have this problem on my home laptop with Windows XP, but then I don't do much work there. On my work laptop, with Windows XP, I have a problem deleting directories when they contain .svn directories. When it does eventually work, I have the same issue emptying the Recycle Bin. The pop-up window says "Cannot remove folder text-base: The directory is not empty" (or prop-base, or some other folder under .svn). This continued to happen after I changed the TortoiseSVN configuration to stop the TSVN cache process from running, and after a reboot of the system. Multiple tries will eventually get it done. But it is a huge annoyance because there are other issues I'm trying to fix, so I'm hoping this is related. 'Connected Backup PC' also runs on the laptop, and the real problem is that Cygwin commands don't always work. So I keep thinking the dot files and dot directories have something to do with both problems, and/or that the backup or some other process scanning the directories is doing it. But I've run out of ideas of what to try or how to identify the problem further.

    Read the article

  • Extremely difficult problem with an ASP.NET 4.0 WebForms app using routing

    - by dudeNumber4
    I have a completed app running in a QA environment. Everything works fine under most circumstances. If you hit a plain URL (no identifying information in the URL), you see an intro page with a button (generated by an asp:LinkButton control) that posts back and directs you to another page. The markup looks the same when it fails and when it doesn't. When such a URL is followed from, e.g., Word and the default browser is IE, the intro page loads fine, but clicking the button causes an error. When not debugging, this behavior occurs every time. While debugging, the error occurs only about 1 in 10 times (closing the browser instance and starting over every time). When the error occurs, the intro page Page_Load fires and IsPostBack is false. Somehow, instead of a POST, a GET is being issued. When I run Fiddler to try to analyze the actual calls (I can't use Firebug because it never happens using Firefox), everything works every time. I don't know whether this issue has anything to do with routing, and I've no idea even what to look at next. The strange thing is, when I debug, the intro page doesn't fully load every time. Only about 1 in 3 times does it fully load, even if I've just cleared the browser cache. When I run it through Fiddler, it fully loads and works fine every time.

    Read the article

  • local vs core controller

    - by latvian
    Hi, I am adding a new column and action in the local admin grid app/code/local/Mage/Adminhtml/Block/Catalog/Product/Grid.php, which works fine. However, the local controller app/code/local/Mage/Adminhtml/Block/Catalog/Product.php is not being used, i.e. it is not overloading the core one app/code/core/Mage/Adminhtml/Block/Catalog/Product.php. This is an almost fresh install of Magento 1.4.0.1. I am the only one working on it, so I know it is not overloaded by some custom controller. I have disabled all custom modules. I have rolled back most of my changes. I have checked /etc/Modules/Mage_Catalog.xml. I have refreshed the cache every possible way and logged in and out. Nothing; it is still using the core copy. Why? How do you troubleshoot this, meaning, at what moment does Magento decide between using the core or the local copy? It's even more strange because it does not parse the local Adminhtml config.xml, yet it does use the local Adminhtml copy of the Blocks. Any pointer would help. I would like to keep everything in local code. Thank you, Margots

    Read the article

  • Why is my long polling code for a notification system not updating in real time? (PHP/MySQL)

    - by tjones
    I am making a notification system similar to the red notification counter on Facebook. It should update the number of messages sent to a user in real time: when the messages MySQL table is updated, it should instantly notify the user, but it does not. There does not seem to be an error inserting into MySQL, because on page refresh the notifications update just fine. I am essentially using code from this video tutorial: http://www.screenr.com/SNH (which updates in real time if a data.txt file is changed, but it is not written for MySQL like I am trying to do). Is there something wrong with the code below?

    JavaScript:

        <script type="text/javascript">
        $(document).ready(function(){
            var timestamp = null;
            function waitForMsg(){
                $.ajax({
                    type: "GET",
                    url: "getData.php",
                    data: "userid=" + userid,
                    async: true,
                    cache: false,
                    success: function(data){
                        var json = eval('(' + data + ')');
                        if (json['msg'] != "") {
                            $('.notification').fadeIn().html(json['msg']);
                        }
                        setTimeout('waitForMsg()', 30000);
                    },
                    error: function(XMLHttpRequest, textStatus, errorThrown){
                        setTimeout('waitForMsg()', 30000);
                    }
                });
            }
            waitForMsg();
        });
        </script>

        <body>
        <div class="notification"></div>

    PHP:

        <?php
        if ($_SERVER['REQUEST_METHOD'] == 'GET') {
            $userid = $_GET['userid'];
            include("config.php");

            $sql = "SELECT MAX(time) FROM notification WHERE userid='$userid'";
            $result = mysql_query($sql);
            $row = mysql_fetch_assoc($result);
            $currentmodif = $row['MAX(time)'];

            $s = "SELECT MAX(lasttimeread) FROM notificationsRead WHERE submittedby='$userid'";
            $r = mysql_query($s);
            $rows = mysql_fetch_assoc($r);
            $lasttimeread = $rows['MAX(lasttimeread)'];

            while ($currentmodif <= $lasttimeread) {
                usleep(10000);
                clearstatcache();
                $currentmodif = $row['MAX(time)'];
            }

            $response = array();
            $response['msg'] = "You have new messages";
            echo json_encode($response);
        }
        ?>
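
    (One thing that stands out in the PHP above: inside the while loop, $currentmodif is only re-read from the $row that was fetched before the loop, so it never changes and the script either returns immediately or sleeps until it times out. A minimal sketch of a wait loop that re-runs the query on each pass is below; it keeps the table and column names from the question but uses mysqli and made-up timing values, so treat it as an illustration rather than a drop-in fix.)

        <?php
        // Hypothetical rewrite of the wait loop: poll the table until a newer row appears.
        $db     = new mysqli("localhost", "user", "pass", "dbname");
        $userid = (int) $_GET['userid'];

        $stmt = $db->prepare("SELECT MAX(lasttimeread) FROM notificationsRead WHERE submittedby = ?");
        $stmt->bind_param("i", $userid);
        $stmt->execute();
        $stmt->bind_result($lastread);
        $stmt->fetch();
        $stmt->close();

        $started = time();
        do {
            $stmt = $db->prepare("SELECT MAX(time) FROM notification WHERE userid = ?");
            $stmt->bind_param("i", $userid);
            $stmt->execute();
            $stmt->bind_result($currentmodif);
            $stmt->fetch();
            $stmt->close();

            if ($currentmodif > $lastread) {
                break;                       // something new arrived
            }
            usleep(500000);                  // wait half a second before asking again
        } while (time() - $started < 25);    // give up before the 30-second client timeout

        header("Content-Type: application/json");
        echo json_encode(array("msg" => $currentmodif > $lastread ? "You have new messages" : ""));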

    Read the article

  • What are CDN Best Practices?

    - by Wild Thing
    Hi, I have recently started using the Rackspace Cloud Files CDN (Limelight), about which I have some questions:

    1. I am using jQuery, jQuery UI and jQuery Tools in addition to custom JS code. Also, my site is written in ASP.NET, which means there is some ASP.NET-generated JS code. Right now I have combined all of the JS (including the jQuery code), except the ASP.NET-generated JS, into one file, and I am hosting this on the Rackspace CDN. I am wondering if it would make more sense to just get the jQuery and jQuery UI files from the Google-hosted CDN (which I suspect would serve these files very well, since they will already be in many users' caches)? This would mean one extra HTTP request, so I'm not sure if it'll help.

    2. Right now I have multiple containers for my assets. For example, in Rackspace I have 3 containers: JS, CSS and Images. The URL subdomain for all 3 is different. Will that lead to a performance penalty? Should I just use one container (and thus one domain for the CDN)?

    3. Is there a way of having the ASP.NET-generated JS loaded from the Microsoft CDN? Would this have a performance hit as per the first question?

    Thanks in advance, WT

    Read the article

  • Strange numbers in Java socket output

    - by user293163
    I have a small test app:

        Socket socket = new Socket("jeck.ru", 80);
        PrintWriter pw = new PrintWriter(socket.getOutputStream(), false);
        pw.println("GET /ip/ HTTP/1.1");
        pw.println("Host: jeck.ru");
        pw.println();
        pw.flush();

        BufferedReader rd = new BufferedReader(new InputStreamReader(socket.getInputStream()));
        String str;
        while ((str = rd.readLine()) != null) {
            System.out.println(str);
        }

    Its output:

        HTTP/1.1 200 OK
        Date: Sat, 13 Mar 2010 22:06:51 GMT
        Content-Type: text/html;charset=utf-8
        Transfer-Encoding: chunked
        Connection: keep-alive
        Keep-Alive: timeout=5
        Server: Apache
        Cache-Control: max-age=0
        Expires: Sat, 13 Mar 2010 22:06:51 GMT

        123
        <!DOCTYPE html>
        <html>
        <head>
        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
        <title>??? IP</title>
        </head>
        <body>
        <div style='text-align: center; font: 32pt Verdana;margin-top: 300px'>
        ??? IP &#151; 94.103.87.153
        </div>
        </body>
        </html>
        0

    Where do these numbers (123 and 0) come from?

    Read the article

  • Drupal hook_menu_alter() for adding tabs

    - by EricP
    I want to add some tabs on the "node/%/edit" page from my module called "cssswitch". When I click "Rebuild Menus", the two new tabs are displayed, but they are displayed for ALL nodes when editing them, not just for nodes of type "cssswitch". I want these new tabs to be displayed only when editing a node of type "cssswitch". The other problem is that when I clear all caches, the tabs completely disappear from all edit pages. Below is the code I wrote.

        function cssswitch_menu_alter(&$items) {
          $node = menu_get_object();
          //print_r($node);
          //echo $node->type;
          //exit();
          if ($node->type == 'cssswitch') {
            $items['node/%/edit/schedulenew'] = array(
              'title' => 'Schedule1',
              'access callback' => 'user_access',
              'access arguments' => array('view cssswitch'),
              'page callback' => 'cssswitch_schedule',
              'page arguments' => array(1),
              'type' => MENU_LOCAL_TASK,
              'weight' => 4,
            );
            $items['node/%/edit/schedulenew2'] = array(
              'title' => 'Schedule2',
              'access callback' => 'user_access',
              'access arguments' => array('view cssswitch'),
              'page callback' => 'cssswitch_test2',
              'page arguments' => array(1),
              'type' => MENU_LOCAL_TASK,
              'weight' => 3,
            );
          }
        }

        function cssswitch_test(){
          return 'test';
        }

        function cssswitch_test2(){
          return 'test2';
        }

    Thanks for any help.
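
    (One observation, offered as a sketch rather than a definitive fix: hook_menu_alter() runs only when the menu router is rebuilt, and menu_get_object() has no current node at that point, so a per-node-type check cannot live there. A common pattern is to register the tab unconditionally and decide per request in an access callback; the sketch below uses the %node wildcard so the edited node is passed in, and the access callback name is made up for the example.)

        function cssswitch_menu_alter(&$items) {
          $items['node/%node/edit/schedulenew'] = array(
            'title' => 'Schedule1',
            'page callback' => 'cssswitch_schedule',
            'page arguments' => array(1),
            'access callback' => 'cssswitch_schedule_tab_access',   // hypothetical name
            'access arguments' => array(1),
            'type' => MENU_LOCAL_TASK,
            'weight' => 4,
          );
        }

        // Decides at request time whether the tab applies to the node being edited.
        function cssswitch_schedule_tab_access($node) {
          return $node && $node->type == 'cssswitch' && user_access('view cssswitch');
        }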

    Read the article

  • Hibernate JPA caching problem, please help!

    - by Sameer Malhotra
    OK, here is my problem. I have a table named Master_Info_tbl; it's a lookup table. Here is the code for the entity mapped to it:

        @Entity
        @Table(name="MASTER_INFO_T")
        public class CodeValue implements java.io.Serializable {

            private static final long serialVersionUID = -3732397626260983394L;

            private Integer objectid;
            private String codetype;
            private String code;
            private String shortdesc;
            private String longdesc;
            private Integer dptid;
            private Integer sequen;
            private Timestamp begindate;
            private Timestamp enddate;
            private String username;
            private Timestamp rowlastchange;

            // getter/setter methods
        }

    I have a service layer which calls the method service.findbycodeType("Code1"). This table is queried the same way for the other code types as well (code2, code3 and so on up to code10), each of which gets its result set from the same table and is shown in drop-downs on the JSP pages. Since these drop-downs are on 90% of the pages, I am thinking of caching them globally. Any idea how to achieve this? FYI: I am using JPA and Hibernate with Struts2 and Spring. The database being used is DB2 UDB 8.2. Please help!

    Read the article

  • Which options do I have for Java process communication?

    - by Dmitriy Matveev
    We have a place in our code of the following form:

        void processParam(Object param) {
            wrapperForComplexNativeObject result = jniCallWhichMayCrash(param);
            processResult(result);
        }

    processParam - a method which is called with many different arguments.
    jniCallWhichMayCrash - a native method which is intended to do some complex processing of its parameter and to create some complex object. It can crash in some cases.
    wrapperForComplexNativeObject - a wrapper type generated by SWIG.
    processResult - a method written in pure Java which processes its parameter by creating several kinds of objects (by "kinds" I don't mean classes, more like hierarchies):

    1 - Some non-unique objects which reference each other (from the same hierarchy); these objects can have duplicates created by invocations of the processParam() method with different parameter values. Since it's costly to keep all the duplicates, it's necessary to cache them.
    2 - Some unique objects which reference each other (from the same hierarchy) and some of the objects of the 1st kind.

    After processParam is executed for each of the arguments from some set, the data created in processResult will be processed together. The problem is that the jniCallWhichMayCrash method may crash the entire JVM, and this would be very bad. The crash may happen for one argument value and not for another. We've decided that it's better to ignore crashes inside the JVM and just skip some chunks of data when such crashes occur. In order to do this, we should run the processParam function inside a separate process and pass the result somehow (HOW? HOW?! This is the question) to the main process, so in case of any crashes we will only lose some part of the data (that's OK) without losing everything else. So for now the main problem is the implementation of transport between the different processes. Which options do I have? I can think about serialization and transmitting the binary data over streams, but serialization may not be very fast due to object complexity. Maybe I have some other options for implementing this?

    Read the article

  • Best solution for __autoload

    - by tpk
    As our PHP5 OO application grew (in both size and traffic), we decided to revisit the __autoload() strategy. We always name the file by the class definition it contains, so class Customer would be contained within Customer.php. We used to list the directories in which a file could potentially exist, until the right .php file was found. This is quite inefficient, because you're potentially going through a number of directories which you don't need to, and doing so on every request (thus making loads of stat() calls). Solutions that come to mind:

    - Use a naming convention that dictates the directory name (similar to PEAR). Disadvantage: doesn't scale too well, resulting in horrible class names.
    - Come up with some kind of pre-built array of the locations (Propel does this for its __autoload). Disadvantage: requires a rebuild before any deploy of new code.
    - Build the array "on the fly" and cache it (see the sketch below). This seems to be the best solution, as it allows for any class names and directory structure you want, and is fully flexible in that new files just get added to the list. The concerns are: where to store it, and what about deleted/moved files? For storage we chose APC, as it doesn't have the disk I/O overhead. With regard to file deletes, it doesn't matter, as you probably don't want to require them anywhere anyway. As to moves... that's unresolved (we ignore it, as historically it didn't happen very often for us).

    Any other solutions?
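
    (A minimal sketch of the third option, a build-on-the-fly class map cached in APC; the directory list, cache key, and rebuild-on-miss policy are assumptions for the example, not the setup described above.)

        <?php
        // Hypothetical autoloader: class map built once by scanning the source tree,
        // cached in APC, and rebuilt only when a requested class is not in the map.
        function build_class_map(array $dirs) {
            $map = array();
            foreach ($dirs as $dir) {
                $it = new RecursiveIteratorIterator(new RecursiveDirectoryIterator($dir));
                foreach ($it as $file) {
                    if ($file->isFile() && substr($file->getFilename(), -4) === '.php') {
                        // File name == class name convention, e.g. Customer.php -> Customer
                        $map[substr($file->getFilename(), 0, -4)] = $file->getPathname();
                    }
                }
            }
            return $map;
        }

        spl_autoload_register(function ($class) {
            static $map = null;
            if ($map === null) {
                $map = apc_fetch('classmap', $hit);
                if (!$hit) {
                    $map = build_class_map(array(__DIR__ . '/lib', __DIR__ . '/models'));
                    apc_store('classmap', $map);
                }
            }
            if (!isset($map[$class])) {
                // A new file deployed since the map was built: rebuild once and re-cache.
                $map = build_class_map(array(__DIR__ . '/lib', __DIR__ . '/models'));
                apc_store('classmap', $map);
            }
            if (isset($map[$class])) {
                require $map[$class];
            }
        });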

    Read the article

  • When clicking on an Ajax.ActionLink, in its OnComplete function I can't update the HTML of a div with the requested data

    - by Milka Salkova
    In one partial view I've got some Ajax.ActionLinks which, when clicked, update the div 'importPartUpdate' (they just update the div with new Ajax.ActionLinks with other route values). The problem is that when this update is completed I have to update another div, depending on which link was clicked. That's why in the OnComplete function of my Ajax.ActionLink I make an AJAX request to the action 'GridViewLanguage', which returns the partial view that should update this other div with class .floatLanguage. The first time I click a link everything works correctly and my two divs are updated. But the second time I click a new link, it seems the .floatLanguage div is not updated, as if the browser is somehow caching the previous info. I tried with cache: false; nothing worked.

        @model MvcBeaWeb.GroupMenu

        <nav class="sidebar-nav">
          <div class="divLeftShowMenu">
            <ul>
              @{
                if (Model != null) {
                  foreach (MvcBeaDAL.WebServiceBeaMenu item in Model.MenuLeft) {
                    <li>
                      @Ajax.ActionLink(@item.SpecialWord, "ImportShow",
                        new { id = Model.LanguageName, menuID = item.ID, articlegroupID = item.ArticlegroupID, counter = 1 },
                        new AjaxOptions {
                          UpdateTargetId = "importPartUpdate",
                          HttpMethod = "GET",
                          InsertionMode = InsertionMode.Replace,
                          OnComplete = "success(" + item.ID + ")"
                        },
                        new { id = item.ID })
                    </li>
                  }
                }
              }
            </ul>
          </div>
        </nav>

        <script>
          function success(ids) {
            var nocache = new Date().getTime();
            jQuery.ajax({
              url: '@Url.Action("GridLanguageView")/?menuID=' + ids
            }).done(function (data) {
              $(".floatLanguage").replaceWith(data);
              alert(data);
            });
          }
        </script>

    Read the article

  • Reason for socket.error

    - by August Flanagan
    Hi, I am a complete newbie when it comes to Python, and programming in general. I've been working on a little webapp for the past few weeks trying to improve my coding chops. A few days ago my laptop was stolen, so I went out and got a new MacBook Pro. Thank God I had everything under Subversion control. The problem is that now that I am on my new machine, a script that I was running has stopped working, and I have no idea why. This is really the only part of what I have been writing that I borrowed heavily from existing scripts. It is from the widely available whois.py script, and I have only slightly modified it as follows (see below). It was running fine on my old system (running Ubuntu), but now socket.error is being raised. I'm completely lost on this and would really appreciate any help. Thanks!

        def is_available(domainname, whoisserver="whois.verisign-grs.com", cache=0):
            if whoisserver is None:
                whoisserver = "whois.networksolutions.com"
            s = None
            while s == None:
                try:
                    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                    s.setblocking(0)
                    try:
                        s.connect((whoisserver, 43))
                    except socket.error, (ecode, reason):
                        if ecode in (115, 150):
                            pass
                        else:
                            raise socket.error, (ecode, reason)
                    ret = select.select([s], [s], [], 30)
                    if len(ret[1]) == 0 and len(ret[0]) == 0:
                        s.close()
                        raise TimedOut, "on connect"
                    s.setblocking(1)
                except socket.error, (ecode, reason):
                    print ecode, reason
                    time.sleep(1)
                    s = None
            s.send("%s \n\n" % domainname)
            page = ""
            while 1:
                data = s.recv(8196)
                if not data:
                    break
                page = page + data
            s.close()

    Read the article
