Search Results

Search found 10602 results on 425 pages for 'media queries'.

Page 396 of 425

  • The best way to separate admin functionality from a public site?

    - by AndrewO
    I'm working on a site that's grown both in terms of user base and functionality to the point where it's becoming evident that some of the admin tasks should be separate from the public website. I was wondering what the best way to do this would be.

    For example, the site has a large social component to it, and a public sales interface. But at the same time, there are back-office tasks, bulk upload processing, dashboards (with long-running queries), and customer relations tools in the admin section that I would like to not be affected by spikes in public traffic (or affect the public-facing response time).

    The site is running on a fairly standard Rails/MySQL/Linux stack, but I think this is more of an architecture problem than an implementation one: mainly, how does one keep the data and business logic in sync between these different applications? Some strategies that I'm evaluating:

    1) Create a slave database of the public-facing database on another machine. Extract out all of the model and library code so that it can be shared between the applications. Create new controllers and views for the admin interfaces. I have limited experience with replication and am not even sure that it's supposed to be used this way (most of the time I've seen it, it's been for scaling out the read capabilities of the same application, rather than having multiple different ones). I'm also worried about the potential for latency issues if the slave is not on the same network.

    2) Create new, more task/department-specific applications and use message-oriented middleware to integrate them. I read Enterprise Integration Patterns a while back and it seemed to advocate this for distributed systems. (Alternatively, in some cases the basic Rails-style RESTful API functionality might suffice.) But I have nightmares about data synchronization issues and the massive re-architecting that this would entail.

    3) Some mixture of the two. For example, the only public information necessary for some of the back-office tasks is a read-only completion time or status. Would it make sense to have that on a completely separate system and send the data to the public one? Meanwhile, the user/group admin functionality would run on a separate system sharing the database? The downside is, this seems to keep many of the concerns I have with the first two, especially the re-architecting.

    I'm sure the answers are going to be highly dependent on a site's specific needs, but I'd love to hear success (or failure) stories.

  • Alternative to Page_Load in ASP.NET (and a good WTF story)

    - by Jason
    Woo, I have a doozy of a problem (might even be one for the Daily WTF) and I'm hoping there's a solution. (My apologies for the long post...)

    I have been working on a website that I inherited about a month ago. One of the parts I have been working on is fixing one of the controls (essentially a dynamic header bar) so that it displays additional information as requested by my users. As part of doing this project, I created a Site.master file so that I wouldn't have to recode the header bar into every single page. When I first started doing this, it seemingly worked very well. All the pages I had developed looked great and the bar updated as it should, displaying the information as it should.

    Well, when I dropped the Site.master (and this control) into older site pages (ones I did not specifically develop) I noticed that it looked bad on some of them, but not all of them. When I say it looked bad: basically, the control would left-align itself to the page rather than center as it should. I spent a couple hours debugging to no avail - CSS looked correct, the HTML appeared to be okay, I didn't see anything in the Javascript (although I did miss something, as I'll point out in a second), and even the old code looked correct (to the best that it could - it's not very well written). Another coworker took a look at the site and couldn't find anything at first, either. It wasn't until I just thought to look at the rendered source code of the page (I had been working in the developer view up to this point in IE8) that it became clear what was wrong.

    The original developer performs searches on many of the pages. To accomplish this, he queries the database for ALL the data and then loads it into Javascript arrays within the page so he can get access to it. This in itself is a huge problem because we're talking about thousands of items, and it obviously isn't scalable (and, yes, the site is slow). However, it finally clicked what was screwing up the Site.master - when he loads the data into the Javascript arrays, he writes out the data to the HTML upon Page_Load using numerous Response.Write(string) calls. The WTF (and what was messing me up) is that he inserts the Javascript before the DOCTYPE, causing IE to go into quirks mode!

    So, because I need to at least get this release out (I'll fix the real problem later), I was wondering: is there a way I can force this Javascript to be inserted elsewhere into the HTML - after the DOCTYPE at the very least? Right now, all the Response.Write() calls are being done in the Page_Load method. I just need them to be inserted later.
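
    One hedged possibility, sketched under the assumption that the pages derive from System.Web.UI.Page: move the output from the raw Response.Write() calls to the ClientScript API, which emits the script inside the rendered <form> element - safely after the DOCTYPE. BuildSearchArrayScript() below is a hypothetical stand-in for the existing array-building code.

        // Sketch only: BuildSearchArrayScript() is assumed to return the same JS
        // string the old Response.Write() calls produced.
        protected void Page_Load(object sender, EventArgs e)
        {
            string arrayScript = BuildSearchArrayScript();
            // Renders just after the opening <form> tag (i.e., after the DOCTYPE):
            ClientScript.RegisterClientScriptBlock(GetType(), "searchArrays", arrayScript, true);
            // ...or just before </form> instead:
            // ClientScript.RegisterStartupScript(GetType(), "searchArrays", arrayScript, true);
        }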

  • Rails Tutorial Chapter 4 2nd Ed. Title Helper not being called without argument.

    - by SuddenlyAwakened
    I am running through the Rails Tutorial by Michael Hartl (screencast). I ran into an issue in chapter 4 with the title helper. I have been putting my own twist on the code as I go to make sure I understand it all. However, this one is very similar to the book's, and I am not quite sure why it is acting the way it is. Here is the code:

    application.html.erb

        <!DOCTYPE html>
        <html>
          <head>
            <%- Rails.logger.info("Testing: #{yield(:title)}") %>
            <title><%= full_title(yield(:title)) %></title>
            <%= stylesheet_link_tag "application", :media => "all" %>
            <%= javascript_include_tag "application" %>
            <%= csrf_meta_tags %>
          </head>
          <body>
            <%= yield %>
          </body>
        </html>

    application_helper.rb

        module ApplicationHelper
          def full_title(page_title)
            full_title = "Ruby on Rails Tutorial App"
            full_title += " | #{page_title}" unless page_title.blank?
          end
        end

    home.html.erb

        <h1><%= t(:sample_app) %></h1>
        <p>
          This is the home page for the
          <a href="http://railstutorial.org/">Ruby on Rails Tutorial</a>
          sample application
        </p>

    about.html.erb

        <% provide(:title, t(:about_us)) %>
        <h1><%= t(:about_us) %></h1>
        <p>
          The <a href="http://railstutorial.org/">Ruby on Rails Tutorial</a> is a
          project to make a book and screencast to teach web development with
          <a href="http://railstutorial.org/">Ruby on Rails</a>. This is the sample
          application for the tutorial.
        </p>

    What happens: the code works fine when I call the provide method as on the about page. However, when I do not, it does not seem to even call the helper. I am assuming that is because no title is passed back. Any ideas on what I am doing wrong? Thank you all for your help.
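
    For what it's worth, a likely culprit, sketched (not from the original post): the helper probably is being called, but when page_title is blank the `unless` guard skips the `+=`, so the method's last expression evaluates to nil and the <title> renders empty. Returning the base title explicitly avoids that:

        # Sketch of a fix: always return a string, with or without a page title.
        module ApplicationHelper
          def full_title(page_title)
            base_title = "Ruby on Rails Tutorial App"
            page_title.blank? ? base_title : "#{base_title} | #{page_title}"
          end
        end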

  • for loop with count from array, limit output? PHP

    - by Philip
        print '<div id="wrap">';
        print "<table width=\"100%\" border=\"0\" align=\"center\" cellpadding=\"3\" cellspacing=\"3\">";
        for ($i = 0; $i < count($news_comments); $i++) {
            print '
            <tr>
                <td width="30%"><strong>'.$news_comments[$i]['comment_by'].'</strong></td>
                <td width="70%">'.$news_comments[$i]['comment_date'].'</td>
            </tr>
            <tr>
                <td></td>
                <td>'.$news_comments[$i]['comment'].'</td>
            </tr>
            ';
        }
        print '</table></div>';

    $news_comments is a multi-dimensional array from mysqli_fetch_assoc returned from a function elsewhere. For some reason my for loop returns the total of the array sets such as [0][2] etc. until it reaches the max amount from the counted $news_comments var, which is the return of a function using LIMIT 10. My problem is: if I add any text/html/icons inside the for loop, it prints them (in this case 11 times) even though only array sets 1 and 2 have data inside them. How do I get around this? My function query is as follows:

        function news_comments() {
            require_once '../data/queries.php';
            // get newsID from the url
            $urlID = $_GET['news_id'];
            // run our query for newsID information
            $news_comments = selectQuery('*', 'news_comments', 'WHERE news_id='.$urlID.'', 'ORDER BY comment_date', 'DESC', '10'); // requires 6 params
            // check query for results
            if (!$news_comments) {
                // loop error session and initiate var
                foreach ($_SESSION['errors'] as $error => $err) {
                    print htmlentities($err) . ' for News Comments, be the first to leave a comment!';
                }
            } else {
                print '<div id="wrap">';
                print "<table width=\"100%\" border=\"0\" align=\"center\" cellpadding=\"3\" cellspacing=\"3\">";
                for ($i = 0; $i < count($news_comments); $i++) {
                    print '
                    <tr>
                        <td width="30%"><strong>'.$news_comments[$i]['comment_by'].'</strong></td>
                        <td width="70%">'.$news_comments[$i]['comment_date'].'</td>
                    </tr>
                    <tr>
                        <td></td>
                        <td>'.$news_comments[$i]['comment'].'</td>
                    </tr>
                    ';
                }
                print '</table></div>';
            }
        } // End function

    Any help is greatly appreciated.
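
    A hedged sketch of one workaround: iterate with foreach and skip rows that came back empty, so a stray count can't print extra markup. This assumes $news_comments really is the array of associative rows described above:

        // Sketch only: guard each row before printing instead of trusting count().
        foreach ($news_comments as $comment) {
            if (empty($comment['comment'])) {
                continue; // skip empty/padded entries rather than printing blank rows
            }
            print '<tr><td width="30%"><strong>' . htmlentities($comment['comment_by']) . '</strong></td>';
            print '<td width="70%">' . htmlentities($comment['comment_date']) . '</td></tr>';
            print '<tr><td></td><td>' . htmlentities($comment['comment']) . '</td></tr>';
        }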

  • Gotchas INSERTing into SQLite on Android?

    - by paul.meier
    Hi friends, I'm trying to set up a simple SQLite database in Android, handling the schema via a subclass of SQLiteOpenHelper. However, when I query my tables, the columns I think I've inserted are never present.

    Namely, in SQLiteOpenHelper's onCreate(SQLiteDatabase db) method, I use db.execSQL() to run CREATE TABLE commands, then have tried both db.execSQL() and db.insert() to run INSERT commands on the tables I've just created. This appears to run fine, but when I try to query them I always get 0 rows returned (for debugging, the queries I'm running are simple SELECT * FROM table and checking the Cursor's getCount()).

    Anybody run into anything like this before? These commands seem to run on command-line sqlite3. Are there gotchas that I'm missing (e.g. INSERTs must/must not be semicolon terminated, or some issue involving multiple tables)? I've attached some of the code below. Thanks for your time, and let me know if I can clarify further.

        @Override
        public void onCreate(SQLiteDatabase db) {
            db.execSQL("CREATE TABLE "+ LEVEL_TABLE +" (" +
                " "+ _ID +" INTEGER PRIMARY KEY AUTOINCREMENT," +
                " level TEXT NOT NULL,"+
                " rows INTEGER NOT NULL,"+
                " cols INTEGER NOT NULL);");
            db.execSQL("CREATE TABLE "+ DYNAMICS_TABLE +" (" +
                " level_id INTEGER NOT NULL," +
                " row INTEGER NOT NULL,"+
                " col INTEGER NOT NULL,"+
                " type INTEGER NOT NULL);");
            db.execSQL("CREATE TABLE "+ SCORE_TABLE +" (" +
                " level_id INTEGER NOT NULL," +
                " score INTEGER NOT NULL,"+
                " date_achieved DATE NOT NULL,"+
                " name TEXT NOT NULL);");
            this.enterFirstLevel(db);
        }

    And a sample of the insert code I'm currently using, which gets called in enterFirstLevel() (some values hard-coded just to get it running...):

        private void insertDynamic(SQLiteDatabase db, int row, int col, int type) {
            ContentValues values = new ContentValues();
            values.put("level_id", "1");
            values.put("row", Integer.toString(row));
            values.put("col", Integer.toString(col));
            values.put("type", Integer.toString(type));
            db.insertOrThrow(DYNAMICS_TABLE, "col", values);
        }

    Finally, query code looks like this:

        private Cursor fetchLevelDynamics(int id) {
            SQLiteDatabase db = this.leveldata.getReadableDatabase();
            try {
                String fetchQuery = "SELECT * FROM " + DYNAMICS_TABLE;
                String[] queryArgs = new String[0];
                Cursor cursor = db.rawQuery(fetchQuery, queryArgs);
                Activity activity = (Activity) this.context;
                activity.startManagingCursor(cursor);
                return cursor;
            } finally {
                db.close();
            }
        }
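
    One likely explanation, sketched: the finally block closes the database before the returned Cursor is ever read, and closing a SQLiteDatabase invalidates its open cursors, which would produce exactly the 0-row counts described. Keeping the database open while the cursor is in use (and closing it later, e.g. in onDestroy()) may be all that's needed:

        // Sketch: don't close the database while a Cursor from it is still live;
        // the managed cursor is released along with the Activity instead.
        private Cursor fetchLevelDynamics(int id) {
            SQLiteDatabase db = this.leveldata.getReadableDatabase();
            Cursor cursor = db.rawQuery("SELECT * FROM " + DYNAMICS_TABLE, null);
            Activity activity = (Activity) this.context;
            activity.startManagingCursor(cursor);
            return cursor;
        }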

  • embed a jquery script after jquery is loaded by widget

    - by matthew k
    http://stackoverflow.com/a/6065421 was helpful to see how to confirm jQuery has been loaded. My widget will need a class that was written using jQuery. May I have some assistance on embedding this other class built using jQuery? Thank you. Below is the snippet from the above link, with my code added in the final portion as noted in the code comments:

        (function(window, document, version, callback) {
            var j, d;
            var loaded = false;
            if (!(j = window.jQuery) || version > j.fn.jquery || callback(j, loaded)) {
                var script = document.createElement("script");
                script.type = "text/javascript";
                script.src = "/media/jquery.js";
                script.onload = script.onreadystatechange = function() {
                    if (!loaded && (!(d = this.readyState) || d == "loaded" || d == "complete")) {
                        callback((j = window.jQuery).noConflict(1), loaded = true);
                        j(script).remove();
                    }
                };
                document.documentElement.childNodes[0].appendChild(script)
            }
        })(window, document, "1.3", function($, jquery_loaded) {
            // my code added below
            var script_tag = document.createElement('script');
            script_tag.setAttribute("type", "text/javascript");
            script_tag.setAttribute("src", "http://mysite.com/widget/slides.jquery.js");
            (document.getElementsByTagName("head")[0] || document.documentElement).appendChild(script_tag);
            $('#slides').slides({}); // this line gives an error.
        });

    Right now, I am trying the following based on the response(s) provided to this question (the line that throws an error is noted with a comment):

        // this function is called after jquery being embedded has been confirmed.
        // {mysite} placeholder is nonexistent in actual code.
        function main() {
            jQuery(document).ready(function($) {
                var css_link = $("<link>", {
                    rel: "stylesheet",
                    type: "text/css",
                    href: "http://mysite/widget/widget.css"
                });
                css_link.appendTo('head');
                $('#crf_widget').after('<div id="crf_widget_container"></div>');

                /******* Load HTML *******/
                var jsonp_url = "http://mysite/widget.php?callback=?";
                $.getJSON(jsonp_url, function(data) {
                    $('#crf_widget_container').html(data);
                    $('#category_sel').change(function() {
                        alert(this.value);
                    });
                    $.getScript("http://mysite/widget/slides.jquery.js", function(data, textStatus, jqxhr) {
                        alert(1);                // fires ok
                        $('#slides').slides({}); // errors
                    });
                });
            });
        }
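
    A hedged guess at what's failing, with a sketch: the loader calls jQuery.noConflict(1), which removes window.jQuery, so slides.jquery.js has no global jQuery to register itself against - and the hand-appended <script> tag loads asynchronously anyway, so .slides() runs before the plugin file does. Restoring a global inside the loader callback and relying on getScript's completion callback would address both:

        // Sketch only: `$` here is the jQuery instance handed to the loader callback.
        function main($) {
            window.jQuery = $; // give the plugin a global to attach itself to
            $.getScript("http://mysite.com/widget/slides.jquery.js", function() {
                $('#slides').slides({}); // runs only after the plugin has loaded
            });
        }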

  • SQL Server 2008: If Multiple Values Set In Other Multiple Values Set

    - by AJH
    In SQL, is there any way to accomplish something like this? This is based off a report built in SQL Server Report Builder, where the user can specify multiple text values as a single report parameter. The query for the report grabs all of the values the user selected and stores them in a single variable. I need a way for the query to return only records that have associations to EVERY value the user specified.

        -- Assume there's a table of Elements with thousands of entries.
        -- Now we declare a list of properties for those Elements to be associated with.
        create table #masterTable (
            ElementId int,
            Text varchar(10)
        )
        insert into #masterTable (ElementId, Text) values (1, 'Red');
        insert into #masterTable (ElementId, Text) values (1, 'Coarse');
        insert into #masterTable (ElementId, Text) values (1, 'Dense');
        insert into #masterTable (ElementId, Text) values (2, 'Red');
        insert into #masterTable (ElementId, Text) values (2, 'Smooth');
        insert into #masterTable (ElementId, Text) values (2, 'Hollow');
        -- Element 1 is Red, Coarse, and Dense. Element 2 is Red, Smooth, and Hollow.
        -- The real table is actually much much larger than this; this is just an example.

        -- This is me trying to replicate how SQL Server Report Builder treats
        -- report parameters in its queries. The user selects one, some, all,
        -- or no properties from a list. The written query treats the user's
        -- selections as a single variable called @Properties.

        -- Example scenario 1: User only wants to see Elements that are BOTH Red and Dense.
        select e.*
        from Elements e
        where (@Properties) -- ideally a set containing only Red and Dense
        in (select Text from #masterTable where ElementId = e.Id) -- ideally a set containing only Red, Coarse, and Dense
        -- Both Red and Dense are within Element 1's properties (Red, Coarse, Dense),
        -- so Element 1 gets returned, but not Element 2.

        -- Example scenario 2: User only wants to see Elements that are BOTH Red and Hollow.
        select e.*
        from Elements e
        where (@Properties) -- ideally a set containing only Red and Hollow
        in (select Text from #masterTable where ElementId = e.Id)
        -- Both Red and Hollow are within Element 2's properties (Red, Smooth, Hollow),
        -- so Element 2 gets returned, but not Element 1.

        -- Example scenario 3: User only picked the Red option.
        select e.*
        from Elements e
        where (@Properties) -- ideally a set containing only Red
        in (select Text from #masterTable where ElementId = e.Id)
        -- Red is within both Element 1 and Element 2's properties,
        -- so both Element 1 and Element 2 get returned.

    The above syntax doesn't actually work because SQL doesn't seem to allow multiple values on the left side of the "in" comparison. The error that returns:

        Subquery returned more than 1 value. This is not permitted when the subquery
        follows =, !=, <, <= , >, >= or when the subquery is used as an expression.

    Am I even on the right track here? Sorry if the example looks long-winded or confusing.
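
    One standard shape for this requirement ("relational division"), sketched on the assumption that the selected values can be materialized into a table (a #selected temp table here; Report Builder multi-value parameters can also be fed through a split function): count how many of the selections each element matches and keep only the elements that match all of them.

        -- Sketch: #selected holds the user's chosen Text values.
        select e.*
        from Elements e
        where (select count(distinct m.Text)
               from #masterTable m
               join #selected s on s.Text = m.Text
               where m.ElementId = e.Id)
            = (select count(*) from #selected);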

  • .NET: Is there a way to finagle a default namespace in an XPath 1.0 query?

    - by Cheeso
    I'm building a tool that performs XPath 1.0 queries on XHTML documents. The requirement to use a namespace prefix in the query is killing me. The query looks like this:

        html/body/div[@class='contents']/div[@class='body']/
        div[@class='pgdbbyauthor']/h2[a[@name][starts-with(.,'Quick')]]/
        following-sibling::ul[1]/li/a

    (all on one line) ...which is bad enough, except because it's XPath 1.0, I need to use an explicit namespace prefix on each QName, so it looks like this:

        ns1:html/ns1:body/ns1:div[@class='contents']/ns1:div[@class='body']/
        ns1:div[@class='pgdbbyauthor']/ns1:h2[ns1:a[@name][starts-with(.,'Quick')]]/
        following-sibling::ns1:ul[1]/ns1:li/ns1:a

    To set up the query, I do something like this:

        var xpathDoc = new XPathDocument(new StringReader(theText));
        var nav = xpathDoc.CreateNavigator();
        var xmlns = new XmlNamespaceManager(nav.NameTable);
        foreach (string prefix in xmlNamespaces.Keys)
            xmlns.AddNamespace(prefix, xmlNamespaces[prefix]);
        XPathNodeIterator selection = nav.Select(xpathExpression, xmlns);

    But what I want is for the xpathExpression to use the implicit default namespace. Is there a way for me to transform the unadorned XPath expression, after it's been written, to inject a namespace prefix for each element name in the query? I'm thinking, anything between two slashes, I could inject a prefix there. Excepting of course axis names like "parent::" and "preceding-sibling::". And wildcards. That's what I mean by "finagle a default namespace". Is this hack gonna work?

    Addendum

    Here's what I mean. Suppose I have an XPath expression, and before passing it to nav.Select(), I transform it. Something like this:

        string FixupWithDefaultNamespace(string expr)
        {
            string s = expr;
            s = Regex.Replace(s, "^(?!::)([^/:]+)(?=/)", "ns1:$1");                        // beginning
            s = Regex.Replace(s, "/([^/:]+)(?=/)", "/ns1:$1");                             // stanza
            s = Regex.Replace(s, "::([A-Za-z][^/:*]*)(?=/)", "::ns1:$1");                  // axis specifier
            s = Regex.Replace(s, "\\[([A-Za-z][^/:*\\(]*)(?=[\\[\\]])", "[ns1:$1");        // predicate
            s = Regex.Replace(s, "/([A-Za-z][^/:]*)(?!<::)$", "/ns1:$1");                  // end
            s = Regex.Replace(s, "^([A-Za-z][^/:]*)$", "ns1:$1");                          // edge case
            s = Regex.Replace(s, "([-A-Za-z]+)\\(([^/:\\.,\\)]+)(?=[,\\)])", "$1(ns1:$2"); // xpath functions
            return s;
        }

    This actually works for simple cases I tried. To use the example from above: if the input is the first XPath expression, the output I get is the second one, with all the ns1 prefixes. The real question is: is it hopeless to expect this Regex.Replace approach to work as the XPath expressions get more complicated?
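
    A regex-free alternative worth noting (a sketch, with an obvious verbosity trade-off): XPath 1.0 can match elements regardless of namespace by testing local-name() in a predicate, so no prefix - and no XmlNamespaceManager - is needed at all:

        *[local-name()='html']/*[local-name()='body']/*[local-name()='div'][@class='contents']

    The same transformation idea could emit this form instead of ns1: prefixes, at the cost of much longer expressions.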

  • Remote PostgreSQL - extremely slow

    - by Muffinbubble
    Hi, I have set up PostgreSQL on a VPS I own - the software that accesses the database is a program called PokerTracker. PokerTracker logs all your hands and statistics whilst playing online poker. I wanted this accessible from several different computers, so I decided to install it on my VPS, and after a few hiccups I managed to get it connecting without errors. However, the performance is dreadful. I have done tons of research on 'remote postgresql slow' etc. and am yet to find an answer, so am hoping someone is able to help.

    Things to note:

    - The query I am trying to execute is very small. Whilst connecting locally on the VPS, the query runs instantly. While running it remotely, it takes about 1 minute and 30 seconds to run the query.
    - The VPS is running at 100Mbps and the computer I'm connecting from is on an 8MB line.
    - The network communication between the two is almost instant. I am able to remotely connect fine with no lag whatsoever, and I am hosting several websites running MSSQL, and all their queries run instantly whether connected remotely or locally, so it seems specific to PostgreSQL.
    - I'm running their newest version of the software and the newest compatible version of PostgreSQL.
    - The database is a new database, containing hardly any data, and I've run vacuum/analyze etc., all to no avail; I see no improvements.

    I don't understand how MSSQL can query almost instantly yet PostgreSQL struggles so much. I am able to telnet to port 5432 on the VPS IP with no problems, and as I say the query does execute, it just takes an extremely long time. What I do notice is on the router, while the query is running, hardly any bandwidth is being used - but then again I wouldn't expect it to be for a simple query, so am not sure if this is the issue. I've tried connecting remotely on 3 different networks now (including different routers) but the problem remains. Connecting remotely via another machine on the LAN is instant. I have also edited the postgres conf file to allow for more memory/buffers etc., but I don't think this is the problem - what I am asking it to do is very simple - it shouldn't be intensive at all.

    Thanks, Ricky

  • PHP - JSON Steam API query

    - by Hunter
    First time using JSON, and I've just been working away at my dissertation; I'm integrating a few features from the Steam API. Now I'm a little bit confused as to how to create arrays.

        function test_steamAPI() {
            $api = ('http://api.steampowered.com/ISteamUser/GetPlayerSummaries/v0002/?key='.get_Steam_api().'&steamids=76561197960435530');
            $test = decode_url($api);
            var_dump($test['response']['players'][0]['personaname']['steamid']);
        }

        // Function to decode and return the data.
        function decode_url($url) {
            $decodeURL = $url;
            $data = file_get_contents($url);
            $data_output = json_decode($data, true);
            return $data_output;
        }

    So I've written a simple method to decode JSON, as I'll be doing a fair bit, but I'm just wondering the best way to print out arrays. I can't for the life of me get it to print more than 1 element without it returning an error, e.g.:

        Warning: Illegal string offset 'steamid' in /opt/lampp/htdocs/lan/lan-includes/scripts/class.steam.php on line 48 string(1) "R"

    So I can print one element, and if I add another it returns errors.

    EDIT -- Thanks for the help. This was my solution:

        function test_steamAPI() {
            $api = ('http://api.steampowered.com/ISteamUser/GetPlayerSummaries/v0002/?key='.get_Steam_api().'&steamids=76561197960435530,76561197960435530');
            $data = decode_url($api);
            foreach ($data['response']['players'] as $player) {
                echo "Steam id:" . $player['steamid'] . "\n";
                echo "Community visibility :" . $player['communityvisibilitystate'] . "\n";
                echo "Player profile" . $player['profileurl'] . "\n";
            }
        }

        // Function to decode and return the data.
        function decode_url($url) {
            $decodeURL = $url;
            $json = file_get_contents($decodeURL);
            $data_output = json_decode($json, true);
            return $data_output;
        }

    I worked this out by taking a look at the data and a couple of JSON examples. This returns an array based on the Steam API URL (it works for multiple queries... just FYI) and you can insert loops inside for items etc. (if anyone searches for this).

  • Approach for authentication and storing user details.

    - by cappuccino
    Hey folks, I am using the Zend Framework, but my question is broadly about sessions / databases / auth (PHP MySQL). Currently this is my approach to authentication:

    1) User signs in; the details are checked in the database. Standard stuff really.

    2) If the details are correct, only the user's unique ID is stored in the session, along with a security token (user unique ID + IP + browser info + salt). The session is written to the filesystem. I've been reading around, and many are saying that storing stuff in sessions is not a good idea, and that you should really only write a unique ID which refers back to the user's details, plus a security token to prevent session hijacking. So this is the approach I've taken; I used to write the user's details in the session, but I've moved that out. Wanted to know your opinions on this. I'm keeping sessions in the filesystem since I don't run on multiple servers, and since I'm only writing a tiny tiny bit of data to sessions, I thought that performance would be greater keeping sessions in the filesystem, to reduce load on the database. Once the session is written on authentication, it really is only read-only from then on.

    3) The rest of the user's details (like subscription details, permissions, account info etc.) are cached in the filesystem (this can always be easily moved to memory if I wanted even more performance). So rather than keeping the user's details in the session, the user's details are cached in the file system. I'm using Zend_Cache and the unique cache id is something like md5(/cache/auth/2892); the number is the unique id of the user. I guess the benefit of this method is that once the user is logged in, there are essentially no database queries being run to get the user's details. Just wondering if this approach is better than keeping the whole lot in the session...

    4) As the user moves throughout the site, the only things that are checked are the ID in the session and the security token.

    So, overall, the questions are: 1) is the filesystem more efficient than a database for this purpose, 2) have I taken enough security precautions, and 3) is separating user details from the session into a cached file a pointless task? Thanks.
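
    A minimal sketch of the scheme described in 2) and 4), with assumed names (AUTH_SALT is a hypothetical constant; on PHP versions without hash_equals(), substitute a constant-time comparison of your own):

        // Sketch: on login, store only the id and a derived token.
        $_SESSION['uid']   = $user['id'];
        $_SESSION['token'] = hash('sha256',
            $user['id'] . $_SERVER['REMOTE_ADDR'] . $_SERVER['HTTP_USER_AGENT'] . AUTH_SALT);

        // Sketch: on each request, recompute and compare.
        $expected = hash('sha256',
            $_SESSION['uid'] . $_SERVER['REMOTE_ADDR'] . $_SERVER['HTTP_USER_AGENT'] . AUTH_SALT);
        if (!hash_equals($expected, $_SESSION['token'])) {
            session_destroy(); // possible hijack: force re-authentication
        }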

  • Implementing a scalable and high-performing web app

    - by Christopher McCann
    I have asked a few questions on here before about various things relating to this, but this is more of a consolidation question, as I would like to check I have got the gist of everything.

    I am in the middle of developing a social media web app, and although I have a lot of experience coding in Java and in PHP, I am trying things a bit differently this time. I have modularised each component of the application. So, for example, one component of the application allows users to private message each other, and I have split this off into its own private messaging service. I have also created a user data service, the purpose of which is to return data about the user - for example their name, address, age etc. - from the database. There is also another service, the friends service, which will work off the neo4j database to create a social graph. My reason for doing all this is to allow me to update separate modules when I need to - so while they mostly all run off MySQL right now, I could move one to Cassandra later if I thought it appropriate.

    The actual code of the web app is really just used for the final construction. The modules behind it don't really follow any strict REST or SOAP protocol. Basically, each method on our API is turned into a PHP procedural script. This then may make calls to other back-end code, which tends to be OO. The web app makes cURL requests to these pages and POSTs data to them or GETs data from them. These pages then return JSON where data is required.

    I'm still a little mixed up about how I actually identify which user is logged in at that moment. Do I just use sessions for that? Like, if we called the get-messages.php script, which equates to the getMessages() method for that user - returning all the private messages for that user - how would the back-end code know which user it is, since POSTing the user's ID to the script would not be secure? Anyone could do that and get all the messages. So I thought I would use sessions for it. Am I correct on this? Can anyone spot any other problems with what I am doing here? Thanks

  • How to use Facebook graph API to retrieve fan photos uploaded to wall of fan page?

    - by Joe
    I am creating an external photo gallery using PHP and the Facebook graph API. It pulls thumbnails as well as the large image from albums on our Facebook fan page. Everything works perfectly, except I'm only able to retrieve photos that an ADMIN posts to our page (graph.facebook.com/myalbumid/photos).

    Is there a way to use the graph API to load publicly uploaded photos from fans? I want to retrieve the pictures from the "Photos from" album, but trying to get the ID for the graph query is not like other albums... it looks like this: http://www.facebook.com/media/set/?set=o.116860675007039

    Another note: the only way I've come close to retrieving this data is by using the "feed" option, i.e.: graph.facebook.com/pageid/feed

    EDIT: This is about as far as I could get - it works, but has the issues stated below. Maybe someone could expand on this, or provide a better solution. (Using the FB PHP SDK)

        <?php
        require_once ('config.php');

        // get all photos for album
        $photos = $facebook->api("/YourID/tagged");

        $maxitem = 10;
        $count = 0;
        foreach ($photos['data'] as $photo) {
            if ($photo['type'] == "photo"):
                echo "<img src='{$photo['picture']}' />", "<br />";
            endif;
            $count += 1;
            if ($count >= "$maxitem") break;
        }
        ?>

    Issues with this:

    1) Because I don't know a method for graph-querying specific "types" of tags, I had to run a conditional statement to display photos.

    2) You cannot effectively use "?limit=#" with this, because as I said the "tagged" query contains all types (photo, video, and status). So if you are going for a photo gallery and wish to avoid running an entire query by using ?limit, you will lose images.

    3) The only content that shows up in the "tagged" query is from people that are not admins of the page. This isn't the end of the world, but I don't understand why Facebook wouldn't allow yourself to be shown in this data as long as you posted it "as yourself" and not as the page.

  • Problem Fetching JSON Result with jQuery in Firefox and Chrome (IE8 Works)

    - by senfo
    I'm attempting to parse JSON using jQuery and I'm running into issues. Using the code below, the data keeps coming back null:

        <!DOCTYPE html>
        <html>
        <head>
            <title>JSON Test</title>
        </head>
        <body>
            <div id="msg"></div>
            <script src="http://code.jquery.com/jquery-latest.js"></script>
            <script>
            $.ajax({
                url: 'http://datawarehouse.hrsa.gov/ReleaseTest/HGDWDataWebService/HGDWDataService.aspx?service=HC&zip=20002&radius=10&filter=8357&format=JSON',
                type: 'GET',
                dataType: 'json',
                success: function(data) {
                    $('#msg').html(data[0].title); // Always null in Firefox/Chrome. Works in IE8.
                },
                error: function(data) {
                    alert(data);
                }
            });
            </script>
        </body>
        </html>

    The JSON results look like the following:

        {"title":"HEALTHPOINT TYEE CAMPUS","link":"http://www.healthpointchc.org","id":"tag:datawarehouse.hrsa.gov,2010-04-29:/8357","org":"HEALTHPOINT TYEE CAMPUS","address":{"street-address":"4424 S. 188TH St.","locality":"Seatac","region":"Washington","postal-code":"98188-5028"},"tel":"206-444-7746","category":"Service Delivery Site","location":"47.4344818181818 -122.277672727273","update":"2010-04-28T00:00:00-05:00"}

    If I replace my URL with the Flickr API URL (http://api.flickr.com/services/feeds/photos_public.gne?tags=cat&tagmode=any&format=json&jsoncallback=?), I get back a valid JSON result that I am able to make use of. I have successfully validated my JSON at JSONLint, so I've run out of ideas as to what I might be doing wrong. Any thoughts?

    Update: I had the client switch the content type to application/json. Unfortunately, I'm still experiencing the exact same problem. I also updated my HTML and included the live URL I've been working with.

    Update 2: I just gave this a try in IE8 and it works fine. For some reason, it doesn't work in either Firefox 3.6.3 or Chrome 4.1.249.1064 (45376). I did notice a mistake with the data being returned (the developer is returning a collection of data, even for queries that will always return a single record), but it still baffles me why it doesn't work in other browsers. It might be important to note that I am working from an HTML file on my local file system. I thought it might be a XSS issue, but that doesn't explain why Flickr works.
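
    For context, a hedged sketch: the Flickr URL works because its jsoncallback=? turns the request into JSONP, which is exempt from the same-origin policy, while a plain cross-domain (or file://-to-http) XMLHttpRequest is blocked by Firefox and Chrome - which would explain the null data there but not in IE8's laxer handling. If the HRSA service can wrap its JSON in a callback (an assumption; otherwise a small same-domain proxy is the usual fallback):

        // Sketch only: assumes the service honors a callback parameter.
        $.ajax({
            url: 'http://datawarehouse.hrsa.gov/ReleaseTest/HGDWDataWebService/HGDWDataService.aspx?service=HC&zip=20002&radius=10&filter=8357&format=JSON&callback=?',
            dataType: 'jsonp',
            success: function(data) {
                $('#msg').html(data.title); // note: the sample JSON is one object, not an array
            }
        });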

  • How to Load Dependent Files on Demand + Check if They're Loaded or Not?

    - by br4inwash3r
    I'm trying to implement an assets/dependency loader that I found in an old article at 24ways.org. Most of you might be familiar with it; it's from this article by Christian Heilmann: http://24ways.org/2007/keeping-javascript-dependencies-at-bay

    I've modified the script to load CSS files as well, and it's now quite close to what I want, but I still need to do some checking to see whether an asset has been completely loaded or not. Just wondering if you guys have any ideas :) Here's what my script currently looks like:

        var assetLoader = {
            assets: {
                products: { js: 'products.js', css: 'products.css', loaded: false },
                articles: { js: 'articles.js', css: 'articles.css', loaded: false },
                [...]
                cycle: { js: 'jquery.cycle.min.js', loaded: false },
                swfobject: { js: 'jquery.swfobject.min.js', loaded: false }
            },
            add: function(asset) {
                var comp = assetLoader.assets[asset];
                var path = '/path/to/assets/';
                if (comp && comp.loaded == false) {
                    if (comp.js) {
                        // load js
                        var js = document.createElement('script');
                        js.src = path + 'js/' + comp.js;
                        js.type = 'text/javascript';
                        js.charset = 'utf-8';
                        // append to document
                        document.getElementsByTagName('body')[0].appendChild(js);
                    }
                    if (comp.css) {
                        // load css
                        var css = document.createElement('link');
                        css.rel = 'stylesheet';
                        css.href = path + 'css/' + comp.css;
                        css.type = 'text/css';
                        css.media = 'screen, projection';
                        css.charset = 'utf-8';
                        // append to document
                        document.getElementsByTagName('head')[0].appendChild(css);
                    }
                }
            },
            check: function(asset) {
                assetLoader.assets[asset].loaded = true;
            }
        }

    Christian explains this method in his article in great detail. I don't want to confuse you guys any more with my bad English :P Here's an example of how I run the script:

        ...
        // load jquery cycle plugin
        if (page == 'tvc' || page == 'products') {
            if (!assetLoader.assets.cycle.loaded) {
                assetLoader.add('cycle');
            }
        }
        // load products page assets
        if (!assetLoader.assets.products.loaded) {
            assetLoader.add('products');
        }
        ...

    This kind of approach is very problematic though, because assets load asynchronously, which means some of the code inside products.js that depends on jquery.cycle.js might continue running before jquery.cycle.js is even loaded, resulting in errors. While I'm quite aware that scripts can be attached with an onload event, I'm just not really sure how to implement it in my script. Anyone care to help me? Please... :P
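
    A hedged sketch of the missing piece: give add() an optional callback and flip the loaded flag from the script element's own load events (the same onload/onreadystatechange pattern the 24ways loader itself uses), so dependent code only runs once the file is really in:

        // Sketch: inside add(), after creating the js element; onReady is a new,
        // optional second argument to add().
        js.onload = js.onreadystatechange = function() {
            var state = this.readyState;
            if (!state || state === 'loaded' || state === 'complete') {
                assetLoader.assets[asset].loaded = true;
                if (onReady) onReady();
            }
        };

        // Usage sketch: load the plugin first, then the code that depends on it.
        assetLoader.add('cycle', function() {
            assetLoader.add('products');
        });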

  • .Net Remote Log Querying

    - by jlafay
    I have a Win Service that I'm working on that consists of the service, a WF Service (using WorkflowServiceHost), a Workflow (WorkflowApplication) that queries/processes data from a SQL Server DB, and a Comm Marshall class that handles data flow between the service and the WF.

    The WF does a lot of heavy data processing, and the original app (early VB6) logged all the processing and displayed the results on the screen of the host machine. Critical events will be committed to the event log, because I strongly believe that should be common practice: admins naturally will look there, and it already has support for remote viewing. The workflow will also need to write logging events as it processes and iterates according to our business logic, such as: records queried, records returned, processed records, etc. The data is very critical and we need to log actions as they occur. The logs are currently kept as text files on disk, and I think that is OK. Ideally I would like to record log events in XML so they're easier to query, and because that is less costly than a DB, especially since our DB servers do a lot of heavy processing anyway.

    Since we are replacing essentially a VB6 application with a robust Windows service (taking advantage of WF 4.0), it has been requested that a remote client also be created. It receives callbacks from the service after subscribing to it and being added to a collection of subscribers. Basic statistics and summaries are updated client side after receiving basic monitoring data of what is going on with the service. We would like to also provide a way to supply details when we need to examine what is going on further, because this is a long-running data processing service and issues need to be addressed immediately.

    What is the best way to implement some type of query from the client that is sent to the service and returned to the client? Would it be efficient to implement another method to expose on the service, and then have that pass the query off to some querying class/object to examine the XML files by whichever specification and then return the results to the client? That's the main concern: I don't want the service's processing to bottleneck much while this occurs. It seems that WF already auto-magically threads well for the most part, but I want to make sure this is the right way to go about it. Any suggestions/recommendations on how to architect and implement a small log-querying framework for a remote service would be awesome.

  • How to combine a list of choices to determine which select statement

    - by Larry
    I have a MySQL db and am using PHP 5.2. What I am trying to do is offer a list of options for a person to select (only 1). The chosen option will cause a SELECT, UPDATE, or DELETE statement to be run. The results of the statement do not need to be shown, although showing the old and then the new would be nice (no problems with that part tho').

    Pseudo-code:

        Assign $choice = 0
        Check the value of $choice   // This way, if it = 100, we do a break

        Select a Choice:
        1. Adjust Status Value (+60)   // $choice = 1
        2. Show all Ships              // $choice = 2
        3. Show Ships in Port          // $choice = 3
        ...
        0. $choice = "100"             // if the value = 100, quit this part

        Use either case (switch) or if/else statements to run the user's choice:
        If the choice is 1, then run the "Select" statement with the variable of $sql1 -- "SELECT ....
        If the choice is 2, then run the "Select" statement with the variable of $sql2 -- SELECT * FROM Ships
        If the choice is 3, then run the "Select" statement with the variable of $sql3 -- ...
        If the choice is 0, then we are done.

    I figured the (3) statements would be assigned in PHP as:

        $sql1 = "...";
        $sql2 = "SELECT * FROM Ships";
        $sql3 = "SELECT * FROM Ships WHERE nPort='1'";

    My idea was to use the switch statement, but I got lost on it. :( I would like the options to be available over and over again, until a quit value ($choice) is selected, in which case this particular page is done and goes back to the "Main Menu". The coding and display, if I use it, I can do; I'm just not sure how to write the way to select which one I want. It is possible that I would run all of the queries, and other times only one, so that is why I would like the choice.

    An area I get confused in is the proper forms to use, such as ' ', " ", and ...?? I'm not sure of the number of options I will end up with, but it will be more than 5 and less than 20 per page. So if I get the system down for 2-3 choices, I can replicate it for as many as I may need. And, as always, if a better way exists, I am willing to try it. Thanks again... Larry
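
    A hedged sketch of the switch the pseudo-code describes (table and column names taken from the post; the mysql_* style matches the era of the question):

        $choice = isset($_POST['choice']) ? (int)$_POST['choice'] : 0;
        $sql = null;
        switch ($choice) {
            case 1:
                $sql = "UPDATE Ships SET Status = Status + 60"; // adjust status value
                break;
            case 2:
                $sql = "SELECT * FROM Ships";                   // show all ships
                break;
            case 3:
                $sql = "SELECT * FROM Ships WHERE nPort='1'";   // ships in port
                break;
            case 100:
                // user chose to quit; fall through to the main menu
                break;
        }
        if ($sql !== null) {
            $result = mysql_query($sql);
            // display $result, then redisplay the options until 100 is chosen
        }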

  • .live event doesn't work till second click

    - by ChampionChris
    I have 2 lists on a page that are linked. When I drag an li element from list 1 to list 2, the live events on list 1 don't work on the first click, only the second click. Below is the code that adds the li (obj) to list 2:

        function AddToDropBox(obj) {
            $(obj).children(".handle").animate({ width: "20px" }).children("strong").fadeOut();
            $(obj).children("span:not(.track,.play,.handle,:has(.btn-edit))").fadeOut('fast');
            $(obj).children(".play").css("margin-right", "8px");
            $(obj).css({ "opacity": "0.0", "width": "284px" }).animate({ opacity: "1.0" });
            if ($(".sidebar-drop-box ul").children(".admin-song").length > 0) {
                $(".dropTitle").fadeOut("fast");
                $(".sidebar-drop-box ul.admin-song-list").css("min-height", "0");
            }
            if (typeof SetLinks == 'function') {
                SetLinks();
            }
            // CBG changes: adds media ID to hidden field
            // checks if there is a value in the field, then adds a comma
            if (document.getElementById("ctl00_cphBody_hfRemoveMedia").value == "" || document.getElementById("ctl00_cphBody_hfRemoveMedia").value == null) {
                document.getElementById("ctl00_cphBody_hfRemoveMedia").value = (obj).attr("mediaid");
            } else {
                var localMediaIDs = document.getElementById("ctl00_cphBody_hfRemoveMedia").value;
                document.getElementById("ctl00_cphBody_hfRemoveMedia").value = localMediaIDs + ", " + (obj).attr("mediaid");
            }
            // alert("hfid: " + document.getElementById("ctl00_cphBody_hfRemoveMedia").value);
            // END CBG modifications
        }

    This is one of the live() events that doesn't fire until the second click after the drag. This live() event is in a document.ready function():

        // Live for deleting.
        $(".btn-del").live("click", function(e) {
            DeleteItem(this);
            $(this).removeClass("btn-del").addClass("btn-add").parents("li").removeClass("alt").addClass("removed");
            var oldTxt = $(this).parents("li").find(".status").text();
            $(this).parents("li").find(".status").text("Removed").attr("oldstat", oldTxt);
            $("#timeHolder input[type=hidden]").val(($("#timeHolder input[type=hidden]").val() * 1) - ($(this).parents("li").find(".time").attr("length") * 1));
            CalculateAggregates();
            isDirty = false;
        });

    EDIT: @dreaton - I'm new to jQuery and JavaScript, so thanks for the last tip... I'm not sure what you mean about caching the queries. The delegate feature is giving me this: Microsoft JScript runtime error: Object doesn't support this property or method. This is the way I have the code:

        $('#ulPlaylist').delegate('.btn-del', 'click', function (e) {
            DeleteItem(this);
            $(this).removeClass("btn-del").addClass("btn-add").parents("li").removeClass("alt").addClass("removed");
            var oldTxt = $(this).parents("li").find(".status").text();
            $(this).parents("li").find(".status").text("Removed").attr("oldstat", oldTxt);
            $("#timeHolder input[type=hidden]").val(($("#timeHolder input[type=hidden]").val() * 1) - ($(this).parents("li").find(".time").attr("length") * 1));
            CalculateAggregates();
            isDirty = false;
        });
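
    A hedged note with a sketch: "Object doesn't support this property or method" is what an older jQuery (pre-1.4.2, before .delegate() existed) would produce, so the loaded jQuery version is worth checking first. The delegate form also wants a container that already exists at load time, so dragged-in li elements are covered without rebinding:

        // Sketch: bind once to the static list container; applies to li's added later.
        $(document).ready(function() {
            $('#ulPlaylist').delegate('.btn-del', 'click', function(e) {
                e.preventDefault();
                DeleteItem(this);
                // ...rest of the original handler unchanged...
            });
        });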

  • DB Design Pattern - Many to many classification / categorised tagging.

    - by Robin Day
    I have an existing database design that stores job vacancies. The "Vacancy" table has a number of fixed fields across all clients, such as "Title", "Description", "Salary range".

    There is an EAV design for "Custom" fields that the clients can set up themselves, such as "Manager Name" or "Working Hours". The field names are stored in a "ClientText" table and the data stored in a "VacancyClientText" table with VacancyId, ClientTextId and Value.

    Lastly, there is a many-to-many EAV design for custom tagging / categorising the vacancies with things such as locations/offices the vacancy is in, or a list of skills required. This is stored as a "ClientCategory" table listing the types of tag ("Locations, Skills"), a "ClientCategoryItem" table listing the valid values for each category (e.g. "London, Paris, New York, Rome" or "C#, VB, PHP, Python"), and finally a "VacancyClientCategoryItem" table with VacancyId and ClientCategoryItemId for each of the selected items for the vacancy. There are no limits to the number of custom fields or custom categories that the client can add.

    I am now designing a new system that is very similar to the existing system; however, I have the ability to restrict the number of custom fields a client can have, and it's being built from scratch so I have no legacy issues to deal with.

    For the custom fields my solution is simple: I have 5 additional columns on the Vacancy table called CustomField1-5. This removes one of the EAV designs.

    It is with the tagging / categorising design that I am struggling. If I limit a client to having 5 categories / types of tag, should I create 5 tables listing the possible values, "CustomCategoryItems1-5", and then an additional 5 many-to-many tables, "VacancyCustomCategoryItem1-5"? This would result in 10 tables performing the same storage as the three tables in the existing system. Also, should (heaven forbid) the requirements change so that I need 6 custom categories rather than 5, this would result in a lot of code change.

    Therefore, can anyone suggest any DB design patterns that would be more suitable for storing such data? I'm happy to stick with the EAV approach; however, the existing system has come across all the usual performance issues and complex queries associated with such a design.

    Any advice / suggestions are much appreciated. The DBMS used is SQL Server 2005; however, 2008 is an option if required for any particular pattern.

  • Hibernate Communications Link Failure in Restlet-Hibernate Based Java application powered by MySQL

    - by Vatsala
    Let me describe my question. I have a Java application with Hibernate as the DB interfacing layer over MySQL. I get the communications link failure error in my application. The occurrence of this error is a very specific case: I get this error when I leave the MySQL server unattended for more than approximately 6 hours (i.e. when there are no queries issued to MySQL for more than approximately 6 hours).

    I am pasting a top-level description of the exceptions below, and adding a pastebin link for a detailed stack trace:

        javax.persistence.PersistenceException: org.hibernate.exception.JDBCConnectionException: Cannot open connection
        - Caused by: org.hibernate.exception.JDBCConnectionException: Cannot open connection
        - Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
          The last packet successfully received from the server was 1,274,868,181,212 milliseconds ago.
          The last packet sent successfully to the server was 0 milliseconds ago.
        - Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
          The last packet successfully received from the server was 1,274,868,181,212 milliseconds ago.
          The last packet sent successfully to the server was 0 milliseconds ago.
        - Caused by: java.net.ConnectException: Connection refused: connect

    The link to the pastebin for further investigation: http://pastebin.com/4KujAmgD

    What I understand from these exception statements is that MySQL is refusing to take in any connections after a period of idle/nil activity. I have been reading up a bit about this via Google search, and came to know that one of the possible ways to overcome this is to set values for c3p0 properties, as c3p0 comes bundled with Hibernate. Specifically, I read from here http://www.mchange.com/projects/c3p0/index.html that setting the two properties idleConnectionTestPeriod and preferredTestQuery will solve this for me. But these values don't seem to have had an effect.

    Is this the correct approach to fixing this? If not, what is the right way to get over this?

    The following are related Communications Link Failure questions at stackoverflow.com, but I've not found a satisfactory answer in their answers: http://stackoverflow.com/questions/2121829/java-db-communications-link-failure and http://stackoverflow.com/questions/298988/how-to-handle-communication-link-failure

    Note 1 - I don't get this error when I am using my application continuously.

    Note 2 - I use JPA with Hibernate, and hence my hibernate.dialect etc. properties reside within the persistence.xml in the META-INF folder (does that prevent the c3p0 properties from working?)
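
    A hedged sketch of the c3p0 wiring, assuming Hibernate 3's bundled provider: the hibernate.c3p0.* properties can live in persistence.xml next to the dialect (so using JPA does not by itself prevent them from working), and idle_test_period keeps pooled connections from outliving MySQL's wait_timeout (8 hours by default, often lowered), which is the classic cause of this failure after long idle periods:

        <!-- Sketch: property names as documented for Hibernate 3 + c3p0. -->
        <property name="hibernate.connection.provider_class"
                  value="org.hibernate.connection.C3P0ConnectionProvider"/>
        <property name="hibernate.c3p0.min_size" value="5"/>
        <property name="hibernate.c3p0.max_size" value="20"/>
        <property name="hibernate.c3p0.timeout" value="1800"/>
        <property name="hibernate.c3p0.idle_test_period" value="300"/>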

  • SQL Native Client 10 Performance miserable (due to server-side cursors)

    - by namezero
    We have an application that uses ODBC via CDatabase/CRecordset in MFC (VS2010). We have two backends implemented, MSSQL and MySQL. Now, when we use MSSQL (with the Native Client 10.0), retrieving records with SELECT is dramatically slow over slow links (VPN, for example). The MySQL ODBC driver does not exhibit this nasty behavior. For example:

        CRecordset r(&m_db);
        r.Open(CRecordset::snapshot, L"SELECT a.something, b.sthelse FROM TableA AS a LEFT JOIN TableB AS b ON a.ID=b.Ref");
        r.MoveFirst();
        while (!r.IsEOF()) {
            // Retrieve
            CString strData;
            crs.GetFieldValue(L"a.something", strData);
            crs.MoveNext();
        }

    Now, with the MySQL driver, everything runs as it should. The query is returned, and everything is lightning fast. However, with the MSSQL Native Client, things slow down, because on every MoveNext() the driver communicates with the server. I think it is due to server-side cursors, but I didn't find a way to disable them. I have tried using:

        ::SQLSetConnectAttr(m_db.m_hdbc, SQL_ATTR_ODBC_CURSORS, SQL_CUR_USE_ODBC, SQL_IS_INTEGER);

    But this didn't help either. There are still long-running execs to sp_cursorfetch() et al. in SQL Profiler. I have also tried a small reference project with SQLAPI and bulk fetch, but that hangs in FetchNext() for a long time too (even if there is only one record in the result set). This, however, only happens on queries with LEFT JOINs, table-valued functions, etc. Note that the query doesn't take that long: executing the same SQL via SQL Studio over the same connection returns in a reasonable time.

    Question 1: Is it possible to somehow get the Native Client to "cache" all results locally, i.e. use local cursors in a similar fashion to what the MySQL driver seems to do? Maybe this is the wrong approach altogether, but I'm not sure how else to do this. All we want is to retrieve all data at once from a SELECT, then never talk to the server again until the next query. We don't care about recordset updates, deletes, etc. or any of that nonsense. We only want to retrieve data. We take that recordset, get all the data, and delete it.

    Question 2: Is there a more efficient way to just retrieve data in MFC with ODBC?
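
    A hedged sketch for Question 1: SQL Server typically only falls back to server-side cursors (the sp_cursorfetch traffic) when the requested cursor type demands them; a forward-only, read-only recordset is normally served as a single "default result set" streamed to the client in one round trip:

        // Sketch: forwardOnly + readOnly should avoid sp_cursorfetch round trips.
        CRecordset r(&m_db);
        r.Open(CRecordset::forwardOnly,
               L"SELECT a.something, b.sthelse FROM TableA AS a LEFT JOIN TableB AS b ON a.ID=b.Ref",
               CRecordset::readOnly);
        while (!r.IsEOF()) {
            CString strData;
            r.GetFieldValue(L"a.something", strData);
            r.MoveNext();
        }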

  • How do I add a filter button to this pagination?

    - by ClarkSKent
    Hey, I want to add a button (link) that, when clicked, will filter the pagination results. I'm new to PHP (and programming in general) and would like to add a button like 'Automotive' that, when clicked, updates the 2 MySQL queries in my pagination script, seen here:

        $record_count = mysql_num_rows(mysql_query("SELECT * FROM explore WHERE category='automotive'"));

        $get = mysql_query("SELECT * FROM explore WHERE category='automotive' LIMIT $start, $per_page");

    As you can see, the category 'automotive' is hardcoded in. I want it to be dynamic, so when a link is clicked it places whatever the id or class is into the category part of the query. This is the entire current PHP pagination script that I am using:

        <?php
        //connecting to the database
        $error = "Could not connect to the database";
        mysql_connect('localhost','root','root') or die($error);
        mysql_select_db('ajax_demo') or die($error);

        //max displayed per page
        $per_page = 2;

        //get start variable
        $start = $_GET['start'];

        //count records
        $record_count = mysql_num_rows(mysql_query("SELECT * FROM explore WHERE category='automotive'"));

        //count max pages
        $max_pages = $record_count / $per_page; //may come out as decimal

        if (!$start)
            $start = 0;

        //display data
        $get = mysql_query("SELECT * FROM explore WHERE category='automotive' LIMIT $start, $per_page");
        while ($row = mysql_fetch_assoc($get)) {
            // get data
            $name = $row['id'];
            $age = $row['site_name'];
            echo $name." (".$age.")<br />";
        }

        //setup prev and next variables
        $prev = $start - $per_page;
        $next = $start + $per_page;

        //show prev button
        if (!($start<=0))
            echo "<a href='pagi_test.php?start=$prev'>Prev</a> ";

        //show page numbers
        //set variable for first page
        $i=1;
        for ($x=0;$x<$record_count;$x=$x+$per_page) {
            if ($start!=$x)
                echo " <a href='pagi_test.php?start=$x'>$i</a> ";
            else
                echo " <a href='pagi_test.php?start=$x'><b>$i</b></a> ";
            $i++;
        }

        //show next button
        if (!($start>=$record_count-$per_page))
            echo " <a href='pagi_test.php?start=$next'>Next</a>";
        ?>
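
    A hedged sketch of the dynamic version: carry the category in the query string, whitelist it before it touches SQL (so the links stay safe from injection), and reuse it in both queries and in every pagination link:

        // Sketch: $allowed is an assumed list of your real categories.
        $allowed = array('automotive', 'electronics', 'fashion');
        $category = (isset($_GET['category']) && in_array($_GET['category'], $allowed, true))
            ? $_GET['category'] : 'automotive';

        $record_count = mysql_num_rows(mysql_query(
            "SELECT * FROM explore WHERE category='$category'"));
        $get = mysql_query(
            "SELECT * FROM explore WHERE category='$category' LIMIT $start, $per_page");

        // Filter buttons and pagination links both carry the category:
        // <a href="pagi_test.php?category=automotive&start=0">Automotive</a>
        // echo " <a href='pagi_test.php?category=$category&start=$next'>Next</a>";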

  • jQuery UI dialog + WebKit + HTML response with script

    - by Anthony Koval'
    Once again I am faced with a great problem! :) So, here is the stuff: on the client side, I have a link. By clicking on it, jQuery makes a request to the server and gets a response as HTML content, then pops up a UI dialog with that content. Here is the code of the request function:

        function preview() {
            $.ajax({
                url: "/api/builder/",
                type: "post",
                //dataType: "html",
                data: {"script_tpl": $("#widget_code").text(), "widgets": $.toJSON(mwidgets), "widx": "0"},
                success: function(data) {
                    //console.log(data)
                    $("#previewArea").dialog({
                        bgiframe: true,
                        autoOpen: false,
                        height: 600,
                        width: 600,
                        modal: true,
                        buttons: {
                            "Cancel": function() {
                                $(this).dialog('destroy');
                            }
                        }
                    });
                    //console.log(data.toString());
                    $('#previewArea').attr("innerHTML", data.toString());
                    $("#previewArea").dialog("open");
                },
                error: function() {
                    console.log("shit happens");
                }
            })
        }

    The response (data) is:

        <html>
        <head>
            <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
            <script type="text/javascript">var smakly_widget_sid = 0 ,widgets = [{"cols": "2","rows": "2","div_id": "smakly_widget","wid": "0","smakly_style": "small_image",}, ]</script>
            <script type="text/javascript" src="/media/js/smak/smakme.js"></script>
        </head>
        <body>
            preview
            <div id="smakly_widget" style="width:560px;height:550px"></div>
        </body>
        </html>

    As you see, there is a script to load: smakme.js. Somehow it doesn't execute in WebKit-based browsers (I tried in Safari and Chrome), but in Firefox, Internet Explorer and Opera it works as expected! Here is that script:

        String.prototype.format = function() {
            var pattern = /\{\d+\}/g;
            var args = arguments;
            return this.replace(pattern, function(capture) {
                return args[capture.match(/\d+/)];
            });
        }

        var turl = "/widget"

        var widgetCtrl = new(function() {
            this.render_widget = function(w, content) {
                $("#" + w.div_id).append(content);
            }
            this.build_widgets = function() {
                for (var widx in widgets) {
                    var w = widgets[widx],
                        iurl = '{0}?sid={1}&wid={2}&w={3}&h={4}&referer=http://ya.ru&thrash={5}'.format(
                            turl, smakly_widget_sid, w.wid, w.cols, w.rows,
                            Math.floor(Math.random() * 1000).toString()),
                        content = $('<iframe src="{0}" width="100%" height="100%"></iframe>'.format(iurl));
                    this.render_widget(w, content);
                }
            }
        })

        $(document).ready(function() {
            widgetCtrl.build_widgets();
        })

    Is that some security issue, or anything else?
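
    A hedged sketch of a likely fix: assigning markup via .attr("innerHTML", ...) amounts to a raw innerHTML write, which bypasses jQuery's script handling, and WebKit does not execute <script> elements inserted that way (whatever leniency the other engines showed here isn't portable). jQuery's .html() extracts and evaluates embedded scripts explicitly:

        // Sketch: let jQuery insert the markup so scripts in `data` get evaluated.
        success: function(data) {
            $('#previewArea').html(data);
            $("#previewArea").dialog({ /* same options as above */ });
            $("#previewArea").dialog("open");
        }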

  • Is order of parameters for database Command object really important?

    - by nawfal
    I was debugging database operation code and I found that the proper UPDATE was never happening, though the code never failed as such. This is the code:

        condb.Open();
        OleDbCommand dbcom = new OleDbCommand("UPDATE Word SET word=?,sentence=?,mp3=? WHERE id=? AND exercise_id=?", condb);
        dbcom.Parameters.AddWithValue("id", wd.ID);
        dbcom.Parameters.AddWithValue("exercise_id", wd.ExID);
        dbcom.Parameters.AddWithValue("word", wd.Name);
        dbcom.Parameters.AddWithValue("sentence", wd.Sentence);
        dbcom.Parameters.AddWithValue("mp3", wd.Mp3);

    But after some tweaking this worked:

        condb.Open();
        OleDbCommand dbcom = new OleDbCommand("UPDATE Word SET word=?,sentence=?,mp3=? WHERE id=? AND exercise_id=?", condb);
        dbcom.Parameters.AddWithValue("word", wd.Name);
        dbcom.Parameters.AddWithValue("sentence", wd.Sentence);
        dbcom.Parameters.AddWithValue("mp3", wd.Mp3);
        dbcom.Parameters.AddWithValue("id", wd.ID);
        dbcom.Parameters.AddWithValue("exercise_id", wd.ExID);

    Why is it so important that the parameters in the WHERE clause be given last in the case of an OleDb connection? Having worked with MySQL previously, I could (and usually do) write the parameters of the WHERE clause first, because that's more logical to me. Is parameter order important when querying a database in general? Some performance concern or something? Is there a specific order to be maintained in the case of other databases like DB2, Sqlite etc.?

    Update: I got rid of ? and included proper names, with and without @. The order is really important. In both cases, only when the WHERE clause parameters were mentioned last did the actual update happen. To make matters worse, in complex queries it's hard to know which order Access is expecting, and in all situations where the order is changed, the query doesn't do its intended duty, with no warning/error!!
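
    For clarity, a short sketch of the underlying rule: the OLE DB/Access providers ignore parameter names entirely and bind strictly by position, so the Parameters collection must be filled in the exact order the ? placeholders (or even named markers) appear in the command text:

        // Sketch: the order of AddWithValue calls mirrors placeholder order;
        // the names are purely decorative to OleDb.
        using (OleDbCommand dbcom = new OleDbCommand(
            "UPDATE Word SET word=?, sentence=?, mp3=? WHERE id=? AND exercise_id=?", condb))
        {
            dbcom.Parameters.AddWithValue("@word", wd.Name);          // 1st ?
            dbcom.Parameters.AddWithValue("@sentence", wd.Sentence);  // 2nd ?
            dbcom.Parameters.AddWithValue("@mp3", wd.Mp3);            // 3rd ?
            dbcom.Parameters.AddWithValue("@id", wd.ID);              // 4th ?
            dbcom.Parameters.AddWithValue("@exercise_id", wd.ExID);   // 5th ?
            dbcom.ExecuteNonQuery();
        }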

  • Linux configurations that would affect Java memory usage?

    - by wmacura
    Hi. Background: I have a set of Java background workers I start as part of my webapp. I develop locally on Ubuntu 10.10 and deploy to an Ubuntu 10.04 LTS server (a Media Temple (ve) instance). They're both running the same JVM: Sun JVM 1.6.0_22-b04. As part of the initialization script, each worker is started with explicit Xmx, Xms, and XX:MaxPermGen settings. Yet somehow locally all 10 workers use 250MB, while on the server they use more than 2.7GB.

    I don't know how to begin to track this down. I thought the Ubuntu (and thus, kernel) version might make a difference, but I tried an old 10.04 VM and it behaves as expected. I've noticed that the machine does not seem to ever use memory for buffer or cache (according to htop), which seems a bit strange, but perhaps normal for a server? (edited)

    Some info:

        (server) root@devel:/app/axir/target# uname -a
        Linux devel 2.6.18-028stab069.5 #1 SMP Tue May 18 17:26:16 MSD 2010 x86_64 GNU/Linux

        (local) wiktor@beastie:~$ uname -a
        Linux beastie 2.6.35-25-generic #44-Ubuntu SMP Fri Jan 21 17:40:44 UTC 2011 x86_64 GNU/Linux

    Comparing ps output (ps -eo "ppid,pid,cmd,rss,sz,vsz"):

        (local)
         PPID   PID CMD                          RSS     SZ     VSZ
         1588  1615 java -cp axir-distribution.  25484 234382  937528
         1615  1631 java -cp /home/wiktor/Code/  83472 163059  652236
         1615  1657 java -cp /home/wiktor/Code/  70624  89135  356540
         1615  1658 java -cp /home/wiktor/Code/  37652  77625  310500
         1615  1669 java -cp /home/wiktor/Code/  38096  77733  310932
         1615  1675 java -cp /home/wiktor/Code/  37420  61395  245580
         1615  1684 java -cp /home/wiktor/Code/  38000  77736  310944
         1615  1703 java -cp /home/wiktor/Code/  39180  78060  312240
         1615  1712 java -cp /home/wiktor/Code/  38488  93882  375528
         1615  1719 java -cp /home/wiktor/Code/  38312  77874  311496
         1615  1726 java -cp /home/wiktor/Code/  38656  77958  311832
         1615  1727 java -cp /home/wiktor/Code/  78016  89429  357716

        (server)
         PPID   PID CMD                          RSS      SZ     VSZ
        22522 23560 java -cp axir-distribution.  24860 285196 1140784
        23560 23585 java -cp /app/axir/target/a 100764 161629  646516
        23560 23667 java -cp /app/axir/target/a  72408  92682  370728
        23560 23670 java -cp /app/axir/target/a  39948  97671  390684
        23560 23674 java -cp /app/axir/target/a  40140  81586  326344
        23560 23739 java -cp /app/axir/target/a  39688  81542  326168

    They look very similar. In fact, the question now is why, if I add up the virtual memory usage on the server (3.2GB), it more closely reflects the 2.4GB of memory used (according to free), yet locally the virtual memory used adds up to a much more substantial 4.7GB but only actually uses ~250MB. It seems that perhaps memory isn't being shared as aggressively (if that's even possible).

    Thank you for your help, Wiktor
