Search Results

Search found 20873 results on 835 pages for 'url fetch'.


  • How to compare page views/visitors to site totals in Google Analytics?

    - by frntk
    I want to measure the effect on total traffic (views or visitors) of a single URL across a time frame. For example, say I published a blog post in July 2012. I want to see what share of the site's total traffic is going to this particular post, on a month-by-month graph from July 2012 to date. Is there a way to do this?

    Edit: I should clarify that what I'd like to do is generate a month-to-month graph, similar to the one you get when comparing a metric over a period against the previous equivalent period (this year vs. last year, for example), but comparing a particular URL's traffic to the site's total traffic. In other words, I can get to the part where I can see the URL's traffic for the timeframe I selected, and it shows how much of the total traffic corresponds to the current URL. But I'd like to add a second line to the graph representing the site's total pageviews, rather than just a different metric for the same URL. Thanks!


  • Why is Varnish not caching?

    - by Justin
    I am troubleshooting the setup of Varnish 3.x on my Ubuntu server. I'm running Drupal 7 on two sites set up on the box, via name-based vhosts. Before trying to get Varnish to play nice with Drupal, I'm trying to just get Varnish to serve a PNG from cache. Here are the headers I get from a curl -I request of the PNG file:

        HTTP/1.1 200 OK
        Server: Apache/2.2.22 (Ubuntu)
        Last-Modified: Sun, 07 Oct 2012 21:18:59 GMT
        ETag: "a57c2-3850-4cb7ea73db6c0"
        Accept-Ranges: bytes
        Content-Length: 14416
        Cache-Control: max-age=1209600
        Expires: Thu, 25 Oct 2012 22:55:14 GMT
        Content-Type: image/png
        Accept-Ranges: bytes
        Date: Thu, 11 Oct 2012 22:55:14 GMT
        X-Varnish: 1766703058
        Age: 0
        Via: 1.1 varnish
        Connection: keep-alive
        X-Varnish-Cache: MISS

    Here is the Varnish VCL file I'm using (it's a default VCL configuration designed for Drupal):

        # Default backend definition. Set this to point to your content server.
        backend default {
          .host = "127.0.0.1";
          .port = "8080";
        }

        # Respond to incoming requests.
        sub vcl_recv {
          # Use anonymous, cached pages if all backends are down.
          if (!req.backend.healthy) {
            unset req.http.Cookie;
          }
          # Allow the backend to serve up stale content if it is responding slowly.
          set req.grace = 6h;
          # Pipe these paths directly to Apache for streaming.
          #if (req.url ~ "^/admin/content/backup_migrate/export") {
          #  return (pipe);
          #}
          # Do not cache these paths.
          if (req.url ~ "^/status\.php$" ||
              req.url ~ "^/update\.php$" ||
              req.url ~ "^/admin$" ||
              req.url ~ "^/admin/.*$" ||
              req.url ~ "^/flag/.*$" ||
              req.url ~ "^.*/ajax/.*$" ||
              req.url ~ "^.*/ahah/.*$") {
            return (pass);
          }
          # Do not allow outside access to cron.php or install.php.
          #if (req.url ~ "^/(cron|install)\.php$" && !client.ip ~ internal) {
          #  # Have Varnish throw the error directly.
          #  error 404 "Page not found.";
          #  # Use a custom error page that you've defined in Drupal at the path "404".
          #  set req.url = "/404";
          #}
          # Always cache the following file types for all users. This list of
          # extensions appears twice, once here and again in vcl_fetch, so make
          # sure you edit both and keep them equal.
          if (req.url ~ "(?i)\.(pdf|asc|dat|txt|doc|xls|ppt|tgz|csv|png|gif|jpeg|jpg|ico|swf|css|js)(\?.*)?$") {
            unset req.http.Cookie;
          }
          # Remove all cookies that Drupal doesn't need to know about. We explicitly
          # list the ones that Drupal does need, the SESS and NO_CACHE. If, after
          # running this code we find that either of these two cookies remains, we
          # will pass as the page cannot be cached.
          if (req.http.Cookie) {
            # 1. Append a semi-colon to the front of the cookie string.
            # 2. Remove all spaces that appear after semi-colons.
            # 3. Match the cookies we want to keep, adding the space we removed
            #    previously back. (\1) is first matching group in the regsuball.
            # 4. Remove all other cookies, identifying them by the fact that they
            #    have no space after the preceding semi-colon.
            # 5. Remove all spaces and semi-colons from the beginning and end of
            #    the cookie string.
            set req.http.Cookie = ";" + req.http.Cookie;
            set req.http.Cookie = regsuball(req.http.Cookie, "; +", ";");
            set req.http.Cookie = regsuball(req.http.Cookie, ";(SESS[a-z0-9]+|SSESS[a-z0-9]+|NO_CACHE)=", "; \1=");
            set req.http.Cookie = regsuball(req.http.Cookie, ";[^ ][^;]*", "");
            set req.http.Cookie = regsuball(req.http.Cookie, "^[; ]+|[; ]+$", "");
            if (req.http.Cookie == "") {
              # If there are no remaining cookies, remove the cookie header. If there
              # aren't any cookie headers, Varnish's default behavior will be to
              # cache the page.
              unset req.http.Cookie;
            } else {
              # If there are any cookies left (a session or NO_CACHE cookie), do not
              # cache the page. Pass it on to Apache directly.
              return (pass);
            }
          }
        }

        # Set a header to track a cache HIT/MISS.
        sub vcl_deliver {
          if (obj.hits > 0) {
            set resp.http.X-Varnish-Cache = "HIT";
          } else {
            set resp.http.X-Varnish-Cache = "MISS";
          }
        }

        # Code determining what to do when serving items from the Apache servers.
        # beresp == Back-end response from the web server.
        sub vcl_fetch {
          # We need this to cache 404s, 301s, 500s. Otherwise, depending on backend,
          # but definitely in Drupal's case, these responses are not cacheable by
          # default.
          if (beresp.status == 404 || beresp.status == 301 || beresp.status == 500) {
            set beresp.ttl = 10m;
          }
          # Don't allow static files to set cookies.
          # (?i) denotes case insensitive in PCRE (perl compatible regular
          # expressions). This list of extensions appears twice, once here and
          # again in vcl_recv, so make sure you edit both and keep them equal.
          if (req.url ~ "(?i)\.(pdf|asc|dat|txt|doc|xls|ppt|tgz|csv|png|gif|jpeg|jpg|ico|swf|css|js)(\?.*)?$") {
            unset beresp.http.set-cookie;
          }
          # Allow items to be stale if needed.
          set beresp.grace = 6h;
        }

        # In the event of an error, show friendlier messages.
        sub vcl_error {
          # Redirect to some other URL in the case of a homepage failure.
          #if (req.url ~ "^/?$") {
          #  set obj.status = 302;
          #  set obj.http.Location = "http://backup.example.com/";
          #}
          # Otherwise redirect to the homepage, which will likely be in the cache.
          set obj.http.Content-Type = "text/html; charset=utf-8";
          synthetic {"
            <html>
            <head>
              <title>Page Unavailable</title>
              <style>
                body { background: #303030; text-align: center; color: white; }
                #page { border: 1px solid #CCC; width: 500px; margin: 100px auto 0; padding: 30px; background: #323232; }
                a, a:link, a:visited { color: #CCC; }
                .error { color: #222; }
              </style>
            </head>
            <body onload="setTimeout(function() { window.location = '/' }, 5000)">
              <div id="page">
                <h1 class="title">Page Unavailable</h1>
                <p>The page you requested is temporarily unavailable.</p>
                <p>We're redirecting you to the <a href="/">homepage</a> in 5 seconds.</p>
                <div class="error">(Error "} + obj.status + " " + obj.response + {")</div>
              </div>
            </body>
            </html>
          "};
          return (deliver);
        }

    I'm getting a MISS and an Age of 0 every time. If I'm understanding correctly, this means the file isn't being returned from Varnish's cache. Is there a problem with my Varnish config?


  • Problems with Update Manager

    - by user65965
    Whenever I try to update with Update Manager I get the following errors:

        W:Failed to fetch http://archive.ubuntu.com/ubuntu/dists/precise/Release  Unable to find expected entry 'commercial/source/Sources' in Release file (Wrong sources.list entry or malformed file)
        W:Failed to fetch http://archive.ubuntu.com/ubuntu/dists/precise-updates/Release  Unable to find expected entry 'commercial/source/Sources' in Release file (Wrong sources.list entry or malformed file)
        W:Failed to fetch http://archive.ubuntu.com/ubuntu/dists/precise-backports/Release  Unable to find expected entry 'commercial/source/Sources' in Release file (Wrong sources.list entry or malformed file)
        W:Failed to fetch http://archive.ubuntu.com/ubuntu/dists/precise-security/Release  Unable to find expected entry 'commercial/source/Sources' in Release file (Wrong sources.list entry or malformed file)
        W:Failed to fetch http://ppa.launchpad.net/iefremov/ppa/ubuntu/dists/precise/main/source/Sources  404 Not Found
        W:Failed to fetch http://ppa.launchpad.net/iefremov/ppa/ubuntu/dists/precise/main/binary-amd64/Packages  404 Not Found
        W:Failed to fetch http://ppa.launchpad.net/iefremov/ppa/ubuntu/dists/precise/main/binary-i386/Packages  404 Not Found
        E:Some index files failed to download. They have been ignored, or old ones used instead.

    Any help would be much appreciated, thank you.

    Thank you very much Eliah. I'm still pretty new to Ubuntu. Here's the output I got from the terminal:

        No LSB modules are available.
        Distributor ID: Ubuntu
        Description:    Ubuntu 12.04 LTS
        Release:        12.04
        Codename:       precise

        # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
        # newer versions of the distribution.
        deb http://archive.ubuntu.com/ubuntu precise main restricted commercial
        deb-src http://archive.ubuntu.com/ubuntu precise restricted main commercial multiverse universe #Added by software-properties

        ## Major bug fix updates produced after the final release of the
        ## distribution.
        deb http://archive.ubuntu.com/ubuntu precise-updates main restricted commercial
        deb-src http://archive.ubuntu.com/ubuntu precise-updates restricted main commercial multiverse universe #Added by software-properties

        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team. Also, please note that software in universe WILL NOT receive any
        ## review or updates from the Ubuntu security team.
        deb http://archive.ubuntu.com/ubuntu precise universe
        deb http://archive.ubuntu.com/ubuntu precise-updates universe

        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team, and may not be under a free licence. Please satisfy yourself as to
        ## your rights to use the software. Also, please note that software in
        ## multiverse WILL NOT receive any review or updates from the Ubuntu
        ## security team.
        deb http://archive.ubuntu.com/ubuntu precise multiverse
        deb http://archive.ubuntu.com/ubuntu precise-updates multiverse

        ## N.B. software from this repository may not have been tested as
        ## extensively as that contained in the main release, although it includes
        ## newer versions of some applications which may provide useful features.
        ## Also, please note that software in backports WILL NOT receive any review
        ## or updates from the Ubuntu security team.
        deb http://archive.ubuntu.com/ubuntu precise-backports main restricted universe multiverse commercial
        deb-src http://archive.ubuntu.com/ubuntu precise-backports main restricted universe multiverse commercial #Added by software-properties

        deb http://archive.ubuntu.com/ubuntu precise-security main restricted commercial
        deb-src http://archive.ubuntu.com/ubuntu precise-security restricted main commercial multiverse universe #Added by software-properties
        deb http://archive.ubuntu.com/ubuntu precise-security universe
        deb http://archive.ubuntu.com/ubuntu precise-security multiverse

        ## Uncomment the following two lines to add software from Canonical's
        ## 'partner' repository.
        ## This software is not part of Ubuntu, but is offered by Canonical and the
        ## respective vendors as a service to Ubuntu users.
        deb http://archive.canonical.com/ubuntu oneiric partner
        deb-src http://archive.canonical.com/ubuntu precise partner

        ## This software is not part of Ubuntu, but is offered by third-party
        ## developers who want to ship their latest software.
        deb http://extras.ubuntu.com/ubuntu precise main
        deb-src http://extras.ubuntu.com/ubuntu precise main

        ## This is a 3rd party script to install and update Oracle Java
        deb http://www.duinsoft.nl/pkg debs all

        ## Sun-Java6-JRE
        deb http://security.ubuntu.com/ubuntu hardy-security main multiverse

        ** /etc/apt/sources.list.d/askubuntu-tools-ppa-precise.list:
        deb http://ppa.launchpad.net/askubuntu-tools/ppa/ubuntu precise main
        deb-src http://ppa.launchpad.net/askubuntu-tools/ppa/ubuntu precise main

        ** /etc/apt/sources.list.d/askubuntu-tools-ppa-precise.list.save:
        deb http://ppa.launchpad.net/askubuntu-tools/ppa/ubuntu precise main
        deb-src http://ppa.launchpad.net/askubuntu-tools/ppa/ubuntu precise main

        ** /etc/apt/sources.list.d/effie-jayx-turpial-oneiric.list:
        deb http://ppa.launchpad.net/effie-jayx/turpial/ubuntu precise main # disabled on upgrade to precise
        deb-src http://ppa.launchpad.net/effie-jayx/turpial/ubuntu precise main # disabled on upgrade to precise

        ** /etc/apt/sources.list.d/effie-jayx-turpial-oneiric.list.distUpgrade:
        deb http://ppa.launchpad.net/effie-jayx/turpial/ubuntu oneiric main
        deb-src http://ppa.launchpad.net/effie-jayx/turpial/ubuntu oneiric main

        ** /etc/apt/sources.list.d/effie-jayx-turpial-oneiric.list.save:
        deb http://ppa.launchpad.net/effie-jayx/turpial/ubuntu precise main # disabled on upgrade to precise
        deb-src http://ppa.launchpad.net/effie-jayx/turpial/ubuntu precise main # disabled on upgrade to precise

        ** /etc/apt/sources.list.d/getdeb.list:
        # deb http://archive.getdeb.net/ubuntu oneiric-getdeb apps # disabled on upgrade to precise

        ** /etc/apt/sources.list.d/getdeb.list.distUpgrade:
        deb http://archive.getdeb.net/ubuntu oneiric-getdeb apps

        ** /etc/apt/sources.list.d/getdeb.list.save:
        # deb http://archive.getdeb.net/ubuntu oneiric-getdeb apps # disabled on upgrade to precise

        ** /etc/apt/sources.list.d/hotot-team-ppa-oneiric.list:
        deb http://ppa.launchpad.net/hotot-team/ppa/ubuntu precise main # disabled on upgrade to precise
        deb-src http://ppa.launchpad.net/hotot-team/ppa/ubuntu precise main # disabled on upgrade to precise

        ** /etc/apt/sources.list.d/hotot-team-ppa-oneiric.list.distUpgrade:
        deb http://ppa.launchpad.net/hotot-team/ppa/ubuntu oneiric main
        deb-src http://ppa.launchpad.net/hotot-team/ppa/ubuntu oneiric main

        ** /etc/apt/sources.list.d/hotot-team-ppa-oneiric.list.save:
        deb http://ppa.launchpad.net/hotot-team/ppa/ubuntu precise main # disabled on upgrade to precise
        deb-src http://ppa.launchpad.net/hotot-team/ppa/ubuntu precise main # disabled on upgrade to precise

        ** /etc/apt/sources.list.d/iefremov-ppa-precise.list:
        deb http://ppa.launchpad.net/iefremov/ppa/ubuntu precise main
        deb-src http://ppa.launchpad.net/iefremov/ppa/ubuntu precise main

        ** /etc/apt/sources.list.d/iefremov-ppa-precise.list.save:
        deb http://ppa.launchpad.net/iefremov/ppa/ubuntu precise main
        deb-src http://ppa.launchpad.net/iefremov/ppa/ubuntu precise main

        ** /etc/apt/sources.list.d/jockey.list:
        deb http://www.openprinting.org/download/printdriver/debian/ lsb3.2 main-nonfree # disabled on upgrade to precise

        ** /etc/apt/sources.list.d/jockey.list.distUpgrade:
        deb http://www.openprinting.org/download/printdriver/debian/ lsb3.2 main-nonfree

        ** /etc/apt/sources.list.d/jockey.list.save:
        deb http://www.openprinting.org/download/printdriver/debian/ lsb3.2 main-nonfree # disabled on upgrade to precise

        ** /etc/apt/sources.list.d/plexydesk-plexydesk-dailybuild-precise.list:
        deb http://ppa.launchpad.net/plexydesk/plexydesk-dailybuild/ubuntu precise main
        deb-src http://ppa.launchpad.net/plexydesk/plexydesk-dailybuild/ubuntu precise main

        ** /etc/apt/sources.list.d/plexydesk-plexydesk-dailybuild-precise.list.save:
        deb http://ppa.launchpad.net/plexydesk/plexydesk-dailybuild/ubuntu precise main
        deb-src http://ppa.launchpad.net/plexydesk/plexydesk-dailybuild/ubuntu precise main

        ** /etc/apt/sources.list.d/precise-partner.list:
        deb http://archive.canonical.com/ubuntu precise partner #Added by software-center

        ** /etc/apt/sources.list.d/precise-partner.list.save:
        deb http://archive.canonical.com/ubuntu precise partner #Added by software-center

        ** /etc/apt/sources.list.d/private-ppa.launchpad.net_commercial-ppa-uploaders_crossover-pro_ubuntu.list:
        # deb https://justin-dormandy:[email protected]/commercial-ppa-uploaders/crossover-pro/ubuntu precise main #Added by software-center disabled on upgrade to precise

        ** /etc/apt/sources.list.d/private-ppa.launchpad.net_commercial-ppa-uploaders_crossover-pro_ubuntu.list.distUpgrade:
        cat: /etc/apt/sources.list.d/private-ppa.launchpad.net_commercial-ppa-uploaders_crossover-pro_ubuntu.list.distUpgrade: Permission denied

        ** /etc/apt/sources.list.d/private-ppa.launchpad.net_commercial-ppa-uploaders_crossover-pro_ubuntu.list.save:
        cat: /etc/apt/sources.list.d/private-ppa.launchpad.net_commercial-ppa-uploaders_crossover-pro_ubuntu.list.save: Permission denied

        ** /etc/apt/sources.list.d/screenlets-ppa-precise.list:
        deb http://ppa.launchpad.net/screenlets/ppa/ubuntu precise main
        deb-src http://ppa.launchpad.net/screenlets/ppa/ubuntu precise main

        ** /etc/apt/sources.list.d/screenlets-ppa-precise.list.save:
        deb http://ppa.launchpad.net/screenlets/ppa/ubuntu precise main
        deb-src http://ppa.launchpad.net/screenlets/ppa/ubuntu precise main

        ** /etc/apt/sources.list.d/webupd8team-java-precise.list:
        deb http://ppa.launchpad.net/webupd8team/java/ubuntu precise main
        deb-src http://ppa.launchpad.net/webupd8team/java/ubuntu precise main

        ** /etc/apt/sources.list.d/webupd8team-java-precise.list.save:
        deb http://ppa.launchpad.net/webupd8team/java/ubuntu precise main
        deb-src http://ppa.launchpad.net/webupd8team/java/ubuntu precise main
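
    Judging from the error text alone (an assumption, not a confirmed diagnosis): the failing 'commercial/source/Sources' entries point at the stray "commercial" component on the archive.ubuntu.com lines, which is not a component Ubuntu publishes, so apt cannot find it in the Release files. A minimal sketch of what those lines would look like with it removed:

        # sketch only: the "commercial" component removed from the official archive lines
        deb http://archive.ubuntu.com/ubuntu precise main restricted universe multiverse
        deb-src http://archive.ubuntu.com/ubuntu precise main restricted universe multiverse
        deb http://archive.ubuntu.com/ubuntu precise-updates main restricted universe multiverse
        deb-src http://archive.ubuntu.com/ubuntu precise-updates main restricted universe multiverse

    The 404s from the iefremov PPA suggest that PPA simply publishes no packages for precise, so it could be disabled in Software Sources as well.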


  • How can I find a URL called [link] inside a block of HTML containing other URLs?

    - by DrTwox
    I'm writing a script to rewrite Reddit's RSS feeds. The script needs to find a URL named [link] inside a block of HTML that contains other URLs. The HTML is contained in an XML element called <description>. Here are two examples of the <description> element that I need to parse, and the [link] I would need to get.

    First example:

        <description>submitted by &lt;a href=&#34;http://www.reddit.com/user/wildlyinaccurate&#34;&gt; wildlyinaccurate &lt;/a&gt; &lt;br/&gt; &lt;a href=&#34;http://wildlyinaccurate.com/a-hackers-guide-to-git&#34;&gt;[link]&lt;/a&gt; &lt;a href="http://www.reddit.com/r/programming/comments/26jvl7/a_hackers_guide_to_git/"&gt;[66 comments]&lt;/a&gt;</description>

    The [link] is: http://wildlyinaccurate.com/a-hackers-guide-to-git

    Second example:

        <description>&lt;!-- SC_OFF --&gt;&lt;div class=&#34;md&#34;&gt;&lt;p&gt;I work a support role at a company where I primarily fix issues our customers our experiencing with our software, which is a browser based application written primarily in javascript. I&amp;#39;ve been doing this for 2 years, but I want to take it to the next level (with the long term goal being that I become proficient enough to call myself a developer). I&amp;#39;ve been reading &amp;quot;Javascript The Definitive Guide&amp;quot; by O&amp;#39;Reilly but I was wondering if any of you more experienced users out there had some tips on taking it to the next level. Should I start incorporating some PHP and Jquery into my learning? Side projects on my spare time? Any good online resources? Etc. &lt;/p&gt; &lt;p&gt;Thanks! &lt;/p&gt; &lt;/div&gt;&lt;!-- SC_ON --&gt; submitted by &lt;a href=&#34;http://www.reddit.com/user/56killa&#34;&gt; 56killa &lt;/a&gt; &lt;br/&gt; &lt;a href=&#34;http://www.reddit.com/r/javascript/comments/26nduc/i_want_to_become_more_experienced_with_javascript/&#34;&gt;[link]&lt;/a&gt; &lt;a href="http://www.reddit.com/r/javascript/comments/26nduc/i_want_to_become_more_experienced_with_javascript/"&gt;[4 comments]&lt;/a&gt;</description>

    The [link] is: http://www.reddit.com/r/javascript/comments/26nduc/i_want_to_become_more_experienced_with_javascript/
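
    Since the anchor text is literally "[link]", one workable approach (a rough sketch in PHP; the helper name and regex are my own, not from the question) is to decode the entities and then capture the href of the anchor whose text is exactly [link]:

        <?php
        // Hypothetical helper: extract the URL of the anchor labelled "[link]"
        // from a Reddit RSS <description> payload.
        function extractLink($description)
        {
            // The HTML is entity-encoded inside the XML element, so decode it
            // back into real markup first.
            $html = html_entity_decode($description, ENT_QUOTES, 'UTF-8');

            // Capture the href of the <a> tag whose visible text is exactly "[link]".
            if (preg_match('~<a\s+href="([^"]+)"\s*>\[link\]</a>~i', $html, $m)) {
                return $m[1];
            }
            return null; // no [link] anchor found
        }

        // Usage: $url = extractLink($descriptionText);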


  • CodeIgniter not being able to get the full param from the URL?

    - by bnelsonjax
    I'm having a weird issue that I can't seem to fix. It's dealing with viewing a company and adding a location to that company. When viewing a company, my URL looks like this:

        domain.com/company/view/415

    So clearly 415 is the ID of the company, and the company shows up correctly on my company view page. Now comes the weird part. When clicking on an "Add Location" link, which takes me to:

        domain.com/location/add/415

    this should be saying Location / Add / 415 (company ID 415). On this page, if I echo $data['id'] it echoes 4 (instead of 415, the company ID). If the company ID is 754, the php echo $data['id'] echoes 7 (instead of 754). So it's stripping the last two numbers off the company ID. Here is my controller:

        public function add($id)
        {
            if (isset($_POST["add"])) {
                $this->Equipment_model->add($id);
                redirect('company/view/'.$id);
            }
            $data['locations'] = $this->Equipment_model->get_locations($id);
            $data['data'] = $id;
            $this->load->view('templates/header');
            $this->load->view('equipment/add', $data);
            $this->load->view('templates/footer');
        }

    Here is my .htaccess:

        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-l
        RewriteCond $1 !^(index\.php|css|font|img|js|themes)
        RewriteRule ^(.*)$ index.php/$1 [QSA,L]

    Because my PHP/CodeIgniter experience is limited, maybe my terminology is off, so I created a video and uploaded it to Twitch; here is the link if you want to see what I'm talking about: http://www.twitch.tv/bnelsonjax/b/420079504

    If anyone could help I'd be so grateful; I've been stuck on this for about a week.

    UPDATE: OK, now we are getting somewhere. When I change the controller to:

        public function add($id)
        {
            if (isset($_POST["add"])) {
                $this->Equipment_model->add($id);
                redirect('company/view/'.$id);
            }
            $data['locations'] = $this->Equipment_model->get_locations($id);
            $data['data'] = $id;
            $data['cid'] = $id;
            $this->load->view('templates/header');
            $this->load->view('equipment/add', $data);
            $this->load->view('templates/footer');
            $this->output->enable_profiler(TRUE);
        }

    and add the following to the view page:

        <?php echo $data['id']; ?>

    it echoes 7. This one:

        <?php echo $cid; ?>

    echoes 766 (the correct one). This one:

        <?php echo $data['cid']; ?>

    echoes 7. My question then is: if the controller has

        $data['data'] = $id;
        $data['cid'] = $id;

    why does only the one that's $cid echo correctly?
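
    For what it's worth, the symptom matches PHP's string-offset behavior rather than anything in the rewrite rules: CodeIgniter extracts each key of the array passed to the view into its own variable, so inside the view $data is the ID string itself (because of $data['data'] = $id), and indexing a string with a non-numeric key like 'id' is treated as offset 0, which returns the first character (under PHP 5/7 semantics; PHP 8 raises an error instead). A minimal standalone sketch (my own, not from the question) that reproduces it:

        <?php
        // The view sees $data = "754" (a string), not the controller's array.
        $data = '754';

        // A non-numeric string offset such as 'id' is cast to 0 (PHP emits a
        // notice/warning), so this prints the first character only: "7".
        echo $data['id'], "\n";

        // A plain variable holding the full value prints as expected: "754".
        $cid = '754';
        echo $cid, "\n";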


  • Clickable link in RadGrid column

    - by brainimus
    I have a RadGrid where a column in the grid holds a URL. When I put a value in the column I can see the URL, but the URL is not clickable (to go to the URL). How can I make the URL clickable? Here is a rough example of what I'm doing now:

        DataTable table = new DataTable();
        DataRow row = table.Rows[0];
        row["URL"] = "http://www.google.com";
        grid.DataSource = table;

    In addition, I'd really like to show specific text instead of the URL, similar to <a href="http://www.google.com">Link</a> in HTML. Is there any way to do this?


  • Deeplinking using GWT History Token within a Facebook iFrame Canvas

    - by Stevko
    I would like to deep-link directly to a GWT app page within a Facebook iFrame Canvas. The first part is simple using GWT's History token, with URLs like:

        http://www.example.com/MyApp/#page1

    which would open page1 within my app. Facebook apps use an application URL like:

        http://apps.facebook.com/myAppName

    which frames my canvas callback URL:

        http://www.example.com/MyApp/

    Is there a way to specify a canvas callback URL (or bookmark URL) that will take the user to a specific page rather than the index page? Why, you may ask? Besides all the benefits of deep links: I want the "Go To Application" URL to take users to an index page with marketing material (the canvas callback URL), and I want the "Bookmark URL" to take (likely returning) users to a login page and bypass downloading the marketing content (and that huge SWF file).


  • Design pattern for parsing data that will be grouped in two different ways and flipped

    - by lewisblackfan
    I'm looking for an easily maintainable and extendable design for a script to parse an Excel workbook into two separate workbooks, after pulling data from other locations like the command line and a database. The high-level details are as follows.

    I need to parse an Excel workbook containing a sheet that lists unique question names. The only reliable information that can be parsed from a question name is the book code that identifies the title and edition of the textbook the question is associated with; the rest of the question name is not standardized well enough to be reliably parsed by computer. The general form of the question name is best described by the following regular expression:

        '^(\w+)\s(\w{1,2})\.(\w{1,2})\.(\w{1,3})\.(\w{1,3}\.)*$'

    The first sub-pattern is the book code, the second sub-pattern is 90% of the time the chapter, and the rest of the sub-patterns could be section, problem type, problem number, or question type information. There is no simple logic, at least not one I can find.

    There will be a minimum of three other columns in this spreadsheet: one column will be the chapter the question is associated with, the second will be the section within the chapter the question is associated with, and the third will be some kind of asset indicated by a uniform resource locator.

        1 | 1 | qname1 | url | description | url | description ...
        1 | 1 | qname2 | url | description
        1 | 1 | qname3 | url | description | url | description | url |

    The asset can be indicated by a full or partial uniform resource locator; the partial URL will need to be completed before it can be fed into the application. There could theoretically be no limit to the number of asset columns, and the assets will be grouped in columns by type. Sometimes additional data will have to be retrieved from a database, or combined with the book code, before the asset URL is complete and can be understood by the application that will be using the asset. The type is an abstraction: there are eight types right now, each with its own logic for how the uniform resource locator is handled and/or completed, and I have to add a new type and its logic every three or four months. For each asset URL there is the possibility of a description column, a character string for display in the application, but not always. (I've already worked out validating the description text, and squashing MS's obscure code page down to something 7-bit ASCII can handle.)

    Now that all the details are filled in, I can get to the actual problem of parsing the file. I need to split the information in this Excel workbook into two separate workbooks. The first workbook will group all the questions by section in rows, with the first cell being the section doublet and the rest of the cells in the row being the question names:

        1.1 | qname1 | qname2 | qname3 | qname4 |
        1.2 | qname1 | qname2 | qname3 |
        1.3 | qname1 | qname2 | qname3 | qname4 | qname5

    There is no set number of questions for each section, as you can see from the above example. The second workbook is more complicated: there is one row per asset, and question names that have more than one asset will be duplicated. There will be four or five columns on this sheet. The first is the question name for the asset, the second is a media type used to select the correct icon for the asset in the application, the third is a string representing the asset type, the fourth is the full and complete uniform resource locator for the asset, and the fifth column is the optional text description for the asset:

        q1 | mtype1 | atype1 | url | description
        q1 | mtype2 | atype2 | url | description
        q1 | mtype2 | atype3 | url | description
        q2 | mtype1 | atype1 | url | description
        q2 | mtype2 | atype3 | url | description

    For the original six types I did have a script that parsed the source Excel workbook into the other two workbooks, and I was able to add two more types until I ran aground on the implementation of the ninth and tenth types. What broke my script was the fact that the ninth type is actually a sub-type of one of the original six, but with entirely different logic, and my mostly procedural script could not accommodate it without duplicating a lot of code. I also had a lot of bugs in the script and will be writing the tests first this time around.

    I'm stuck with the format for the resulting two workbooks; this script is glue code. Development went ahead with the project without bothering to get a complete spec from the sponsor. I work for the same company as the developers, but in the editorial department; editorial is co-sponsor of the project, and I am expected to fix pesky details like this (I'm foaming at the mouth as I type this).

    I've tried factories, I've tried different object models, but each resulting workbook is so different that when I find a design that works for generating one workbook, the code is not really usable for generating the other. What I would really like are ideas about a maintainable and extensible design for parsing the source workbook into both workbooks with maximum code reuse, and/or sympathy.


  • How can I quote strings in SASS?

    - by Stavros Korokithakis
    I'm using SASS to generate a @font-face mixin; however, this:

        =remotefont(!name, !url)
          @font-face
            font-family = !name
            src = url(!url + ".eot")
            src = local(!name), url(!url + ".ttf") format("truetype")

        +remotefont("My font", "/myfont.ttf")

    becomes this:

        @font-face {
          font-family: My font;
          src: url(/myfont.ttf.eot);
          src: local(My font), url(/myfont.ttf.ttf) format(truetype);
        }

    No matter how much I try, I can't have it quote either "My font" (with !name) or "truetype" (it removes the quotes). Any ideas on how I can do this?


  • How to pull data from a MySQL column and put it into a single array

    - by Rob
    Basically, this is what I currently use in an included file:

        $sites[0]['url'] = "http://example0.com";
        $sites[1]['url'] = "http://example1.com";
        $sites[2]['url'] = "http://example2.com";
        $sites[3]['url'] = "http://example3.com";
        $sites[4]['url'] = "http://example4.com";
        $sites[5]['url'] = "http://example5.com";

    And so I output it like so:

        foreach($sites as $s)

    But I want to make it easier to manage via a MySQL database. So my question is: how can I make it automatically add additional entries ($sites[x]['url'] = "http://examplex.com";) and output them appropriately from my MySQL table?
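
    A rough sketch of one way to do this with mysqli (the table and column names sites/url here are assumptions, not from the question):

        <?php
        // Build the same $sites structure from a database table instead of
        // hard-coding it. Assumes a table `sites` with a `url` column.
        $db = new mysqli('localhost', 'username', 'password', 'database');
        if ($db->connect_errno) {
            die('Unable to connect to database [' . $db->connect_error . ']');
        }

        $sites = array();
        $result = $db->query('SELECT url FROM sites');
        while ($row = $result->fetch_assoc()) {
            $sites[] = array('url' => $row['url']); // same shape as before
        }

        // Output exactly as the existing code does.
        foreach ($sites as $s) {
            echo $s['url'], "\n";
        }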


  • Protect PHP result

    - by siko
    Hi, I have a simple PHP file to protect WMV files from being downloaded or stolen. The file is like this:

        <?php
        // get-file.php
        $id = (isset($_GET["id"])) ? strval($_GET["id"]) : "1";

        // lookup
        $url[1] = 'http://mlfat.ledawy.net/Files/eng/moslslat/mosque/1.wmv';
        $url[2] = 'URL';
        $url[3] = 'URL';

        header('Content-Type: text/plain');
        echo("$url[$id]");
        ?>

    and I embed the movies like this:

        http://mlfat.ledawy.net/Files/eng/moslslat/mosque/mosque.php?id=1

    But if you call this link, you can find the source URL. Can I have some protection? I tried a lot, e.g. adding a PHP session variable so that the script cannot be accessed directly. Thank you.
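
    A rough sketch of the session idea the poster mentions (the variable names and the 60-second window are illustrative assumptions, not from the original code): the player page sets a short-lived flag, and get-file.php refuses to answer unless it is present:

        <?php
        // get-file.php -- only respond when the request comes via our player page.
        session_start();

        // The player page must have set this flag first, e.g.:
        //   session_start(); $_SESSION['can_fetch'] = time();
        if (!isset($_SESSION['can_fetch']) || (time() - $_SESSION['can_fetch']) > 60) {
            header('HTTP/1.1 403 Forbidden');
            exit('Direct access is not allowed.');
        }

        $id = isset($_GET['id']) ? strval($_GET['id']) : '1';

        $url = array(
            1 => 'http://mlfat.ledawy.net/Files/eng/moslslat/mosque/1.wmv',
            2 => 'URL',
            3 => 'URL',
        );

        header('Content-Type: text/plain');
        echo isset($url[$id]) ? $url[$id] : '';

    Note that this only raises the bar: anything the browser can ultimately fetch can still be captured by a determined user.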


  • Update data in jqGrid

    - by griZZZly8
    Hi! I use jqGrid in this scenario: the grid gets JSON data from a first URL. If the URL returns correct JSON, the grid displays that data. If the URL returns incorrect JSON, that fires the grid's 'loadError' event. In this event I want to change the grid's URL to a second URL and get JSON data from this new URL. Here is my code:

        loadError: function(xhr, st, err) {
            $("#list").setGridParam({ url: '/new_url' });
            $("#list").trigger("reloadGrid");
        }

    But it doesn't work.


  • Static setter method injection in Spring

    - by vishnu
    Hi, I have the following requirement: I want to pass the value http:\\localhost:9080\testws.cls as a setter injection through the Spring configuration file. How can I do this static-variable setter injection for WSDL_LOCATION?

        public class Code1 extends javax.xml.ws.Service {

            private final static URL CODE1_WSDL_LOCATION;

            static {
                URL url = null;
                try {
                    url = new URL("http:\\localhost:9080\testws.cls");
                } catch (MalformedURLException e) {
                    e.printStackTrace();
                }
                CODE1_WSDL_LOCATION = url;
            }

            public Code1(URL wsdlLocation, QName serviceName) {
                super(wsdlLocation, serviceName);
            }

            public Code1() {
                super(CODE1_WSDL_LOCATION, new QName("http://tempuri.org", "Code1"));
            }

            /**
             * @return returns Code1Soap
             */
            @WebEndpoint(name = "Code1Soap")
            public Code1Soap getCode1Soap() {
                return (Code1Soap) super.getPort(new QName("http://tempuri.org", "Code1Soap"), Code1Soap.class);
            }
        }

    Please help me out.


  • Echo MySQL results in a loop

    - by Roy D. Porter
    I am using turn.js to make a book. Every div within the 'deathnote' div becomes a new page.

        <div id="deathnote">                                                  <!-- starts book -->
          <div style="background-image:url(images/coverpage.jpg);"></div>    <!-- creates new page -->
          <div style="background-image:url(images/paper.jpg);"></div>        <!-- creates new page -->
          <div style="background-image:url(images/paper.jpg);"></div>        <!-- creates new page -->
        </div>                                                                <!-- ends book -->

    What I am trying to do is get three 'content' divs (content being a name and cause of death) onto one page, and then generate a new page. So here is what I want:

        <div id="deathnote">                                                  <!-- starts book -->
          <div style="background-image:url(images/coverpage.jpg);"></div>    <!-- creates new page -->
          <div style="background-image:url(images/paper.jpg);"></div>        <!-- creates new page -->
          <div style="background-image:url(images/paper.jpg);">              <!-- creates new page but leaves it open -->
            <div> CONTENT </div>
            <div> CONTENT </div>
            <div> CONTENT </div>
          </div>                                                              <!-- ends the page -->
        </div>                                                                <!-- ends book -->

    Seems simple enough; however, the content is data from a MySQL DB, so I have to echo it in using PHP. Here is what I have so far:

        <div id="deathnote">
          <div style="background-image:url(images/coverpage.jpg);"></div>
          <div style="background-image:url(images/paper.jpg);"></div>
          <div style="background-image:url(images/paper.jpg);"></div>
          <div style="background-image:url(images/paper.jpg);"></div>
          <div style="background-image:url(images/paper.jpg);"></div>
          <div style="background-image:url(images/paper.jpg);"></div>

        <?php
        $pagecount = 0;
        $db = new mysqli('localhost', 'username', 'passw', 'DB');
        if($db->connect_errno > 0){
            die('Unable to connect to database [' . $db->connect_error . ']');
        }
        $sql = <<<SQL
        SELECT * FROM `TABLE`
        SQL;
        if(!$result = $db->query($sql)){
            die('There was an error running the query [' . $db->error . ']');
        }

        // IGNORE ALL OF THE GARBAGE ABOVE. IT IS SIMPLE CONNECTING SCRIPT THAT I KNOW WORKS
        // THE METHOD I AM HAVING TROUBLE WITH IS BELOW

        $pagecount = 0;
        while($row = $result->fetch_assoc()){ // GETS THE VALUE (and makes sure it isn't nothing)
            echo '<div style="background-image:url(images/paper.jpg);">'; // THIS OPENS A NEW PAGE
            while ($pagecount !== 3) { // KEEPS COUNT OF HOW MANY CONTENT DIVS ARE ON THE PAGE
                while($row = $result->fetch_assoc()){
                    // START A CONTENT DIV
                    echo '<div class="content"><div class="name">' . $row['victim'] . '</div><div class="cod">' . $row['cod'] . '</div></div>';
                    // END A CONTENT DIV
                    $pagecount++; // UP THE PAGE COUNT
                }
            }
            $pagecount = 0; // PUT IT BACK TO 0
            echo '</div>'; // END PAGE
        }
        $db->close();
        ?>

          <div style="background-image:url(images/backpage.jpg);"></div>     <!-- BACK PAGE -->
        </div>

    At the moment I seem to be causing an infinite loop, so the page won't load. The problem resides within the while loops. Any help is greatly appreciated. Thanks in advance, guys. :)
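
    For reference, a minimal sketch of one way the chunking loop could be structured (my own rewrite, not from the question; it assumes $result is the mysqli result from the code above): fetch each row exactly once and open/close a page div every three rows, which avoids nested fetch loops that exhaust the result set and leave the inner counter condition never satisfied:

        <?php
        // Emit one page <div> per three rows; a single fetch loop so each row
        // is consumed exactly once.
        $perPage = 3;
        $count = 0;

        while ($row = $result->fetch_assoc()) {
            if ($count % $perPage === 0) {
                echo '<div style="background-image:url(images/paper.jpg);">'; // open a page
            }
            echo '<div class="content">'
               . '<div class="name">' . htmlspecialchars($row['victim']) . '</div>'
               . '<div class="cod">' . htmlspecialchars($row['cod']) . '</div>'
               . '</div>';
            $count++;
            if ($count % $perPage === 0) {
                echo '</div>'; // close the page after three content divs
            }
        }
        if ($count % $perPage !== 0) {
            echo '</div>'; // close a partially filled final page
        }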


  • How to write a jUnit test for this class?

    - by flash
    Hi, I would like to know the best approach to test the method pushEvent() in the following class with a jUnit test. My problem is that the private method callWebsite() always requires a connection to the network. How can I avoid this requirement, or refactor my class so that I can test it without a connection to the network?

        class MyClass {

            public String pushEvent(Event event) {
                // do something here
                String url = constructURL(event); // construct the website url
                String response = callWebsite(url);
                return response;
            }

            private String callWebsite(String url) {
                try {
                    URL requestURL = new URL(url);
                    HttpURLConnection connection = null;
                    connection = (HttpURLConnection) requestURL.openConnection();
                    String responseMessage = responseParser.getResponseMessage(connection);
                    return responseMessage;
                } catch (MalformedURLException e) {
                    e.printStackTrace();
                    return e.getMessage();
                } catch (IOException e) {
                    e.printStackTrace();
                    return e.getMessage();
                }
            }
        }


  • Is it possible to use wildcards within J2EE filters?

    - by Sergio del Amo
    I would like to apply a filter to several URL endings. The following configuration seems to work:

        <filter>
            <filter-name>LanguageFilter</filter-name>
            <filter-class>filters.LanguageFilter</filter-class>
        </filter>
        <filter-mapping>
            <filter-name>LanguageFilter</filter-name>
            <url-pattern>*.do</url-pattern>
        </filter-mapping>
        <filter-mapping>
            <filter-name>LanguageFilter</filter-name>
            <url-pattern>*.xml</url-pattern>
        </filter-mapping>

    Originally I asked whether it was possible to use wildcards such as:

        <url-pattern>*.do|*.xml</url-pattern>

    But that does not seem to be possible.
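
    As a side note (my understanding of the Servlet spec; worth verifying against your container's version): pattern alternation like *.do|*.xml is indeed not part of the url-pattern syntax, but since Servlet 2.5 a single filter-mapping may list several url-pattern elements, which is slightly more compact than repeating the whole mapping:

        <filter-mapping>
            <filter-name>LanguageFilter</filter-name>
            <url-pattern>*.do</url-pattern>
            <url-pattern>*.xml</url-pattern>
        </filter-mapping>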


  • Sort multidimensional array in PHP by more than one field

    - by Ankita Agrawal
    I want to sort an array by two fields. I have an array like this:

        Array (
            [0]  => Array ( [name] => abc   [url] => http://127.0.0.1/abc/img1.png    [count] => 69  [img] => accessoire-sets_1.jpg )
            [1]  => Array ( [name] => abc2  [url] => http://127.0.0.1/abc/img12.png   [count] => 73  [img] => )
            [2]  => Array ( [name] => abc45 [url] => http://127.0.0.1/abc/img122.png  [count] => 15  [img] => tomahawk-kopen_1.png )
            [3]  => Array ( [name] => zyz   [url] => http://127.0.0.1/abc/img22.png   [count] => 168 [img] => )
            [4]  => Array ( [name] => lmn   [url] => http://127.0.0.1/abc/img1222.png [count] => 10  [img] => )
            [5]  => Array ( [name] => qqq   [url] => http://127.0.0.1/abc/img1222.png [count] => 70  [img] => )
            [6]  => Array ( [name] => dsa   [url] => http://127.0.0.1/abc/img1112.png [count] => 43  [img] => )
            [7]  => Array ( [name] => wer   [url] => http://127.0.0.1/abc/img172.png  [count] => 228 [img] => )
            [8]  => Array ( [name] => hhh   [url] => http://127.0.0.1/abc/img126.png  [count] => 36  [img] => )
            [9]  => Array ( [name] => rrrt  [url] => http://127.0.0.1/abc/img12.png   [count] => 51  [img] => )
            [10] => Array ( [name] => yyy   [url] => http://127.0.0.1/abc/img12.png   [count] => 22  [img] => )
            [11] => Array ( [name] => cxz   [url] => http://127.0.0.1/abc/img12.png   [count] => 41  [img] => )
            [12] => Array ( [name] => tre   [url] => http://127.0.0.1/abc/img12.png   [count] => 32  [img] => )
            [13] => Array ( [name] => fds   [url] => http://127.0.0.1/abc/img12.png   [count] => 10  [img] => )
        )

    Entries WITHOUT images (an empty "img" field) should always be placed underneath entries WITH images. After that, entries should be sorted by the amount of products (the "count" field). That is, I have to sort the array first on img, then on count. I am using:

        usort( $childLinkCats, 'sortempty' );

        function sortempty( $a, $b ) {
            return empty( $a['img'] );
        }

    which places entries with an image value above the ones that contain an empty value. To sort by count I am using:

        usort($childLinkCats, "_sortByCount");

        function _sortByCount($a, $b) {
            return strnatcmp($a['count'], $b['count']);
        }

    which sorts by count. But I am facing the problem that only one of them works at a time, and I have to use both. Please help me.
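
    A rough sketch of a single comparator combining both criteria (my own suggestion; it assumes rows without an image have an empty img field, as in the dump above):

        <?php
        // Sort: rows with an image first, then by count within each group.
        usort($childLinkCats, function ($a, $b) {
            $aEmpty = empty($a['img']);
            $bEmpty = empty($b['img']);

            // Different groups: the row without an image sinks to the bottom.
            if ($aEmpty !== $bEmpty) {
                return $aEmpty ? 1 : -1;
            }
            // Same group: compare the count field.
            return strnatcmp($a['count'], $b['count']);
        });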


  • Add the list content at the end?

    - by rajesh
    Hi, actually my code is:

        function addElement(url, imagePath) {
            alert(url);
            alert(imagePath);
            var container = document.getElementById('sncs');
            var new_element = document.createElement('li');
            new_element.innerHTML = "<a href='#'><img src='" + imagePath + "' alt='' title='' width='466' height='165' onClick=\"javascript:addWidget('" + url + "')\"></a>";
            new_element.className = "jcarousel-item jcarousel-item-horizontal jcarousel-item-4 jcarousel-item-4-horizontal";
            container.insertBefore(new_element, container.firstChild);
        }

        function addWidget(url) {
            alert(url);
            // var url = 'http://www.boxyourtvtrial.com/songs/public/';
            var main = document.getElementById('mainwidget');
            main.innerHTML = "<iframe src='" + url + "' align='left' height='1060px' width='576px' scrolling='no' frameborder='0' id='lodex'></iframe>";
        }

    But when we add the image, it's not going at the end but in front. Please provide a solution.


  • Multiple conditions with CASE statements

    - by Pavan Reddy
    I need to query some data. Here is the query I have constructed, but it isn't working for me. For this example I am using the AdventureWorks database.

        SELECT * FROM [Purchasing].[Vendor]
        WHERE PurchasingWebServiceURL LIKE
            CASE
                -- In this case I need all rows to be returned if @url is '' or 'ALL' or NULL
                WHEN (@url IS NULL OR @url = '' OR @url = 'ALL')
                    THEN ('''%'' AND PurchasingWebServiceURL IS NULL')
                -- I need all records which are blank here, including NULLs
                WHEN (@url = 'blank')
                    THEN (''''' AND PurchasingWebServiceURL IS NULL')
                -- In this condition I need all records which are NOT LIKE a particular value
                WHEN (@url = 'fail')
                    THEN ('''%'' AND PurchasingWebServiceURL NOT LIKE ''%treyresearch%''')
                -- Else match the records which are LIKE the input value
                ELSE '%' + @url + '%'
            END

    This is not working for me. How can I have multiple WHERE-condition clauses in the THEN of the same CASE? How can I make this work?
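
    The usual workaround (a sketch, untested against AdventureWorks) is to move the branching out of the CASE and express it as plain boolean logic, since a CASE expression can only return a value, not a chunk of WHERE clause:

        SELECT *
        FROM [Purchasing].[Vendor]
        WHERE (@url IS NULL OR @url = '' OR @url = 'ALL')              -- everything
           OR (@url = 'blank'
               AND (PurchasingWebServiceURL = ''
                    OR PurchasingWebServiceURL IS NULL))               -- blanks and NULLs
           OR (@url = 'fail'
               AND PurchasingWebServiceURL NOT LIKE '%treyresearch%')  -- negative match
           OR (@url NOT IN ('', 'ALL', 'blank', 'fail')
               AND PurchasingWebServiceURL LIKE '%' + @url + '%')      -- normal match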


  • RemoteWebDriver doesn't work with XPath

    - by questions
    I'm trying to use RemoteWebDriver with XPath locators on google.com. This is the log from the node running Firefox. It receives all those commands but doesn't execute them. I don't see any activity in the browser other than opening the Google homepage.

        14:05:05.671 INFO - Executing: [get: http://google.com] at URL: /session/1341695049401/url)
        14:05:06.260 INFO - Done: /session/1341695049401/url
        14:05:06.301 INFO - Executing: [find element: By.xpath: //*[@id="gbqfqw"]] at URL: /session/1341695049401/element)
        14:05:06.453 INFO - Done: /session/1341695049401/element
        14:05:06.495 INFO - Executing: [send keys: 0 org.openqa.selenium.support.events.EventFiringWebDriver$EventFiringWebElement@74d5f412, [StackOverflow]] at URL: /session/1341695049401/element/0/value)
        14:05:06.796 INFO - Done: /session/1341695049401/element/0/value
        14:05:06.822 INFO - Executing: [find element: By.xpath: //*[@id="gbqfb"]] at URL: /session/1341695049401/element)
        14:05:06.935 INFO - Done: /session/1341695049401/element
        14:05:06.987 INFO - Executing: [click: 1 org.openqa.selenium.support.events.EventFiringWebDriver$EventFiringWebElement@7b64218d] at URL: /session/1341695049401/element/1/click)
        14:05:09.627 INFO - Executing: org.openqa.selenium.remote.server.handler.Status@6349a3ca at URL: /status)
        14:05:09.627 INFO - Done: /status

    I tried with By.name(q) and it works.


  • Help with redirect and query strings

    - by James
    I'm a total novice at mod_rewrite, so I'll try to present my question as clearly as possible. I'm trying to create a URL redirect of the following (static) affiliate URL that can append itself to any product links using a query string.

    Affiliate URL:

        hxxp://clk.affilite.com/fs-bin/click?id=aFb*BBBBBpQ&subid=&offerid=9999.2&type=5&tmpid=9999&RD_PARM1=

    Product URL:

        hxxp://example.domain.com

    What I want to achieve is redirecting the affiliate code as below, and being able to add dynamic product URLs after it, as the following examples show.

    Rewritten affiliate URL:

        hxxp://domain.com/go

    Affiliate URL + product URL:

        hxxp://domain.com/go?=http://example.domain.com

    redirects to:

        hxxp://clk.affilite.com/fs-bin/click?id=aFb*BBBBBpQ&subid=&offerid=9999.2&type=5&tmpid=9999&RD_PARM1=http://example.domain.com
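
    A rough sketch of how this could look in an .htaccess file (untested; the "?=" query format is taken from the examples above, and %1 carries the RewriteCond capture into the target URL):

        RewriteEngine On
        # Capture everything after "?=" and append it as RD_PARM1.
        RewriteCond %{QUERY_STRING} ^=(.*)$
        RewriteRule ^go$ http://clk.affilite.com/fs-bin/click?id=aFb*BBBBBpQ&subid=&offerid=9999.2&type=5&tmpid=9999&RD_PARM1=%1 [R=302,L]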


  • You have an error in your SQL syntax; check the manual that corresponds to your MySQL

    - by LuisEValencia
    I am trying to run a MySQL query to find all occurrences of a text. I have a syntax error but don't know where or how to fix it. I am using SQLyog to execute this script:

        DECLARE @url VARCHAR(255)
        SET @url = '1720'

        SELECT 'select * from ' + RTRIM(tbl.name) + ' where ' + RTRIM(col.name) + ' like %' + RTRIM(@url) + '%'
        FROM sysobjects tbl
        INNER JOIN syscolumns col ON tbl.id = col.id
            AND col.xtype IN (167, 175, 231, 239) -- (n)char and (n)varchar, there may be others to include
            AND col.length > 30 -- arbitrary min length into which you might store a URL
        WHERE tbl.type = 'U' -- user defined table

    The result:

        1 queries executed, 0 success, 1 errors, 0 warnings

        Query: declare @url varchar(255) set @url = '1720' select 'select * from ' + rtrim(tbl.name) + ' where ' + rtrim(col.name) + ' like %' ...

        Error Code: 1064
        You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'declare @url varchar(255)


  • Preserving Permalinks

    - by Daniel Moth
    One of the things that gets me on a rant is websites that break permalinks. If you have posted something somewhere and there is a public URL pointing to it, that URL should never ever return a 404. You are breaking all websites that ever linked to you, and you are breaking all search engine links to your content (that others will try to follow). It is a pet peeve of mine. So when I had to move my blog, obviously I would preserve the root URL (www.danielmoth.com/Blog/), but I also wanted to preserve every URL my blog has generated over the years. To be clear, our focus here is on the URL formatting, not the content migration, which I'll talk about in my next post.

    1. The IIS7 Rewrite Module and web.config

    There are a few ways you can map an old URL to a new one (so when requests to the old URL come in, they get redirected to the new one). The new blog engine I use (dasBlog) has built-in functionality to do that (Scott refers to it here). Instead, the way I chose to address the issue was to use the IIS7 Rewrite Module. The IIS7 Rewrite Module allows redirecting URLs based on pattern matching, regular expressions and, of course, hardcoded full URLs for things that don't fall into any pattern. You can configure it visually from IIS Manager using a handy dialog that allows testing patterns against input URLs. To learn more about this technology, check out this video, the reference page and this overview blog post; all 3 pages have a collection of related resources at the bottom worth checking out too.

    All the visual configuration ends up in a web.config file at the root folder of your website. If you are on a shared hosting service, probably the only way you can use the Rewrite Module is by directly editing the web.config file. Next, I'll describe the URLs I had to map and how that manifested itself in the web.config file. What I did was create the rules locally using the GUI, and then took the generated web.config file and uploaded it to my live site. You can view my web.config here.

    2. Monthly Archives

    Observe the difference between the way the two blog engines generate this type of URL:

        Blogger: /Blog/2004_07_01_mothblog_archive.html
        dasBlog: /Blog/default,month,2004-07.aspx

    In my web.config file, the rule that deals with this is the one named "monthlyarchive_redirect".

    3. Categories

    Observe the difference between the way the two blog engines generate this type of URL:

        Blogger: /Blog/labels/Personal.html
        dasBlog: /Blog/CategoryView,category,Personal.aspx

    In my web.config file, the rule that deals with this is the one named "category_redirect".

    4. Posts

    Observe the difference between the way the two blog engines generate this type of URL:

        Blogger: /Blog/2004/07/hello-world.html
        dasBlog: /Blog/Hello-World.aspx

    In my web.config file, the rule that deals with this is the one named "post_redirect".

    Note: The decision was taken to use dasBlog URLs that do not include the date info (see the description of my Appearance settings). If we included the date info, then it would have to include the day part, which Blogger did not generate. This makes it impossible to redirect correctly and to have a single permalink for blog posts moving forward. An implication of this decision is that no two blog posts can have the same title. The tool I will describe in my next post (inelegantly) deals with duplicates, but not with triplicates or higher.

    5. Unhandled by a generic rule

    Unfortunately, the two blog engines use different rules for generating URLs for blog posts. Most of the time the conversion is as simple as the example of the previous section, where a post titled "Hello World" generates a URL with the words separated by a hyphen. Sometimes that is not the case, for example:

        /Blog/2006/05/medc-wrap-up.html
        /Blog/MEDC-Wrapup.aspx

    or

        /Blog/2005/01/best-of-moth-2004.html
        /Blog/Best-Of-The-Moth-2004.aspx

    or

        /Blog/2004/11/more-windows-mobile-2005-details.html
        /Blog/More-Windows-Mobile-2005-Details-Emerge.aspx

    In short, Blogger does not add words to the title beyond ~39 characters, it drops some words from the title generation (e.g. a, an, on, the), and it preserves hyphens that appear in the title. For this reason, we need to detect these and explicitly list them for redirects (no regular expression can help here because the full set of rules is not listed anywhere). In my web.config file, the rule that deals with this is the one named "Redirect rule1 for FullRedirects" combined with the rewriteMap named "StaticRedirects".

    Note: The tool I describe in my next post will detect all the URLs that need to be explicitly redirected and will list them in a file ready for you to copy to your web.config rewriteMap.

    6. C# code doing the same as the web.config

    I wrote some naive code that does the same thing as the web.config: given a string, it will return a new string converted according to the 3 rules above. It does not take into account the 4th case where an explicit hard-coded conversion is needed (the tool I present in the next post does take that into account).

        static string REGEX_post_redirect = "[0-9]{4}/[0-9]{2}/([0-9a-z-]+).html";
        static string REGEX_category_redirect = "labels/([_0-9a-z-% ]+).html";
        static string REGEX_monthlyarchive_redirect = "([0-9]{4})_([0-9]{2})_[0-9]{2}_mothblog_archive.html";

        static string Redirect(string oldUrl)
        {
            GroupCollection g;
            if (RunRegExOnIt(oldUrl, REGEX_post_redirect, 2, out g))
                return string.Concat(g[1].Value, ".aspx");
            if (RunRegExOnIt(oldUrl, REGEX_category_redirect, 2, out g))
                return string.Concat("CategoryView,category,", g[1].Value, ".aspx");
            if (RunRegExOnIt(oldUrl, REGEX_monthlyarchive_redirect, 3, out g))
                return string.Concat("default,month,", g[1].Value, "-", g[2], ".aspx");
            return string.Empty;
        }

        static bool RunRegExOnIt(string toRegEx, string pattern, int groupCount, out GroupCollection g)
        {
            if (pattern.Length == 0)
            {
                g = null;
                return false;
            }
            g = new Regex(pattern, RegexOptions.IgnoreCase | RegexOptions.Compiled).Match(toRegEx).Groups;
            return (g.Count == groupCount);
        }

    Comments about this post welcome at the original blog.


  • CodeIgniter Twitter Library: What is my callback URL and where can I find it?

    - by Tapha
    I would like to know where I can find my callback URL in CI. I'm quite new to it, so I'm not really sure. Here is the lib I'm using:

        <?php

        class Home extends Controller {

            function Home()
            {
                parent::Controller();
            }

            public function index()
            {
                // This is how we do a basic auth:
                // $this->twitter->auth('user', 'password');

                // Fill in your twitter oauth client keys here
                $consumer_key = '';
                $consumer_key_secret = '';

                // For this example, we're going to get and save our access_token and
                // access_token_secret in session data, but you might want to use a
                // database instead :)
                $this->load->library('session');

                $tokens['access_token'] = NULL;
                $tokens['access_token_secret'] = NULL;

                // GET THE ACCESS TOKENS
                $oauth_tokens = $this->session->userdata('twitter_oauth_tokens');
                if ( $oauth_tokens !== FALSE )
                    $tokens = $oauth_tokens;

                $this->load->library('twitter');
                $auth = $this->twitter->oauth($consumer_key, $consumer_key_secret, $tokens['access_token'], $tokens['access_token_secret']);

                if ( isset($auth['access_token']) && isset($auth['access_token_secret']) )
                {
                    // SAVE THE ACCESS TOKENS
                    $this->session->set_userdata('twitter_oauth_tokens', $auth);

                    if ( isset($_GET['oauth_token']) )
                    {
                        $uri = $_SERVER['REQUEST_URI'];
                        $parts = explode('?', $uri);

                        // Now we redirect the user since we've saved their stuff!
                        header('Location: '.$parts[0]);
                        return;
                    }
                }

                // This is where you can call a method.
                $this->twitter->call('statuses/update', array('status' => 'Testing CI Twitter oAuth sexyness by @elliothaughin'));

                // Here's the calls you can make now.
                // Sexy!
                /*
                $this->twitter->call('statuses/friends_timeline');
                $this->twitter->search('search', array('q' => 'elliot'));
                $this->twitter->search('trends');
                $this->twitter->search('trends/current');
                $this->twitter->search('trends/daily');
                $this->twitter->search('trends/weekly');
                $this->twitter->call('statuses/public_timeline');
                $this->twitter->call('statuses/friends_timeline');
                $this->twitter->call('statuses/user_timeline');
                $this->twitter->call('statuses/show', array('id' => 1234));
                $this->twitter->call('direct_messages');
                $this->twitter->call('statuses/update', array('status' => 'If this tweet appears, oAuth is working!'));
                $this->twitter->call('statuses/destroy', array('id' => 1234));
                $this->twitter->call('users/show', array('id' => 'elliothaughin'));
                $this->twitter->call('statuses/friends', array('id' => 'elliothaughin'));
                $this->twitter->call('statuses/followers', array('id' => 'elliothaughin'));
                $this->twitter->call('direct_messages');
                $this->twitter->call('direct_messages/sent');
                $this->twitter->call('direct_messages/new', array('user' => 'jamierumbelow', 'text' => 'This is a library test. Ignore'));
                $this->twitter->call('direct_messages/destroy', array('id' => 123));
                $this->twitter->call('friendships/create', array('id' => 'elliothaughin'));
                $this->twitter->call('friendships/destroy', array('id' => 123));
                $this->twitter->call('friendships/exists', array('user_a' => 'elliothaughin', 'user_b' => 'jamierumbelow'));
                $this->twitter->call('account/verify_credentials');
                $this->twitter->call('account/rate_limit_status');
                $this->twitter->call('account/rate_limit_status');
                $this->twitter->call('account/update_delivery_device', array('device' => 'none'));
                $this->twitter->call('account/update_profile_colors', array('profile_text_color' => '666666'));
                $this->twitter->call('help/test');
                */
            }
        }

        /* End of file welcome.php */
        /* Location: ./system/application/controllers/home.php */

    Thank you all.


  • How to find and fix performance problems in ORM powered applications

    - by FransBouma
    Once in a while we get requests about how to fix performance problems with our framework. As it comes down to following the same steps and looking into the same things every single time, I decided to write a blogpost about it instead, so more people can learn from this and solve performance problems in their O/R mapper powered applications. In some parts it's focused on LLBLGen Pro but it's also usable for other O/R mapping frameworks, as the vast majority of performance problems in O/R mapper powered applications are not specific for a certain O/R mapper framework. Too often, the developer looks at the wrong part of the application, trying to fix what isn't a problem in that part, and getting frustrated that 'things are so slow with <insert your favorite framework X here>'. I'm in the O/R mapper business for a long time now (almost 10 years, full time) and as it's a small world, we O/R mapper developers know almost all tricks to pull off by now: we all know what to do to make task ABC faster and what compromises (because there are almost always compromises) to deal with if we decide to make ABC faster that way. Some O/R mapper frameworks are faster in X, others in Y, but you can be sure the difference is mainly a result of a compromise some developers are willing to deal with and others aren't. That's why the O/R mapper frameworks on the market today are different in many ways, even though they all fetch and save entities from and to a database. I'm not suggesting there's no room for improvement in today's O/R mapper frameworks, there always is, but it's not a matter of 'the slowness of the application is caused by the O/R mapper' anymore. Perhaps query generation can be optimized a bit here, row materialization can be optimized a bit there, but it's mainly coming down to milliseconds. Still worth it if you're a framework developer, but it's not much compared to the time spend inside databases and in user code: if a complete fetch takes 40ms or 50ms (from call to entity object collection), it won't make a difference for your application as that 10ms difference won't be noticed. That's why it's very important to find the real locations of the problems so developers can fix them properly and don't get frustrated because their quest to get a fast, performing application failed. Performance tuning basics and rules Finding and fixing performance problems in any application is a strict procedure with four prescribed steps: isolate, analyze, interpret and fix, in that order. It's key that you don't skip a step nor make assumptions: these steps help you find the reason of a problem which seems to be there, and how to fix it or leave it as-is. Skipping a step, or when you assume things will be bad/slow without doing analysis will lead to the path of premature optimization and won't actually solve your problems, only create new ones. The most important rule of finding and fixing performance problems in software is that you have to understand what 'performance problem' actually means. Most developers will say "when a piece of software / code is slow, you have a performance problem". But is that actually the case? If I write a Linq query which will aggregate, group and sort 5 million rows from several tables to produce a resultset of 10 rows, it might take more than a couple of milliseconds before that resultset is ready to be consumed by other logic. 
The second most important rule you have to understand is based on the old saying "Penny wise, Pound foolish": the part which takes e.g. 5% of the total time T for a given task isn't worth optimizing if you have another part which takes a much larger share of the total time T for that same task. Optimizing parts which are relatively insignificant for the total time taken is not going to bring you better results overall, even if you totally optimize that part away. This is the core reason why analysis of the complete set of application parts which participate in a given task is key to being successful in solving performance problems: no analysis -> no problem -> no solution.

One warning up front: hunting for performance will always include making compromises. Fast software can be made maintainable, but if you want to squeeze as much performance out of your software as possible, you will inevitably be faced with the dilemma of compromising one or more of {readability, maintainability, features} for the extra performance you think you'll gain. It's then up to you to decide whether it's worth it. In almost all cases it's not. The reason for this is simple: the vast majority of performance problems can be solved by implementing the proper algorithms, the ones with proven Big O characteristics, so you know the performance you'll get, plus you know the algorithm will work. The time taken by the code implementing the algorithm is inevitable: you already implemented the best algorithm. You might find some optimizations at the technical level, but in general these are minor. Let's look at the four steps to see how they guide us through the quest to find and fix performance problems.

Isolate

The first thing you need to do is to isolate the areas in your application which are assumed to be slow. For example, if your application is a web application and a given page is taking several seconds or even minutes to load, it's a good candidate to check out. It's important to start with the isolate step because it allows you to focus on a single code path per area, with a clear begin and end, and ignore the rest. The rest of the steps are taken per identified problematic area. Keep in mind that isolation focuses on tasks in an application, not code snippets. A task is something that's started in your application by either another task, the user or another program, and has a beginning and an end. You can see a task as a piece of functionality offered by your application.
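As a first, coarse isolation pass you can simply time a complete task from begin to end before bringing in a profiler. A minimal sketch, where the task name and the repository call in the usage comment are placeholders for whatever task you're isolating:

using System;
using System.Diagnostics;

public static class TaskTimer
{
    // Wraps a complete task (begin to end) in a Stopwatch, so you can verify
    // which task is actually slow before digging into its code path.
    public static T Measure<T>(string taskName, Func<T> task)
    {
        var sw = Stopwatch.StartNew();
        try
        {
            return task();
        }
        finally
        {
            sw.Stop();
            Console.WriteLine("{0} took {1} ms", taskName, sw.ElapsedMilliseconds);
        }
    }
}

// Usage, e.g.:
// var orders = TaskTimer.Measure("LoadOrdersPage",
//     () => repository.GetOrdersForCustomer(customerId));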
Analyze

Once you've determined the problem areas, you have to perform analysis on the code paths of each area, to see where the performance problems occur and which areas are not the problem. This is a multi-layered effort: an application which uses an O/R mapper typically consists of multiple parts: there's likely some kind of interface (web, webservice, Windows etc.), a part which controls the interface and business logic, the O/R mapper part and the RDBMS, all connected with either a network or inter-process connections provided by the OS or other means. Each of these parts, including the connectivity plumbing, eats up a part of the total time it takes to complete a task, e.g. load a webpage with all orders of a given customer X. To understand which parts participate in the task / area we're investigating and how much they contribute to the total time taken to complete the task, analysis of each participating part is essential.

Start with the code you wrote which starts the task, analyze the code and track the path it follows through your application. What does the code do along the way? Verify whether it's correct or not. Analyze whether you have implemented the right algorithms in your code for this particular area. Remember we're looking at one area at a time, which means we're ignoring all other code paths: just the code path of the current problematic area, from begin to end and back. Don't dig in and start optimizing at the code level just yet. We're just analyzing. If your analysis reveals big architectural stupidity, it's perhaps a good idea to rethink the architecture at this point. For the rest, we're analyzing, which means we collect data about what could be wrong, for each participating part of the complete application.

Reviewing the code you wrote is a good tool to get a deeper understanding of what is going on for a given task, but ultimately it lacks precision and an overview of what really happens: humans aren't good code interpreters, computers are. We therefore need to utilize tools to get a deeper understanding of which parts contribute how much time to the total task, triggered by which other parts, and, for example, how many times they are called. There are two different kinds of tools which are necessary: .NET profilers and O/R mapper / RDBMS profilers.

.NET profiling

.NET profilers (e.g. dotTrace by JetBrains or ANTS by Red Gate Software) show exactly which pieces of code are called, how many times they're called, and the time it took to run that piece of code, at the method level and sometimes even at the line level. The .NET profilers are essential tools for understanding whether the time taken to complete a given task / area in your application is consumed by .NET code, where exactly in your code, the path to that code, and how many times that code was called by other code; they thus reveal where the hotspots are located: the areas where a solution can be found. Importantly, they also reveal which areas can be left alone: remember our penny wise, pound foolish saying: if a profiler reveals that a group of methods is fast, or doesn't contribute much to the total time taken for a given task, ignore them. Even if the code in them is perhaps complex and looks like a candidate for optimization: you can work all day on that, it won't matter.

As we're focusing on a single area of the application, it's best to start profiling right before you actually activate the task/area. Most .NET profilers support this by starting the application without starting the profiling procedure just yet. You navigate to the particular part which is slow, start profiling in the profiler, perform the actions in your application which are considered slow, and afterwards you get a snapshot in the profiler. The snapshot contains the data collected by the profiler during the slow action, so most data is produced by code in the area to investigate. This is important, because it allows you to stay focused on a single area.
O/R mapper and RDBMS profiling

.NET profilers give you good insight into the .NET side of things, but not into the RDBMS side of the application. As this article is about O/R mapper powered applications, we're also looking at databases and at the software making it possible to consume the database in your application: the O/R mapper. To understand how much the various parts of the O/R mapper and the database contribute to the total time taken for task T, we need different tools. There are two kinds of tools focusing on O/R mapper and database performance profiling: O/R mapper profilers and RDBMS profilers.

For O/R mapper profilers, you can look at LLBLGen Prof by Hibernating Rhinos or the Linq to Sql/LLBLGen Pro profiler by Huagati. Hibernating Rhinos also has profilers for other O/R mappers like NHibernate (NHProf) and Entity Framework (EFProf), which work the same as LLBLGen Prof. For RDBMS profilers, you have to check whether the RDBMS vendor offers a profiler. For example, for SQL Server the profiler is shipped with SQL Server; for Oracle it's built into the RDBMS; however, there are also 3rd party tools. Which tool you're using isn't really important; what's important is that you get insight into which queries are executed during the task / area we're currently focused on, and how long they took. Here, the O/R mapper profilers have an advantage, as they collect the time it took to execute the query from the application's perspective, so they also collect the time it took to transport data across the network. This is important because a query which returns a massive resultset, or a resultset with large blob/clob/ntext/image fields, takes more time to get transported across the network than a small resultset, and a database profiler doesn't take this into account most of the time.

Another tool to use in this case, which is more low level and not supported by all O/R mappers (though LLBLGen Pro and NHibernate do support it), is tracing: most O/R mappers offer some form of tracing or logging system which you can use to collect the SQL generated and executed, and often also other activity behind the scenes. While tracing can produce a tremendous amount of data in some cases, it also gives insight into what's going on.
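To get a feel for the execution-versus-transport split described above without a profiler, you can time the ExecuteReader call and the draining of the DataReader separately. A rough sketch using plain ADO.NET; the connection string and query are placeholders, and the split is approximate since the provider may already buffer some rows:

using System;
using System.Data.SqlClient;
using System.Diagnostics;

public static class QueryTimer
{
    // Times the ExecuteReader call and the draining of the DataReader separately.
    // Draining includes transporting the rows across the network, which is the
    // part an RDBMS-side profiler typically doesn't show.
    public static void TimeQuery(string connectionString, string sql)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            connection.Open();
            var sw = Stopwatch.StartNew();
            using (var reader = command.ExecuteReader())
            {
                long executeMs = sw.ElapsedMilliseconds;   // time until results start coming back
                int rows = 0;
                while (reader.Read())
                {
                    rows++;                                // reading pulls the data over the wire
                }
                sw.Stop();
                Console.WriteLine("execute: {0} ms, total incl. transport: {1} ms, rows: {2}",
                                  executeMs, sw.ElapsedMilliseconds, rows);
            }
        }
    }
}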
Interpret

After we've completed the analysis step, it's time to look at the data we've collected. We've done code reviews to see whether we've done anything stupid, which parts actually take part in the task, and whether the proper algorithms have been implemented. We've done .NET profiling to see which parts are choke points and how much time they contribute to the total time taken to complete the task we're investigating. We've performed O/R mapper profiling and RDBMS profiling to see which queries were executed during the task, how many queries were generated and executed, and how long they took to complete, including network transportation. All this data reveals two things: which parts are big contributors to the total time taken, and which parts are irrelevant. Both aspects are very important. The parts which are irrelevant (i.e. don't contribute significantly to the total time taken) can be ignored from now on; we won't look at them. The parts which contribute a lot to the total time taken are important to look at.

We now have to look first at the .NET profiler results, to see whether the time taken is consumed in our own code, in .NET framework code, in the O/R mapper itself or somewhere else. For example, if most of the time is consumed by DbCommand.ExecuteReader, the time it took to complete the task depends on the time it takes to fetch the data from the database. If there was just 1 query executed, according to tracing or O/R mapper profilers / RDBMS profilers, check whether that query is optimal, uses indexes or has to deal with a lot of data.

Interpret means that you follow the path from begin to end through the data collected and determine where, along the path, the most time is contributed. It also means that you have to check whether this was expected or is totally unexpected. My previous example of the 10-row resultset of a query which groups millions of rows will likely reveal that a long time is spent inside the database and almost no time is spent in the .NET code, meaning the RDBMS part contributes the most to the total time taken; the rest is, compared to that time, irrelevant. Considering the vastness of the source data set, it's expected this will take some time. However, does it need tweaking? Perhaps all possible tweaks are already in place. In the interpret step you then have to decide whether further action in this area is necessary, based on what the analysis results show: if the analysis results were unexpected and there is room for improvement in the area which contributes the most to the total time taken, action should be taken. If not, you can only accept the situation and move on. In all cases, document your decision together with the analysis you've done. If you decide that the perceived performance problem is actually expected due to the nature of the task performed, it's essential that in the future, when someone else looks at the application and starts asking questions, you can answer them properly; new analysis is then only necessary if the situation changes.

Fix

After interpreting the analysis results, you've concluded that some areas need adjustment. This is the fix step: you're actively correcting the performance problem with proper action targeted at the real cause. In many cases related to O/R mapper powered applications, it means you'll use different features of the O/R mapper to achieve the same goal, or apply optimizations at the RDBMS level. It could also mean you apply caching inside your application (compromising memory consumption for performance) to avoid unnecessarily re-querying data and re-consuming the results (a minimal sketch follows below). After applying a change, it's key that you redo the analysis and interpretation steps: compare the results and expectations with what you had before, to see whether your actions had any effect or whether they moved the problem to a different part of the application. Don't fall into the trap of doing partial analysis: do the full analysis again, .NET profiling and O/R mapper / RDBMS profiling. It might very well be that the changes you've made make one part faster but another part significantly slower, in such a way that the overall problem hasn't changed at all.

Performance tuning is about dealing with compromises and making choices: to use one feature over the other, to accept a higher memory footprint, to go away from the strict-OO path and execute queries directly onto the RDBMS; these are choices and compromises which will cross your path if you want to fix performance problems with respect to O/R mappers or data-access and databases in general. In most cases it's not a big issue: alternatives are often good choices too, and the compromises aren't that hard to deal with. What is important is that you document why you made a choice, a compromise: which analysis data and which interpretation led you to the choice made. This is key for good maintainability in the years to come.
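As an illustration of the caching compromise mentioned in the fix step, here's a minimal sketch of a per-key cache you could put in front of a fetch. The repository delegate in the usage comment is hypothetical, and note the trade-off: memory consumption goes up and cached data can go stale:

using System;
using System.Collections.Concurrent;

// Trades memory consumption (and possible staleness) for fewer identical queries.
public class SimpleCache<TKey, TValue>
{
    private readonly ConcurrentDictionary<TKey, TValue> _items =
        new ConcurrentDictionary<TKey, TValue>();

    // Returns the cached value for the key, or fetches and caches it on a miss.
    public TValue GetOrFetch(TKey key, Func<TKey, TValue> fetch)
    {
        return _items.GetOrAdd(key, fetch);
    }

    // Call this when the underlying row changes, otherwise the cache serves stale data.
    public void Invalidate(TKey key)
    {
        TValue removed;
        _items.TryRemove(key, out removed);
    }
}

// Usage, e.g.:
// var customer = customerCache.GetOrFetch(customerId, id => repository.FetchCustomer(id));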
Most common performance problems with O/R mappers

Below is an incomplete list of common performance problems related to data-access / O/R mappers / RDBMS code. It will help you fix the hotspots you found in the interpretation step.

SELECT N+1 (lazy-loading specific): performance bottlenecks triggered by lazy loading. Consider a list of Orders bound to a grid. You have a field mapped onto a related field in Order, Customer.CompanyName. Showing this column in the grid will make the grid fetch (indirectly) the Customer row for each Order row. This means you'll get for the single list not 1 query (for the orders) but 1 + (the number of orders shown) queries. To solve this: use eager loading, using a prefetch path to fetch the customers with the orders. SELECT N+1 is easy to spot with an O/R mapper profiler or RDBMS profiler: if you see a lot of identical queries executed at once, you have this problem.
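As an illustration, here's how the lazy versus eager variants look in Entity Framework terms; LLBLGen Pro's prefetch path API differs in syntax but not in concept, and MyContext, Order and Customer are hypothetical mapped types:

using System.Collections.Generic;
using System.Data.Entity;   // DbContext API; the lambda Include() lives in this namespace
using System.Linq;

public static class OrderQueries
{
    // Lazy variant: a grid touching o.Customer.CompanyName afterwards fires one
    // extra customer query per order row: the SELECT N+1 problem.
    public static List<Order> LoadOrdersLazily(MyContext ctx)
    {
        return ctx.Orders.ToList();
    }

    // Eager variant: customers are fetched together with the orders up front,
    // so binding the CompanyName column causes no additional queries.
    public static List<Order> LoadOrdersWithCustomers(MyContext ctx)
    {
        return ctx.Orders.Include(o => o.Customer).ToList();
    }
}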
Prefetch paths using many path nodes, or sorting, or limiting (eager loading problem). Prefetch paths can help with performance, but as 1 query is executed per node, the amount of data fetched in a child node can be bigger than you think. Also consider that the data in every node is merged on the client within the parent. This is fast, but it can also take some time if you fetch massive amounts of entities. If you keep fetches small, you can use tuning parameters like the ParameterizedPrefetchPathThreshold setting to get more optimal queries.

Deep inheritance hierarchies of type Target Per Entity/Type. If you use inheritance of type Target per Entity / Type (each type in the inheritance hierarchy is mapped onto its own table/view), fetches will join subtype and supertype tables in many cases, which can lead to a lot of performance problems if the hierarchy has many types. With this problem, keep inheritance to a minimum if possible, or switch to a hierarchy of type Target Per Hierarchy, which means all entities in the inheritance hierarchy are mapped onto the same table/view. Of course this has its own set of drawbacks, but it's a compromise you might want to take.

Fetching massive amounts of data by fetching large lists of entities. LLBLGen Pro supports paging (and limiting the # of rows returned), which is often key to processing large sets of data. Use paging on the RDBMS if possible (so a query is executed which returns only the rows in the page requested). When using paging in a web application, be sure that you switch server-side paging on in the datasource control used. In this case, paging on the grid alone is not enough: it can lead to fetching a lot of data which is then loaded into the grid and paged there. Note that analyzing queries for paging could lead to the false assumption that paging doesn't occur, e.g. when the query contains a field of type ntext/image/clob/blob and DISTINCT can't be applied while it should have been (e.g. due to a join): the datareader will do DISTINCT filtering on the client. This is a little slower, but it does perform paging on the datareader, so it won't fetch all rows even if the query suggests it does.

Fetching massive amounts of data because blob/clob/ntext/image fields aren't excluded. LLBLGen Pro supports field exclusion for queries. You can exclude fields (also in prefetch paths) per query to avoid fetching all fields of an entity, e.g. when you don't need them for the logic consuming the resultset. Excluding fields can greatly reduce the amount of time spent on data transport across the network. Use this optimization if you see that there's a big difference between the query execution time on the RDBMS and the time reported by the .NET profiler for the ExecuteReader method call.

Doing client-side aggregates/scalar calculations by consuming a lot of data. If possible, try to formulate a scalar query or group by query using the projection system or GetScalar functionality of LLBLGen Pro, to do the data consumption on the RDBMS server. It's far more efficient to process data on the RDBMS server than to first load it all in memory and then traverse the data in-memory to calculate a value.

Using .ToList() constructs inside Linq queries. It might be that you use .ToList() somewhere in a Linq query which makes the query run partially in-memory. Example: var q = from c in metaData.Customers.ToList() where c.Country=="Norway" select c; This will actually fetch all customers in-memory and do the filtering in-memory, as the Linq query is defined on an IEnumerable<T>, and not on an IQueryable<T>. Linq is nice, but it can often be a bit unclear where some parts of a Linq query might run; a corrected version is sketched below.
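The corrected version keeps the filter on the IQueryable<T> so the Linq provider can translate it into a WHERE clause. A sketch, where Customer is a hypothetical mapped entity and the source is whatever IQueryable your O/R mapper exposes:

using System.Collections.Generic;
using System.Linq;

public static class CustomerQueries
{
    // No .ToList() on the source: the filter stays part of the IQueryable, so the
    // provider translates it into "... WHERE Country = 'Norway'" and only the
    // matching rows are materialized and transported across the network.
    public static List<Customer> GetNorwegianCustomers(IQueryable<Customer> customers)
    {
        var q = from c in customers
                where c.Country == "Norway"
                select c;

        return q.ToList();   // materialize only at the very end, after composing the query
    }
}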
Fetching all entities to delete into memory first. To delete a set of entities, it's rather inefficient to first fetch them all into memory and then delete them one by one. It's more efficient to execute a DELETE FROM ... WHERE query on the database directly, to delete the entities in one go. LLBLGen Pro supports this feature, and so do some other O/R mappers. It's not always possible to do this operation in the context of an O/R mapper, however: if an O/R mapper relies on a cache, these kinds of operations are likely not supported, because they make it impossible to track whether an entity is actually removed from the DB and thus can be removed from the cache.

Fetching all entities to update with an expression into memory first. Similar to the previous point: it is more efficient to update a set of entities directly with a single UPDATE query using an expression, instead of fetching the entities into memory first, updating them in a loop and then saving them. It might however be a compromise you don't want to take, as it works around the idea of having an object graph in memory which is manipulated, and instead makes the code fully aware that there's an RDBMS somewhere.

Conclusion

Performance tuning is almost always about compromises and making choices. It's also about knowing where to look and how the systems in play behave and should behave. The four steps I provided should help you stay focused on the real problem and lead you towards the solution. Knowing how to optimally use the systems participating in your own code (.NET framework, O/R mapper, RDBMS, network/services) is key for success, as well as knowing what's going on inside the application you built. I hope you'll find this guide useful in tracking down performance problems and dealing with them effectively.

    Read the article
