Search Results

Search found 33009 results on 1321 pages for 'google index'.


  • How to parse JSON data from the web faster [closed]

    - by Kaidul Islam Sazal
    I have a JSON inventory file, inventory.json, on the server like this:

        [ { "body" : "SUV", "color" : { "ext" : "White diamond pearl", "int" : "Taupe" }, "id" : "276181", "make" : "Acura", "miles" : 35949, "model" : "RDX", "pic" : [ { "full" : "http://images1.dealercp.com/90961/000JNBD/001_0292.jpg" } ], "power" : { "drive" : "Front wheel drive", "eng" : "2.3L DOHC PGM-FI 16-VALVE", "trans" : "Automatic" }, "price" : { "net" : 29488 }, "stock" : "6942", "trim" : "AWD 4dr Tech Pkg SUV", "vin" : "5J8TB2H53BA000334", "year" : 2011 },
          { "body" : "Sedan", "color" : { "ext" : "Premium white pearl", "int" : "Taupe" }, "id" : "275622", "make" : "Acura", "miles" : 40923, "model" : "TSX", "pic" : [ { "full" : "http://images1.dealercp.com/90961/000JMC6/001_1765.jpg" } ], "power" : { "drive" : "Front wheel drive", "eng" : "2.4L L4 MPI DOHC 16V", "trans" : "Automatic" }, "price" : { "net" : 22288 }, "stock" : "6945", "trim" : "4dr Sdn I4 Auto Sedan", "vin" : "JH4CU2F66AC011933", "year" : 2010 } ]

    Those are two records; there are almost 5,000 records like this. I parse the JSON like this:

        var url = "inventory/inventory.json";
        $.getJSON(url, function(data){
            $.each(data, function(index, item){ //straight-forward loop
                if(item.year == 2012) {
                    $('#desc').append(item.make + ' ' + item.model + ' ' + '<br/>' + item.price.net + '<br/>' + item.pic[0].full);
                }
            });
        });

    This works fine, but the searching and fetching is a little slow because there are already 5,000 records and the count grows every day. It is a straightforward, brute-force loop over the whole data set. Is there a more time-efficient way to find the matching records than a linear scan?
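
    A hedged sketch of one common approach (my illustration, not from the post): $.getJSON already parses the JSON, so the cost is the repeated linear scan. Building a lookup table keyed by year once, then reusing it, makes each later lookup touch only the handful of matching records:

        var url = "inventory/inventory.json";
        var byYear = {}; // year -> array of records for that year

        $.getJSON(url, function (data) {
            // one pass to build the index
            $.each(data, function (i, item) {
                (byYear[item.year] = byYear[item.year] || []).push(item);
            });
            // later lookups no longer scan all 5,000 records
            $.each(byYear[2012] || [], function (i, item) {
                $('#desc').append(item.make + ' ' + item.model + '<br/>' +
                                  item.price.net + '<br/>' + item.pic[0].full);
            });
        });

    If the list keeps growing, filtering on the server instead (e.g. a hypothetical inventory.php?year=2012 endpoint) avoids shipping all 5,000 records to the browser at all.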

    Read the article

  • How do I keep a 3D model on the screen in OpenGL?

    - by NoobScratcher
    I'm trying to keep a 3D model on the screen by placing my glDrawElements calls inside the draw function, with the declarations at the top of the .cpp. When I render the model, the model attaches itself to the current vertex buffer object. This is because my whole graphical user interface is in 2D quads except the window frame. Is there a way to avoid this, or any common causes of it? Creating the file object:

        int index = IndexAssigner(1, 1);
        //make a fileobject and store list and the index of that list in a c string
        ifstream file (list[index].c_str() );
        //Make another string
        //string line;
        points.push_back(Point());
        Point p;
        int face[4];

    Model rendering code:

        int numfloats = 4;
        float* point=reinterpret_cast<float*>(&points[0]);
        int num_bytes=numfloats*sizeof(float);
        cout << "Size Of Point" << sizeof(Point) << endl;
        GLuint vertexbuffer;
        glGenVertexArrays(1, &vao[3]);
        glGenBuffers(1, &vertexbuffer);
        glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
        glBufferData(GL_ARRAY_BUFFER, points.size()*sizeof(points), points.data(), GL_STATIC_DRAW);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, num_bytes, &points[0]);
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, points.size(), &points[0]);
        glEnableClientState(GL_INDEX_ARRAY);
        glIndexPointer(GL_FLOAT, faces.size(), faces.data());
        glEnableVertexAttribArray(0);
        glDrawElements(GL_QUADS, points.size(), GL_UNSIGNED_INT, points.data());
        glDrawElements(GL_QUADS, faces.size(), GL_UNSIGNED_INT, faces.data());
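
    For reference, a hedged sketch (my assumption of the usual pattern, not code from the post) that isolates the model's buffers in their own VAO and unbinds it after use, so the 2D GUI quads and the 3D model never share buffer state. It assumes Point is three tightly packed floats and faces holds unsigned int indices:

        // one-time setup
        GLuint vao, vbo, ibo;
        glGenVertexArrays(1, &vao);
        glBindVertexArray(vao);

        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, points.size() * sizeof(Point), points.data(), GL_STATIC_DRAW);
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Point), (void*)0);

        glGenBuffers(1, &ibo);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, faces.size() * sizeof(unsigned int), faces.data(), GL_STATIC_DRAW);

        glBindVertexArray(0); // unbind: GUI drawing can no longer clobber this state

        // per frame
        glBindVertexArray(vao);
        glDrawElements(GL_QUADS, faces.size(), GL_UNSIGNED_INT, 0); // 0 = use the bound index buffer
        glBindVertexArray(0);

    The key design point is that vertex and index data are uploaded once and referenced through the VAO, rather than passing client-side pointers to glDrawElements while a VBO is bound.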

    Read the article

  • My new favourite traceflag

    - by Dave Ballantyne
    As we are all aware, there are a number of traceflags: some documented, some semi-documented and some completely undocumented. Here is an undocumented one that Paul White (b|t) mentioned almost as an aside in one of his excellent blog posts. Much has been written about residual predicates and how a predicate can be pushed into a seek/scan operation. This is a good thing to happen; it saves a lot of processing. For the uninitiated, though: if we have a simple SELECT statement (of the shape sketched after this post), the process SQL Server goes through to resolve it is:

    1. The index IX_Person_LastName_FirstName_MiddleName is navigated to find the first "Smith".
    2. For each "Smith", the middle name is checked for being null.

    Two operations! And the execution plan doesn't fully represent all the work being undertaken. As you can see, there is only a single seek operation; the work undertaken to resolve the condition "MiddleName is not null" has been pushed into it. This can be seen in the properties: "Seek predicate" is how the index has been navigated, and "Predicate" is the condition run over every row. A scan inside a seek! So the question is: how many rows were resolved by the seek, and how many by the scan? How many rows did the filter remove? Wouldn't it be nice if this operation could be split? That's exactly what traceflag 9130 does. Executing the query with it changes the plan rather dramatically, and should change how we think about the index seek itself. A Filter operator has been added and, unsurprisingly, its condition is "MiddleName is not null". So it is now evident that the seek operation found 103 Smiths, and 60 of those Smiths had a non-null MiddleName. This traceflag has no place on a production system; don't even think about it.
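
    For reference, a hedged reconstruction of a query of that shape (the table and query are my assumptions based on the index name, which matches AdventureWorks; the original post shows them only as screenshots), with the traceflag enabled per-query rather than server-wide:

        SELECT FirstName, MiddleName, LastName
        FROM   Person.Person
        WHERE  LastName = N'Smith'
          AND  MiddleName IS NOT NULL
        OPTION (QUERYTRACEON 9130); -- expose the residual predicate as an explicit Filter operator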

    Read the article

  • emacs keybindings

    - by Max
    I have read a lot about vim and emacs and how they make you much more productive, but I didn't know which one to pick. Finally, when I decided to teach myself Common Lisp, the decision was straightforward: everybody says there's no better editor for Common Lisp than emacs + SLIME. So I started with the emacs tutorial, and immediately I ran into something that seems very unproductive to me. I'm talking about the key bindings for cursor movement: forward/backward are Ctrl+f and Ctrl+b; up/down are Ctrl+p and Ctrl+n. I find these bindings very strange. I assume that fingers should be on their home rows (am I wrong here?), so to move the cursor forward or backward I should use my left index finger, and for up and down my right pinky and right index fingers. When working with any Windows IDE or text editor, to navigate text I usually place my right hand so that my thumb is on the right Ctrl and my index, ring and middle fingers are on the cursor keys. From this position it is very easy and comfortable to move the cursor: I can do one-character moves with my three right fingers, or press Ctrl with my right thumb and do word-moves instead. I can also press Shift with my left pinky and do single-character or word selections. It is also a very comfortable position for reaching the PgUp, PgDn, Home, End, Delete and Backspace keys with my right hand, so I have even more navigation and selection possibilities. I understand that the decision not to use cursor keys lets emacs work over remote terminal sessions where those keys are not supported, but I still find the choice of cursor keys very unfortunate. Why not use j, k, i, l instead? That way I could use my right hand without much finger stretching. So how is emacs more productive? What am I doing wrong?
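
    For what it's worth, a minimal Emacs Lisp sketch (my illustration, not from the question) of the rebinding the poster is describing; note that these Meta chords shadow standard commands (M-l is downcase-word, M-j is indent-new-comment-line, M-i is tab-to-tab-stop, M-k is kill-sentence):

        ;; a sketch: Meta + j/l/i/k for left/right/up/down
        (global-set-key (kbd "M-j") 'backward-char)
        (global-set-key (kbd "M-l") 'forward-char)
        (global-set-key (kbd "M-i") 'previous-line)
        (global-set-key (kbd "M-k") 'next-line)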

    Read the article

  • Sharding / indexing strategy for multi-faceted search

    - by Graham
    I'm currently thinking about our database structure and how to modify it for scale. Specifically, we're thinking about using ElasticSearch to provide our search functionality. One common pattern with ElasticSearch seems to be the 'user-routing' pattern; that is, using routing to ensure that any one user's data resides on the same shard. This is great for client-specific search, e.g. Gmail. Our application has a constraint such that any user will have a maximum of a few thousand documents, so this pattern seems like a good candidate. However, our search needs to work across all users, as well as targeting a specific user (so I might search my content, Alice's content, or all content). Similarly, we need to provide full-text search across any timeframe, from recent months to several years ago. I'm thinking of combining the 'user-routing' and 'index-per-time-interval' patterns:

    • I create an index for each month.
    • By default, searches are aliased against the most recent X months.
    • If no results are found, we search against the previous X months.
    • As we grow, we can reduce the interval X.
    • Each document is routed by the user ID.

    So this should let us do the following:

    • Search by user: this searches all indices, but only 1 shard in each (thanks to routing).
    • Search by time: this searches ~2 indices (by default) across all shards.

    Is this a reasonable approach, considering we may scale to multi-million+ documents? Or should I be denormalizing the data somehow, so that user searches are performed on a totally separate index from date searches? Thanks for any pros and cons of the above scenario.
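
    A hedged sketch of the combined pattern (the index and field names are my inventions, and the exact REST syntax varies by ElasticSearch version):

        # index a document into its month, routed by its owner's user ID
        curl -XPUT 'localhost:9200/content-2012-06/doc/1?routing=alice' -d '{
            "user": "alice", "body": "full text here", "created": "2012-06-12"
        }'

        # one user's content: spans all monthly indices, but only one shard per index
        curl -XGET 'localhost:9200/content-*/doc/_search?routing=alice' -d '{
            "query": { "match": { "body": "search terms" } }
        }'

        # all users, recent window only: ~2 indices, all shards
        curl -XGET 'localhost:9200/content-2012-05,content-2012-06/doc/_search' -d '{
            "query": { "match": { "body": "search terms" } }
        }'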

    Read the article

  • exception occurred for backend host '127.0.0.1/7001/7001': 'CONNECTION_REFUSED [os error=0, line 1715 of URL.cpp]: Error connecting to host 127.0.0.1:7001'

    - by Vijaya Moderator -Oracle
    When you hit the WebLogic Server via a proxy server, you may see errors like the ones below:

        [09/Jun/2014:06:05:50] failure ( 3284): for host 127.0.0.1 trying to GET /.../soundlink_wireless_speaker/index.jsp, wl-proxy reports: exception occurred for backend host '127.0.0.1/7001/0': 'PROTOCOL_ERROR [line 835 of URL.cpp]: Backend Server not responding'
        [09/Jun/2014:06:05:50] failure ( 3284): for host 127.0.0.1 trying to GET /.../soundlink_wireless_speaker/index.jsp, wl-proxy reports: exception occurred for backend host '127.0.0.1/7001/0': 'PROTOCOL_ERROR [line 835 of URL.cpp]: Backend Server not responding'
        [09/Jun/2014:06:05:51] failure ( 3284): for host 127.0.0.1 trying to GET /.../soundlink_wireless_speaker/index.jsp, wl-proxy reports: exception occurred for backend host '127.0.0.1/7001/7001': 'CONNECTION_REFUSED [os error=0, line 1715 of URL.cpp]: Error connecting to host 127.0.0.1:7001'

    To solve the issue:

    1. Check whether a firewall is blocking the port by executing: telnet 127.0.0.1 7001
    2. Also try accessing using the hostname instead of the IP address.

    If it errors out like below:

        Microsoft Telnet> o 127.0.0.1 7001
        connecting to 127.0.0.1

    with a warning thrown at the WebLogic console: <BEA-000449> <Closing socket as no data read from it on 127.0.0.1:54,356 during the configured idle timeout of 5 secs>, test the same again with the firewall disabled. The steps are:

    1) Go to Start -> Run.
    2) Type services.msc.
    3) In services.msc, search for "Windows Firewall", right-click "Windows Firewall" and select Stop.
    4) Access the WebLogic URL.

    If the issue is resolved after disabling the firewall, ask your OS system administrator to open port 7001 at the firewall so the application can be accessed.

    Read the article

  • JetBrains release WebStorm 5.0

    - by TATWORTH
    At http://www.jetbrains.com/webstorm/whatsnew/index.html?WS50ROW, JetBrains have announced the release of WebStorm 5.0, an IDE that brings the ease of code writing you get with ReSharper in VB.NET and C# to JavaScript, CSS and LESS. (There are some more details at http://blog.jetbrains.com/webide/2012/08/liveedit-plugin-features-in-detail/) Code completion in JavaScript, CSS and LESS is a very welcome feature, and I look forward to trying out WebStorm. The download at http://www.jetbrains.com/webstorm/download/index.html comes with a free 30-day trial. Price information is at http://www.jetbrains.com/webstorm/buy/index.jsp; note that if you are an open-source developer, you can apply for a free license. The price of a personal license, at £23 + VAT, is a no-brainer, and a commercial license would pay for itself in a few days of the increased productivity this tool brings. WebStorm currently requires Google Chrome to run. Like ReSharper, it appears to be a very able tool. It includes tools such as:

    • XSLT debugging
    • JSLint for checking for JavaScript errors
    • JavaScript debugging
    • JavaScript unit testing (including code coverage)
    • JavaScript folding regions
    • CoffeeScript support

    I suggest that you try WebStorm 5.0.

    Read the article

  • Is this the correct approach to an OOP design structure in php?

    - by Silver89
    I'm converting a procedural site to an OOP design to make the code more manageable in the future, and so far I have created the following structure:

        /classes
        /templates
        index.php

    With these classes:

        ConnectDB
        Games
        System
        User
          - Moderator
          - Administrator

    In index.php I have code that detects whether any $_GET values are posted, to determine which page content to build (it's early, so there's only one example and no default):

        function __autoload($className) {
            require "classes/".strtolower($className).".class.php";
        }

        $db = new Connect;
        $db->connect();
        $user = new User();

        if(isset($_GET['gameId'])) {
            System::buildGame($gameId);
        }

    This then runs the buildGame function in the System class, which looks like the following and then uses getters in the Game class to return values, such as $game->getTitle() in the template file templates/play.php:

        function buildGame($gameId){
            $game = new Game($gameId);
            $game->setRatio(900, 600);
            require 'templates/play.php';
        }

    I also have .htaccess rules so that the actual game page URL works instead of passing the parameters to index.php. Are there any major errors in how I'm setting this up, or do I have the general idea of OOP correct?
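
    One hedged note from me (not raised in the post): __autoload() allows only a single autoloader and was later deprecated; spl_autoload_register() is the more idiomatic equivalent (PHP 5.3+ for the closure form):

        <?php
        // a sketch: the same autoloading, registered through SPL
        spl_autoload_register(function ($className) {
            require "classes/" . strtolower($className) . ".class.php";
        });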

    Read the article

  • URL is generating a /#!/splash-page

    - by user32642
    My site for some reason is generating a shebang - /#!/splash-page - in the URL. For example, when I type www.modernvintage1005.com, the browser returns www.modernvintage1005.com/#!/splash-page, and every subsequent page is /#!/about, /#!/contact, and so forth. There's absolutely nothing on Google about this. There is a lot of rewrite help for eliminating index.php from the home page, but that's it. How do I rewrite it to just say domain.com and domain.com/about.html, etc.? Here is my .htaccess file if you need to see it:

        # Rewrite Rule
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>

        # compress text, html, javascript, css, xml:
        <IfModule mod_deflate.c>
        AddOutputFilterByType DEFLATE text/plain
        AddOutputFilterByType DEFLATE text/html
        AddOutputFilterByType DEFLATE text/xml
        AddOutputFilterByType DEFLATE text/css
        AddOutputFilterByType DEFLATE application/xml
        AddOutputFilterByType DEFLATE application/xhtml+xml
        AddOutputFilterByType DEFLATE application/rss+xml
        AddOutputFilterByType DEFLATE application/javascript
        AddOutputFilterByType DEFLATE application/x-javascript
        AddType x-font/otf .otf
        AddType x-font/ttf .ttf
        AddType x-font/eot .eot
        AddType x-font/woff .woff
        AddType image/x-icon .ico
        AddType image/png .png
        </IfModule>

        ## EXPIRES CACHING ##
        <IfModule mod_expires.c>
        ExpiresActive On
        ExpiresByType image/jpg "access 1 year"
        ExpiresByType image/jpeg "access 1 year"
        ExpiresByType image/gif "access 1 year"
        ExpiresByType image/png "access 1 year"
        ExpiresByType text/css "access 1 month"
        ExpiresByType application/pdf "access 1 month"
        ExpiresByType text/x-javascript "access 1 month"
        ExpiresByType application/x-shockwave-flash "access 1 month"
        ExpiresByType image/x-icon "access 1 year"
        ExpiresDefault "access 2 days"
        </IfModule>
        ## EXPIRES CACHING ##

    Read the article

  • How to quickly search through a very large list of strings / records on a database

    - by Giorgio
    I have the following problem: I have a database containing more than 2 million records. Each record has a string field X, and I want to display a list of records for which field X contains a certain string. Each record is about 500 bytes in size.

    To make it more concrete: in the GUI of my application I have a text field where I can enter a string. Above the text field I have a table displaying the (first N, e.g. 100) records that match the string in the text field. When I type or delete one character in the text field, the table content must be updated on the fly.

    I wonder if there is an efficient way of doing this using appropriate index structures and/or caching. As explained above, I only want to display the first N items that match the query. Therefore, for N small enough, it should not be a big issue loading the matching items from the database. Besides, caching items in main memory can make retrieval faster. I think the main problem is how to find the matching items quickly, given the pattern string. Can I rely on some DBMS facilities, or do I have to build some in-memory index myself? Any ideas?

    EDIT: I have run a first experiment. I have split the records into different text files (at most 200 records per file) and put the files in different directories (I used the content of one data field to determine the directory tree). I end up with about 50,000 files in about 40,000 directories. I have then run Lucene to index the files. Searching for a string with the Lucene demo program is pretty fast. Splitting and indexing took a few minutes; this is totally acceptable for me because it is a static data set that I want to query. The next step is to integrate Lucene into the main program and use the hits returned by Lucene to load the relevant records into main memory.
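
    A hedged sketch of that next step (the integration code is mine, written against the Lucene 3.x-era API; the "recordId" stored field is my assumption, and names differ across Lucene versions):

        // open the index built earlier and fetch only the first N hits
        Directory dir = FSDirectory.open(new File("index"));
        IndexSearcher searcher = new IndexSearcher(IndexReader.open(dir));
        QueryParser parser = new QueryParser(Version.LUCENE_36, "contents",
                new StandardAnalyzer(Version.LUCENE_36));
        TopDocs top = searcher.search(parser.parse(userInput), 100); // N = 100
        for (ScoreDoc sd : top.scoreDocs) {
            String recordId = searcher.doc(sd.doc).get("recordId"); // stored field (assumption)
            // load the full 500-byte record by id from the in-memory cache or the database
        }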

    Read the article

  • What is wrong with my specular Phong shading?

    - by Thijser
    I'm sorry if this should be placed on Stack Overflow instead; however, seeing as this is graphics related, I was hoping you guys could help me. I'm attempting to write a Phong shader and am currently working on the specular term. I came across the following formula: base*pow(dot(V,R), shininess), and attempted to implement it (V is the position of the viewer and R the reflection vector). This gave the following result and code:

        Vec3Df phongSpecular(const Vec3Df & vertexPos, Vec3Df & normal, const Vec3Df & lightPos, const Vec3Df & cameraPos, unsigned int index)
        {
            Vec3Df relativeLightPos = (lightPos - vertexPos);
            relativeLightPos.normalize();
            Vec3Df relativeCameraPos = (cameraPos - vertexPos);
            relativeCameraPos.normalize();
            int DotOfNormalAndLight = Vec3Df::dotProduct(normal, relativeLightPos);
            Vec3Df reflective = (relativeLightPos - (2 * DotOfNormalAndLight * normal)) * -1;
            reflective.normalize();
            float phongyness = Vec3Df::dotProduct(reflective, relativeCameraPos);
            if (phongyness < 0) {
                phongyness = 0;
            }
            float shininess = Shininess[index];
            float speculair = powf(phongyness, shininess);
            return Ks[index] * speculair;
        }

    I'm looking for something more like this:
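
    One hedged observation from me (not part of the original question): the dot product above is stored in an int, which truncates the fractional part before the reflection vector is computed. A sketch of the same computation with it kept in floating point:

        float nDotL = Vec3Df::dotProduct(normal, relativeLightPos);   // keep the fraction
        Vec3Df reflective = 2.0f * nDotL * normal - relativeLightPos; // R = 2(N.L)N - L
        reflective.normalize();

    Algebraically this is the same reflection as in the code above (-(L - 2(N.L)N) = 2(N.L)N - L); only the truncation differs.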

    Read the article

  • jQuery Mobile - Problems with dialog boxes [closed]

    - by Richard van Hees
    I'm programming in jQuery Mobile, but I can't seem to get some things to work the way I want. First, it's worth mentioning that I am mostly programming in a multi-page template. I have a login function in the web-based app. The idea is that the user sees he's not logged in and can click a button to log in. A dialog box pops up, in which the user can enter his credentials. This dialog box is in front of the previous page, in my case just index.php. The page for the profile is at profile.php#profile. In this case the URL for the dialog box is index.php#profile&ui-state=dialog. Don't ask me why; that's how jQuery Mobile works, I guess. Anyway, after the user clicks 'Login' in the pop-up, I want a new dialog to pop up saying he is logged in, and I want the content of the page behind it (index.php#profile) to refresh. Of course I want this all to run very smoothly, with no refreshing of the whole page, to avoid loading time and thus a blank screen for a second. In short:

    • User is not logged in.
    • User clicks login.
    • A dialog pops up with the form.
    • User clicks login.
    • A new dialog pops up with 'success' (or whatever), in the same style as the previous dialog.
    • User clicks OK.
    • The page behind the dialogues has been refreshed without the user noticing.

    Also, another thing that doesn't really work out for me: I can't seem to get a dialog to pop up triggered by an action in another dialog. It just appears as a normal page.
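
    A hedged sketch of one way to do this ($.mobile.changePage is jQuery Mobile 1.x; the form, URL and element IDs are my assumptions): post the login via Ajax, replace the content of the page behind the dialog, re-enhance it, then open the success dialog:

        // a sketch, assuming a #login-form inside the first dialog and a #profile page
        $("#login-form").on("submit", function (e) {
            e.preventDefault();
            $.post("login.php", $(this).serialize(), function (html) {
                $("#profile .ui-content").html(html);   // refresh the page behind the dialog
                $("#profile").trigger("create");        // re-enhance the injected widgets
                $.mobile.changePage("#success-dialog", { role: "dialog", transition: "pop" });
            });
        });

    The point of the design is that a jQuery Mobile "page" is just a DOM node, so its content can be swapped without a full page load.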

    Read the article

  • Java single Array best choice for accessing pixels for manipulation?

    - by Petrol
    I am just watching this tutorial https://www.youtube.com/watch?v=HwUnMy_pR6A and the guy (who seems to be pretty competent) is using a single array to store and access the pixels of his to-be-rendered image. I was wondering if this really is the best way to do it. The multi-dimensional-array alternative costs one more pointer indirection, but arrays have O(1) access for each index, and calculating the index in a single array seems to take one addition and one multiplication per pixel. And if multi-dimensional arrays really are bad, couldn't you use something with hashing to avoid those addition and multiplication operations? EDIT: here is his code...

        public class Screen {
            private int width, height;
            public int[] pixels;

            public Screen(int width, int height) {
                this.width = width;
                this.height = height;
                // creating array the size of one index/int for every pixel
                // single array has better performance than multi-array
                pixels = new int[width * height];
            }

            public void render() {
                for (int y = 0; y < height; y++) {
                    for (int x = 0; x < width; x++) {
                        pixels[x + y * width] = 0xff00ff;
                    }
                }
            }
        }

    Read the article

  • mod_rewrite not working after upgrade to 12.10

    - by CrowderSoup
    I'm hoping this is a quick and simple fix and that I just need a fresh set of eyes. However, I'm fearful that it might actually be an error in the latest build of the rewrite module. I have a .htaccess file that turns on the rewrite engine (I've made sure the module is enabled), creates some rewrite conditions, and finally a rewrite rule. Here's my .htaccess file for reference:

        <IfModule mod_rewrite.c>
        RewriteEngine on
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php?request=$1 [L,QSA,NC]
        </IfModule>

    Now for the problem: if I go to hostname.com, it works fine. If I go to hostname.com/Index, it works fine. However, if I go to hostname.com/index, it doesn't rewrite the request and I get a 404. I'm not sure what's going on here. I've used a rewrite rule tester and there doesn't appear to be any issue with the rewrite rule itself. Again, this issue didn't manifest until after I upgraded to 12.10, at which point I know that Apache was updated. Any thoughts? Has anyone else here experienced this? I know that two other people besides myself have experienced this. Thanks in advance for any help you can provide!
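
    One hedged guess (mine, not confirmed in the thread): if Apache's MultiViews option is enabled, content negotiation maps /index to index.php before mod_rewrite ever runs, while /Index matches no file and falls through to the rewrite rule. Disabling it in the same file is a cheap test (it requires AllowOverride Options to be permitted):

        <IfModule mod_rewrite.c>
        Options -MultiViews
        RewriteEngine on
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php?request=$1 [L,QSA,NC]
        </IfModule>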

    Read the article

  • Oracle Open World 2012 notes

    - by Liu Maclean
    Oracle OpenWorld 2012 session catalog: Search Content Catalog for Oracle OpenWorld 2012 sessions.

    Open World 2012 news: Larry announced Exadata X3. OOW 2012 introduced the Exadata X3 line, comprising the X3-2, the Expansion Rack X3-2 and the X3-8. Exadata X3 overview: http://www.oracle.com/us/products/database/exadata/overview/index.html (see also the ORACLE EXADATA Database MACHINE X3-8 sheet and the ORACLE EXADATA Database MACHINE X3-2 sheet).

    Exadata X3-2 highlights:

    • Each of the 8 X3-2 compute database nodes uses 8-core Intel Xeon E5-2690 processors; cores per node rise from 12 to 16, a 33% increase.
    • Memory goes from 96GB to 128GB per node, expandable to 256GB.
    • The X3-2 cell nodes keep 4 flash cards each, with per-card flash capacity up about 40%, for roughly 22.4TB of flash per X3-2 rack.
    • Cell CPUs are 6-core Intel Xeon models.
    • Disk options are unchanged from the X2-2: 600GB high-performance or 3TB high-capacity drives.
    • Alongside the existing configurations down to the quarter rack, the X3-2 adds a new eighth-rack configuration.

    Exadata X3-8 highlights: the X3-8 largely carries over from the X2-8, but its storage cells are the same as the X3-2's, so it also offers 22.4TB of flash per rack.

    The conference slogan: Engineered to Work Together.

    Oracle Open World 2012 details: http://www.oracle.com/openworld/index.html; registration packages: http://www.oracle.com/openworld/register/packages/index.html. Dates: Sept. 30 - Oct. 4, 2012. Venue: Moscone Center, San Francisco (747 Howard Street, San Francisco, California 94103). Mark Hurd on OOW 2012: How big is OOW?

    OOW 2012 "Focus On" documents:

    • Focus On Database Technologies
    • Focus On Real Application Clusters
    • Focus On Exadata
    • Focus On Oracle Database Appliance
    • Focus On Oracle Database Application Development
    • Focus On Oracle Database Security
    • Focus On Big Data
    • Focus On Data Warehousing
    • Focus On High Availability
    • Focus On Oracle Enterprise Manager Cloud Control 12c (and Private Cloud)
    • Focus On Oracle Spatial and Graph
    • Focus On Oracle Database Utilities
    • Focus On Oracle Database Upgrade
    • Focus On Oracle Database Private Cloud
    • Focus On .Net
    • Focus On Oracle Database on Windows
    • Focus On Engineered Systems
    • Focus On Sunday Users Forum

    Read the article

  • 11g R2 on Windows now available!

    - by [email protected]
    Oracle Database 11g Release 2 (R2) is now available for Windows, including Windows 7 and Windows Server 2008 R2. The 11g R2 for Windows downloads are available on OTN:

        http://www.oracle.com/technology/global/jp/software/products/database/index.html

    The documentation is at http://download.oracle.com/docs/cd/E16338_01/index.htm, and further documentation is listed at http://www.oracle.com/technology/global/jp/documentation/database.html.

    Related Oracle Direct Seminar sessions:

    • April 28, 2010, 11:00-12:00: Windows 7 / Windows Server 2008 R2 and Oracle!
    • May 26, 2010, 11:00-12:00: Oracle Database 11g R2 overview
    • May 19, 2010, 18:00-20:00: [Oracle Evening Seminar] Oracle Database 11g Release 2 on Windows Server 2008 R2 / Windows 7!
    • May 22, 2010, 13:00-17:00: [Oracle Weekend Seminar] Oracle Database introduction

    OTN membership information: http://www.oracle.com/technology/global/jp/membership/index.html

    Read the article

  • Positioned element adding to total div height [migrated]

    - by Max
    I have an 800 x 600 rotatable image with forward and back buttons repositioned to the sides. The height of the div is supposed to be 600px, but the actual height of the div was pushed to 690px, including the height of the button image, and the div was blocking a row of clickable menu items on top. So I set the div height to 518px and moved the image up with top: -75px to get the real dimensions I want. But I feel dirty doing this... Is there a correct way to do this? Or is this workaround more or less correct? Below is the code. Thanks!

        <div class="Content Wide" id="LayoutColumn1">
            <div style="width: 980px; height: 518px; display: block; position: relative; float: left;">
                <a href="#" onClick="prev();"><img src="/template/img/next_button.png" style="position: relative; top: 200px; left: 5px; z-index: 2;"></a>
                <a href="/chef-special/" id="mainLink"><img src="/template/img/chef_special_large.png" id="main" style="margin: 0 0 0 50px; position: relative; float: left; top: -75px; z-index: 1;"></a>
                <a href="#" onClick="next();"><img src="/template/img/next_button.png" style="position: relative; top: 200px; left: 787px; z-index: 2;"></a>
            </div>
        </div>
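
    A hedged alternative (my sketch, not from the thread): position: relative keeps the buttons in the normal flow, so their offset box still stretches the container. position: absolute inside the relatively positioned wrapper takes them out of the flow, and the div can stay at its natural 600px with no negative offsets:

        <div style="width: 980px; height: 600px; position: relative; float: left;">
            <a href="#" onClick="prev();"><img src="/template/img/next_button.png"
                style="position: absolute; top: 200px; left: 5px; z-index: 2;"></a>
            <a href="/chef-special/" id="mainLink"><img src="/template/img/chef_special_large.png" id="main"
                style="margin-left: 50px; position: relative; z-index: 1;"></a>
            <a href="#" onClick="next();"><img src="/template/img/next_button.png"
                style="position: absolute; top: 200px; right: 5px; z-index: 2;"></a>
        </div>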

    Read the article

  • SSL authentication error: RemoteCertificateChainErrors on ASP.NET on Ubuntu

    - by Frank Krueger
    I am trying to access Gmail's SMTP service from an ASP.NET MVC site running under Mono 2.4.2.3, but I keep getting this error:

        System.InvalidOperationException: SSL authentication error: RemoteCertificateChainErrors
          at System.Net.Mail.SmtpClient.m__3 (System.Object sender, System.Security.Cryptography.X509Certificates.X509Certificate certificate, System.Security.Cryptography.X509Certificates.X509Chain chain, SslPolicyErrors sslPolicyErrors) [0x00000]
          at System.Net.Security.SslStream+c__AnonStorey9.m__9 (System.Security.Cryptography.X509Certificates.X509Certificate cert, System.Int32[] certErrors) [0x00000]
          at Mono.Security.Protocol.Tls.SslClientStream.OnRemoteCertificateValidation (System.Security.Cryptography.X509Certificates.X509Certificate certificate, System.Int32[] errors) [0x00000]
          at Mono.Security.Protocol.Tls.SslStreamBase.RaiseRemoteCertificateValidation (System.Security.Cryptography.X509Certificates.X509Certificate certificate, System.Int32[] errors) [0x00000]
          at Mono.Security.Protocol.Tls.SslClientStream.RaiseServerCertificateValidation (System.Security.Cryptography.X509Certificates.X509Certificate certificate, System.Int32[] certificateErrors) [0x00000]
          at Mono.Security.Protocol.Tls.Handshake.Client.TlsServerCertificate.validateCertificates (Mono.Security.X509.X509CertificateCollection certificates) [0x00000]
          at Mono.Security.Protocol.Tls.Handshake.Client.TlsServerCertificate.ProcessAsTls1 () [0x00000]
          at Mono.Security.Protocol.Tls.Handshake.HandshakeMessage.Process () [0x00000]
          at (wrapper remoting-invoke-with-check) Mono.Security.Protocol.Tls.Handshake.HandshakeMessage:Process ()
          at Mono.Security.Protocol.Tls.ClientRecordProtocol.ProcessHandshakeMessage (Mono.Security.Protocol.Tls.TlsStream handMsg) [0x00000]
          at Mono.Security.Protocol.Tls.RecordProtocol.InternalReceiveRecordCallback (IAsyncResult asyncResult) [0x00000]

    I have installed certificates using:

        certmgr -ssl -m smtps://smtp.gmail.com:465

    with this output:

        Mono Certificate Manager - version 2.4.2.3
        Manage X.509 certificates and CRL from stores.
        Copyright 2002, 2003 Motus Technologies. Copyright 2004-2008 Novell. BSD licensed.

        X.509 Certificate v3
        Issued from: C=US, O=Equifax, OU=Equifax Secure Certificate Authority
        Issued to: C=US, O=Google Inc, CN=Google Internet Authority
        Valid from: 06/08/2009 20:43:27
        Valid until: 06/07/2013 19:43:27
        *** WARNING: Certificate signature is INVALID ***
        Import this certificate into the CA store ? yes

        X.509 Certificate v3
        Issued from: C=US, O=Google Inc, CN=Google Internet Authority
        Issued to: C=US, S=California, L=Mountain View, O=Google Inc, CN=smtp.gmail.com
        Valid from: 04/22/2010 20:02:45
        Valid until: 04/22/2011 20:12:45
        Import this certificate into the AddressBook store ? yes

        2 certificates added to the stores.

    In fact, this worked for a month but mysteriously stopped working on May 5. I installed these new certs today, but I am still getting these errors.
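
    A common diagnostic workaround (hedged: this disables certificate validation entirely, so it is a test rather than a fix) is to override the validation callback in code before sending:

        using System.Net;
        using System.Net.Security;
        using System.Security.Cryptography.X509Certificates;

        // a sketch: accept the chain Mono cannot verify.
        // WARNING: returning true unconditionally disables TLS validation;
        // restrict it to a known host or thumbprint in anything real.
        ServicePointManager.ServerCertificateValidationCallback =
            (sender, certificate, chain, sslPolicyErrors) => true;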

    Read the article

  • How to use wkhtmltopdf.exe in ASP.NET

    - by David Murdoch
    After 10 hours and trying 4 other HTML-to-PDF tools, I'm about ready to explode. wkhtmltopdf sounds like an excellent solution... the problem is that I can't execute a process with enough permissions from ASP.NET, so:

        Process.Start("wkhtmltopdf.exe", "http://www.google.com google.pdf");

    starts but doesn't do anything. Is there an easy way to either:

    a) allow ASP.NET to start processes (that can actually do something), or
    b) compile/wrap/whatever wkhtmltopdf.exe into something I can use from C# like this:

        WkHtmlToPdf.Save("http://www.google.com", "google.pdf");
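
    A hedged sketch of option (a): full paths, no shell, an explicit wait, and stderr captured for diagnosis. The paths are my placeholders, and the application pool identity still needs write access to the output directory:

        using System.Diagnostics;

        var psi = new ProcessStartInfo
        {
            FileName = @"C:\tools\wkhtmltopdf\wkhtmltopdf.exe",       // placeholder path
            Arguments = "http://www.google.com C:\\temp\\google.pdf", // absolute output path
            UseShellExecute = false,
            RedirectStandardError = true,
            CreateNoWindow = true
        };
        using (var p = Process.Start(psi))
        {
            string errors = p.StandardError.ReadToEnd(); // read before waiting to avoid deadlock
            p.WaitForExit();
            // inspect p.ExitCode and errors if no PDF appears
        }

    The usual culprit when Process.Start "does nothing" from ASP.NET is a relative output path resolving to a directory the worker process cannot write to, which the captured stderr makes visible.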

    Read the article

  • Bug with ASP.NET MVC 2, collections and ValidationMessageFor?

    - by Rafael Mueller
    I'm hitting a really weird bug with ASP.NET MVC 2 (RTM) and ValidationMessageFor. The example below throws a System.Collections.Generic.KeyNotFoundException on:

        Response.Write(Html.ValidationMessageFor(n => n[index].Name));

    Any idea why that's happening? I'm accessing the same dictionary twice before, but it only throws the error on ValidationMessageFor. Any thoughts? Here's the code.

        public class Parent
        {
            public IList<Child> Childrens { get; set; }
            [Required]
            public string Name { get; set; }
        }

        public class Child
        {
            [Required]
            public string Name { get; set; }
        }

    The controller:

        public class ParentController : Controller
        {
            public ActionResult Create()
            {
                var parent = new Parent { Name = "Parent" };
                parent.Childrens = new List<Child> { new Child(), new Child(), new Child() };
                return View(parent);
            }
        }

    The view (Create.aspx):

        <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<Parent>" %>
        <%@ Import Namespace="BugAspNetMVC" %>
        <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
            <%Html.EnableClientValidation(); %>
            <% using (Html.BeginForm()) {%>
                <%=Html.LabelFor(p => p.Name)%>
                <%=Html.TextBoxFor(p => p.Name)%>
                <%=Html.ValidationMessageFor(p => p.Name)%>
                <%Html.RenderPartial("Childrens", Model.Childrens); %>
            <%} %>
        </asp:Content>

    And the partial view (Childrens.ascx):

        <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<IList<Child>>" %>
        <%@ Import Namespace="BugAspNetMVC"%>
        <% for (int i = 0; i < Model.Count; i++)
           {
               var index = i;
               Response.Write(Html.LabelFor(n => n[index].Name));
               Response.Write(Html.TextBoxFor(n => n[index].Name));
               Response.Write(Html.ValidationMessageFor(n => n[index].Name));
           }
        %>

    Read the article

  • Android: Download the Android SDK components for offline install

    - by Tawani
    Is it possible to download the Android SDK components for offline install without using the SDK Manager? The problem is that I am behind a firewall which I have no control over, and both download URLs seem to be blocked (the SDK Manager throws a connection-refused exception):

        https://dl-ssl.google.com/android/repository/repository.xml
        http://dl-ssl.google.com/android/repository/repository.xml

        Failed to fetch URL http://dl-ssl.google.com/android/repository/repository.xml, reason: Connection refused: connect

    Read the article

  • Node.js Adventure - Host Node.js on Windows Azure Worker Role

    - by Shaun
    In my previous post I demonstrated about how to develop and deploy a Node.js application on Windows Azure Web Site (a.k.a. WAWS). WAWS is a new feature in Windows Azure platform. Since it’s low-cost, and it provides IIS and IISNode components so that we can host our Node.js application though Git, FTP and WebMatrix without any configuration and component installation. But sometimes we need to use the Windows Azure Cloud Service (a.k.a. WACS) and host our Node.js on worker role. Below are some benefits of using worker role. - WAWS leverages IIS and IISNode to host Node.js application, which runs in x86 WOW mode. It reduces the performance comparing with x64 in some cases. - WACS worker role does not need IIS, hence there’s no restriction of IIS, such as 8000 concurrent requests limitation. - WACS provides more flexibility and controls to the developers. For example, we can RDP to the virtual machines of our worker role instances. - WACS provides the service configuration features which can be changed when the role is running. - WACS provides more scaling capability than WAWS. In WAWS we can have at most 3 reserved instances per web site while in WACS we can have up to 20 instances in a subscription. - Since when using WACS worker role we starts the node by ourselves in a process, we can control the input, output and error stream. We can also control the version of Node.js.   Run Node.js in Worker Role Node.js can be started by just having its execution file. This means in Windows Azure, we can have a worker role with the “node.exe” and the Node.js source files, then start it in Run method of the worker role entry class. Let’s create a new windows azure project in Visual Studio and add a new worker role. Since we need our worker role execute the “node.exe” with our application code we need to add the “node.exe” into our project. Right click on the worker role project and add an existing item. By default the Node.js will be installed in the “Program Files\nodejs” folder so we can navigate there and add the “node.exe”. Then we need to create the entry code of Node.js. In WAWS the entry file must be named “server.js”, which is because it’s hosted by IIS and IISNode and IISNode only accept “server.js”. But here as we control everything we can choose any files as the entry code. For example, I created a new JavaScript file named “index.js” in project root. Since we created a C# Windows Azure project we cannot create a JavaScript file from the context menu “Add new item”. We have to create a text file, and then rename it to JavaScript extension. After we added these two files we should set their “Copy to Output Directory” property to “Copy Always”, or “Copy if Newer”. Otherwise they will not be involved in the package when deployed. Let’s paste a very simple Node.js code in the “index.js” as below. As you can see I created a web server listening at port 12345. 1: var http = require("http"); 2: var port = 12345; 3:  4: http.createServer(function (req, res) { 5: res.writeHead(200, { "Content-Type": "text/plain" }); 6: res.end("Hello World\n"); 7: }).listen(port); 8:  9: console.log("Server running at port %d", port); Then we need to start “node.exe” with this file when our worker role was started. This can be done in its Run method. I found the Node.js and entry JavaScript file name, and then create a new process to run it. Our worker role will wait for the process to be exited. 
If everything is OK once our web server was opened the process will be there listening for incoming requests, and should not be terminated. The code in worker role would be like this. 1: public override void Run() 2: { 3: // This is a sample worker implementation. Replace with your logic. 4: Trace.WriteLine("NodejsHost entry point called", "Information"); 5:  6: // retrieve the node.exe and entry node.js source code file name. 7: var node = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot\node.exe"); 8: var js = "index.js"; 9:  10: // prepare the process starting of node.exe 11: var info = new ProcessStartInfo(node, js) 12: { 13: CreateNoWindow = false, 14: ErrorDialog = true, 15: WindowStyle = ProcessWindowStyle.Normal, 16: UseShellExecute = false, 17: WorkingDirectory = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot") 18: }; 19: Trace.WriteLine(string.Format("{0} {1}", node, js), "Information"); 20:  21: // start the node.exe with entry code and wait for exit 22: var process = Process.Start(info); 23: process.WaitForExit(); 24: } Then we can run it locally. In the computer emulator UI the worker role started and it executed the Node.js, then Node.js windows appeared. Open the browser to verify the website hosted by our worker role. Next let’s deploy it to azure. But we need some additional steps. First, we need to create an input endpoint. By default there’s no endpoint defined in a worker role. So we will open the role property window in Visual Studio, create a new input TCP endpoint to the port we want our website to use. In this case I will use 80. Even though we created a web server we should add a TCP endpoint of the worker role, since Node.js always listen on TCP instead of HTTP. And then changed the “index.js”, let our web server listen on 80. 1: var http = require("http"); 2: var port = 80; 3:  4: http.createServer(function (req, res) { 5: res.writeHead(200, { "Content-Type": "text/plain" }); 6: res.end("Hello World\n"); 7: }).listen(port); 8:  9: console.log("Server running at port %d", port); Then publish it to Windows Azure. And then in browser we can see our Node.js website was running on WACS worker role. We may encounter an error if we tried to run our Node.js website on 80 port at local emulator. This is because the compute emulator registered 80 and map the 80 endpoint to 81. But our Node.js cannot detect this operation. So when it tried to listen on 80 it will failed since 80 have been used.   Use NPM Modules When we are using WAWS to host Node.js, we can simply install modules we need, and then just publish or upload all files to WAWS. But if we are using WACS worker role, we have to do some extra steps to make the modules work. Assuming that we plan to use “express” in our application. Firstly of all we should download and install this module through NPM command. But after the install finished, they are just in the disk but not included in the worker role project. If we deploy the worker role right now the module will not be packaged and uploaded to azure. Hence we need to add them to the project. On solution explorer window click the “Show all files” button, select the “node_modules” folder and in the context menu select “Include In Project”. But that not enough. We also need to make all files in this module to “Copy always” or “Copy if newer”, so that they can be uploaded to azure with the “node.exe” and “index.js”. This is painful step since there might be many files in a module. 
So I created a small tool which can update a C# project file, make its all items as “Copy always”. The code is very simple. 1: static void Main(string[] args) 2: { 3: if (args.Length < 1) 4: { 5: Console.WriteLine("Usage: copyallalways [project file]"); 6: return; 7: } 8:  9: var proj = args[0]; 10: File.Copy(proj, string.Format("{0}.bak", proj)); 11:  12: var xml = new XmlDocument(); 13: xml.Load(proj); 14: var nsManager = new XmlNamespaceManager(xml.NameTable); 15: nsManager.AddNamespace("pf", "http://schemas.microsoft.com/developer/msbuild/2003"); 16:  17: // add the output setting to copy always 18: var contentNodes = xml.SelectNodes("//pf:Project/pf:ItemGroup/pf:Content", nsManager); 19: UpdateNodes(contentNodes, xml, nsManager); 20: var noneNodes = xml.SelectNodes("//pf:Project/pf:ItemGroup/pf:None", nsManager); 21: UpdateNodes(noneNodes, xml, nsManager); 22: xml.Save(proj); 23:  24: // remove the namespace attributes 25: var content = xml.InnerXml.Replace("<CopyToOutputDirectory xmlns=\"\">", "<CopyToOutputDirectory>"); 26: xml.LoadXml(content); 27: xml.Save(proj); 28: } 29:  30: static void UpdateNodes(XmlNodeList nodes, XmlDocument xml, XmlNamespaceManager nsManager) 31: { 32: foreach (XmlNode node in nodes) 33: { 34: var copyToOutputDirectoryNode = node.SelectSingleNode("pf:CopyToOutputDirectory", nsManager); 35: if (copyToOutputDirectoryNode == null) 36: { 37: var n = xml.CreateNode(XmlNodeType.Element, "CopyToOutputDirectory", null); 38: n.InnerText = "Always"; 39: node.AppendChild(n); 40: } 41: else 42: { 43: if (string.Compare(copyToOutputDirectoryNode.InnerText, "Always", true) != 0) 44: { 45: copyToOutputDirectoryNode.InnerText = "Always"; 46: } 47: } 48: } 49: } Please be careful when use this tool. I created only for demo so do not use it directly in a production environment. Unload the worker role project, execute this tool with the worker role project file name as the command line argument, it will set all items as “Copy always”. Then reload this worker role project. Now let’s change the “index.js” to use express. 1: var express = require("express"); 2: var app = express(); 3:  4: var port = 80; 5:  6: app.configure(function () { 7: }); 8:  9: app.get("/", function (req, res) { 10: res.send("Hello Node.js!"); 11: }); 12:  13: app.get("/User/:id", function (req, res) { 14: var id = req.params.id; 15: res.json({ 16: "id": id, 17: "name": "user " + id, 18: "company": "IGT" 19: }); 20: }); 21:  22: app.listen(port); Finally let’s publish it and have a look in browser.   Use Windows Azure SQL Database We can use Windows Azure SQL Database (a.k.a. WACD) from Node.js as well on worker role hosting. Since we can control the version of Node.js, here we can use x64 version of “node-sqlserver” now. This is better than if we host Node.js on WAWS since it only support x86. Just install the “node-sqlserver” module from NPM, copy the “sqlserver.node” from “Build\Release” folder to “Lib” folder. Include them in worker role project and run my tool to make them to “Copy always”. Finally update the “index.js” to use WASD. 
1: var express = require("express"); 2: var sql = require("node-sqlserver"); 3:  4: var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:{SERVER NAME}.database.windows.net,1433;Database={DATABASE NAME};Uid={LOGIN}@{SERVER NAME};Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;"; 5: var port = 80; 6:  7: var app = express(); 8:  9: app.configure(function () { 10: app.use(express.bodyParser()); 11: }); 12:  13: app.get("/", function (req, res) { 14: sql.open(connectionString, function (err, conn) { 15: if (err) { 16: console.log(err); 17: res.send(500, "Cannot open connection."); 18: } 19: else { 20: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 21: if (err) { 22: console.log(err); 23: res.send(500, "Cannot retrieve records."); 24: } 25: else { 26: res.json(results); 27: } 28: }); 29: } 30: }); 31: }); 32:  33: app.get("/text/:key/:culture", function (req, res) { 34: sql.open(connectionString, function (err, conn) { 35: if (err) { 36: console.log(err); 37: res.send(500, "Cannot open connection."); 38: } 39: else { 40: var key = req.params.key; 41: var culture = req.params.culture; 42: var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'"; 43: conn.queryRaw(command, function (err, results) { 44: if (err) { 45: console.log(err); 46: res.send(500, "Cannot retrieve records."); 47: } 48: else { 49: res.json(results); 50: } 51: }); 52: } 53: }); 54: }); 55:  56: app.get("/sproc/:key/:culture", function (req, res) { 57: sql.open(connectionString, function (err, conn) { 58: if (err) { 59: console.log(err); 60: res.send(500, "Cannot open connection."); 61: } 62: else { 63: var key = req.params.key; 64: var culture = req.params.culture; 65: var command = "EXEC GetItem '" + key + "', '" + culture + "'"; 66: conn.queryRaw(command, function (err, results) { 67: if (err) { 68: console.log(err); 69: res.send(500, "Cannot retrieve records."); 70: } 71: else { 72: res.json(results); 73: } 74: }); 75: } 76: }); 77: }); 78:  79: app.post("/new", function (req, res) { 80: var key = req.body.key; 81: var culture = req.body.culture; 82: var val = req.body.val; 83:  84: sql.open(connectionString, function (err, conn) { 85: if (err) { 86: console.log(err); 87: res.send(500, "Cannot open connection."); 88: } 89: else { 90: var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')"; 91: conn.queryRaw(command, function (err, results) { 92: if (err) { 93: console.log(err); 94: res.send(500, "Cannot retrieve records."); 95: } 96: else { 97: res.send(200, "Inserted Successful"); 98: } 99: }); 100: } 101: }); 102: }); 103:  104: app.listen(port); Publish to azure and now we can see our Node.js is working with WASD through x64 version “node-sqlserver”.   Summary In this post I demonstrated how to host our Node.js in Windows Azure Cloud Service worker role. By using worker role we can control the version of Node.js, as well as the entry code. And it’s possible to do some pre jobs before the Node.js application started. It also removed the IIS and IISNode limitation. I personally recommended to use worker role as our Node.js hosting. But there are some problem if you use the approach I mentioned here. The first one is, we need to set all JavaScript files and module files as “Copy always” or “Copy if newer” manually. The second one is, in this way we cannot retrieve the cloud service configuration information. 
For example, we defined the endpoint in the worker role properties, but we also hardcoded the listening port in Node.js. This should be changed so that Node.js can retrieve the endpoint, but I can tell you it won't work here. In the next post I will describe another way to execute "node.exe" and the Node.js application, so that we can get the cloud service configuration in Node.js. I will also demonstrate how to use Windows Azure Storage from Node.js by using the Windows Azure Node.js SDK.   Hope this helps, Shaun. All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • What is PubSubHubbub?

    What is PubSubHubbub? An overview of PubSubHubbub, a simple, open, web-hook-based pubsub protocol and open-source reference implementation. Brett Slatkin and Brad Fitzpatrick demonstrate what PubSubHubbub is and how it works. For more information, visit pubsubhubbub.googlecode.com (video, 03:20).

    Read the article

  • Chrome Apps Office Hours: Chrome Storage APIs

    Chrome Apps Office Hours: Chrome Storage APIs. Ask and vote for questions: goo.gl You spoke, we listened. Join Paul Kinlan, Paul Lewis, Pete LePage, and Renato Dias to learn about the new storage APIs that are available to Chrome Packaged Apps in the next installment of Chrome Apps Office Hours. We'll take a look at the new sync-able and local storage APIs, as well as other ways you can save data locally on your users' machines. We didn't get through quite as many questions as we hoped last week and are going to dedicate some extra time this week, so be sure to post your questions on Moderator below!

    Read the article
