Search Results

Search found 15403 results on 617 pages for 'request querystring'.

Page 293/617 | < Previous Page | 289 290 291 292 293 294 295 296 297 298 299 300  | Next Page >

  • jQuery website is not opening in Ubuntu, but in XP everything is fine

    - by Raman Sethi
    I know it is weird, but I just discovered this: jquery.com is not opening in Firefox or other KDE browsers on my Ubuntu machine, and hence many sites that pull code from code.jquery.com also hang. Is there any solution to this problem? I have found the cause: it is the DNS servers I am using, Google DNS (8.8.8.8 and 8.8.4.4). Whenever I use these DNS servers in Ubuntu, my system stops responding to some sites; the connection is actually established fine, but the request ends up stuck waiting, and I don't understand why. I checked my DNS with cat /etc/resolv.conf, and even after setting Google DNS it still shows the DNS servers I received automatically after connecting to the service provider. I am connecting using Network Manager, which is using the default DNS servers rather than the ones I provided. Any solution?

    Read the article

  • Complete RESTful API debugging/testing tool

    - by vartec
    I'm looking for the most complete tool, preferably a portable GUI or browser plugin, to test a RESTful API. What I need: GET/POST/DELETE/PUT support; multiple file uploads as fields (multipart/form-data); file uploads as the request body. Extra points for: the possibility to save multiple configurations and use them to pre-fill parameters; OAuth support; nice JSON response formatting. Currently I'm using 3 tools: Chrome REST Console extension — my favorite, very nicely done, has OAuth; however, the functionality missing for me is sending a file as the body of the request, and it cannot send multiple files. Firefox Poster add-on — quite nice, but it is missing file uploads as POST field parameters and also cannot send multiple files. cURL — can do anything, but it's quite tedious to use from the command line.
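
    For comparison, here is a minimal sketch of the two upload styles described above, using the third-party Python requests library; the URL and field names are hypothetical and not tied to any of the tools mentioned.

    import requests

    # Multiple file uploads as multipart/form-data fields (repeated field name):
    files = [
        ("attachment", ("a.txt", open("a.txt", "rb"), "text/plain")),
        ("attachment", ("b.txt", open("b.txt", "rb"), "text/plain")),
    ]
    resp = requests.post("https://api.example.com/upload", files=files)
    print(resp.status_code, resp.headers.get("Content-Type"))

    # A single file sent as the raw request body (e.g. for PUT):
    with open("payload.bin", "rb") as body:
        resp = requests.put("https://api.example.com/items/1", data=body)
    print(resp.status_code)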

    Read the article

  • Silverlight 4 Twitter Client - Part 2

    - by Max
    We will now create a few classes to help us with storing and retrieving user credentials, so that we don't ask for them every time we want to talk to Twitter to get some information. First, the class for sorting out the credentials. We will make this class static so as to ensure a single instance. This class is mainly going to include getters and setters for the username and password, a method to check if the user is logged in, and another one to log the user out. You can get the code here. Now let us create another class to facilitate easy retrieval of values from the XML-format results Twitter returns for any queries we make. This basically involves just creating a getter and setter for every value that you would like to retrieve from the returned XML document. You can get the format of the XML document from here. Here is what I have in my Status.cs data structure class. using System; using System.Net; using System.Windows; using System.Windows.Controls; using System.Windows.Documents; using System.Windows.Ink; using System.Windows.Input; using System.Windows.Media; using System.Windows.Media.Animation; using System.Windows.Shapes;  namespace MaxTwitter.Classes { public class Status { public Status() {} public string ID { get; set; } public string Text { get; set; } public string Source { get; set; } public string UserID { get; set; } public string UserName { get; set; } } }  Now let us look into implementing Login.xaml.cs. The first thing here: if the user is already logged in, we need to redirect the user to the home page. We can accomplish this using the OnNavigatedTo event, which is fired when the user navigates to this particular Login page. Here you use the Navigate method of NavigationService to go to a different page if the user is already logged in. if (GlobalVariable.isLoggedin())         this.NavigationService.Navigate(new Uri("/Home", UriKind.Relative));  On the submit button click event, add a new event handler, which will perform the WebClient request and download the results as an XML string. WebRequest.RegisterPrefix("https://", System.Net.Browser.WebRequestCreator.ClientHttp);  The following lines allow us to create a web client, make a web request to a URL and get back the string response, something that came as great news with SL 4 for many SL developers.   WebClient myService = new WebClient(); myService.AllowReadStreamBuffering = true; myService.UseDefaultCredentials = false; myService.Credentials = new NetworkCredential(TwitterUsername.Text, TwitterPassword.Password);  In the following lines, we add an event handler to be fired once the XML string has been downloaded; here you can do all your XLINQ stuff.   myService.DownloadStringCompleted += new DownloadStringCompletedEventHandler(TimelineRequestCompleted);   myService.DownloadStringAsync(new Uri("https://twitter.com/statuses/friends_timeline.xml"));  Now let us look at implementing the TimelineRequestCompleted event. Here we are not actually using the string response we get from Twitter; I just use it to ensure the user is authenticated successfully, then save the credentials and redirect to the home page. public void TimelineRequestCompleted(object sender, System.Net.DownloadStringCompletedEventArgs e) { if (e.Error != null) { MessageBox.Show("This application must be installed first"); }  If there is no error, we can save the credentials to reuse later.
    else { GlobalVariable.saveCredentials(TwitterUsername.Text, TwitterPassword.Password); this.NavigationService.Navigate(new System.Uri("/Home", UriKind.Relative)); } } OK, so now the login page is done. Now the main thing: running this application. This credentials stuff only works if the application is run out of the browser, so we need to fiddle with a few Silverlight project settings to enable this. Here is how: right-click on the Silverlight project > Properties, then check "Enable running application out of the browser". Then click on Out-of-Browser Settings and check the "Require elevated trust…" option. That's it, all done. Now press F5 to run the application, and fix the errors if any. Then, once the application opens up in the browser with the login page, right-click and choose Install. Once you install it, it will automatically run; you can log in and see that you are redirected to the Home page. Here are the files related to this post. We will look at implementing the Home page, etc. in the next post. Please post your comments and feedback; it will greatly help me improve my posts! Thanks for your time, catch you soon.

    Read the article

  • Why do I need to relaunch dpkg-reconfigure keyboard-configuration after every startup?

    - by yves Baumes
    I've recently switched to a UK keyboard from a FR keyboard. It is a USB keyboard, and when I first plugged it in, I had to launch dpkg-reconfigure keyboard-configuration in order to request the correct layout (i.e. UK). But after every reboot of the machine, the layout comes back to the FR layout. How can I make the change persist? I am using Ubuntu, running the Stump window manager (StumpWM). And here is /etc/default/keyboard at any time (that is, right after startup, and both before and after I run the dpkg-reconfigure tool): XKBMODEL="hhk" XKBLAYOUT="gb" XKBVARIANT="" XKBOPTIONS="terminate:ctrl_alt_bksp"

    Read the article

  • Join Companies in Web and Telecoms by Adopting MySQL Cluster

    - by Antoinette O'Sullivan
    Join the Web and Telecom companies that have adopted MySQL Cluster to support applications in the following areas. Web: high-volume OLTP, eCommerce, user profile management, session management and caching, content management, online gaming. Telecoms: subscriber databases (HLR/HSS), service delivery platforms, VAS (VoIP, IPTV and VoD), mobile content delivery, mobile payments, LTE access. To come up to speed on MySQL Cluster, take the 3-day MySQL Cluster training course. Events already on the schedule include: Berlin, Germany, 16 December 2013 (German); Munich, Germany, 2 December 2013 (German); Budapest, Hungary, 4 December 2013 (Hungarian); Madrid, Spain, 9 December 2013 (Spanish); Jakarta Barat, Indonesia, 27 January 2014 (English); Singapore, 20 December 2013 (English); Bangkok, Thailand, 28 January 2014 (English); San Francisco, CA, United States, 28 May 2014 (English); New York, NY, United States, 17 December 2013 (English). For more information about this course or to request an additional event, go to the MySQL Curriculum Page (http://education.oracle.com/mysql).

    Read the article

  • How to Access Metro Apps from Windows Explorer in Windows 8

    - by Taylor Gibb
    Windows 8 comes with its new Metro Start Screen, which makes it easy to launch your Metro apps from that screen, but did you know you can access them from Windows Explorer too? Here’s how to do it. To get started you need to create a shortcut, so right-click on the desktop, and choose New –> Shortcut. When you are asked for the location of the item, use the following:

    Read the article

  • Potential issues with multiple home pages

    - by Maxim Zaslavsky
    I have a site where I want to have two different home pages: a general description page for anonymous users, and a dashboard page for logged-in users. I am debating between two implementations: (1) both pages live at /; (2) the page for anonymous users is located at / and the dashboard is at /dashboard, with automatic redirection between them based on whether a given user is logged in (e.g., if you're logged in and navigate to /, you are redirected to /dashboard). Is it cleaner to have both pages use the same URL or separate URLs? Also, I imagine that this choice will affect the following. Caching: the anonymous page would be completely cached, while the logged-in page would not be cached at all (except for static resources); this could lead to issues with server caching, request speed, and UX (such as when one version of the page is cached in a user's browser while the other version should be displayed instead). SEO: how would search engines react to such canonical URLs? Load time: due to redirects, or to the server having to always re-evaluate which page to display.
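
    A minimal sketch of the second option (separate URLs with login-based redirects), assuming a Flask app and a session-based login flag; the route names, templates and session key are made up for illustration.

    from flask import Flask, redirect, render_template, session

    app = Flask(__name__)
    app.secret_key = "change-me"  # placeholder

    @app.route("/")
    def landing():
        # Logged-in users never see the anonymous landing page.
        if session.get("user_id"):
            return redirect("/dashboard")
        return render_template("landing.html")    # the page intended for heavy caching

    @app.route("/dashboard")
    def dashboard():
        # Anonymous visitors are bounced back to the public page.
        if not session.get("user_id"):
            return redirect("/")
        return render_template("dashboard.html")  # per-user content, not cached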

    Read the article

  • Why do my local .NET sites take time to load for the first time after each restart?

    - by Saeed Neamati
    I'm developing sites based on the .NET platform. I usually deploy these sites on my local IIS so that I can test them and see their functionality before going live. However, each time I restart Windows, it seems that the sites take a long time to load for the first time. I know about JIT and I'm also aware of this question, but it doesn't answer mine. Does JIT compilation happen every time you restart Windows? Is it related to the creation of the w3wp.exe process? Why are the sites so slow for the first request after each restart?

    Read the article

  • Python response parse [migrated]

    - by Pavel Shevelyov
    When I'm sending some data to a host: r = urllib2.Request(url, data = data, headers = headers) page = urllib2.urlopen(r) print page.read() I get something like this: [{"command":"settings","settings":{"basePath":"\/","ajaxPageState":{"theme":"spsr","theme_token":"kRHUhchUVpxAMYL8Y8IoyYIcX0cPrUstziAi8gSmMYk","css":[]},"ajax":{"edit-submit":{"callback":"spsr_calculator_form_ajax","wrapper":"calculator_form","method":"replaceWith","event":"mousedown","keypress":true,"url":"\/ru\/system\/ajax","submit":{"_triggering_element_name":"submit"}}}},"merge":true},{"command":"insert","method":null,"selector":null,"data":"\u003cdiv id=\"calculator_form\"\u003e\u003cform action=\"\/ru\/service\/calculator\" method=\"post\" id=\"spsr-calculator-form\" accept-charset=\"UTF-8\"\u003e\u003cdiv\u003e\u003cinput id=\"edit-from-ship-region-id\" type=\"hidden\" name=\"from_ship_region_id\" value=\"\" \/\u003e\n\u003cinput type=\"hidden\" name=\"form_build_id\" value=\"form-0RK_WFli4b2kUDTxpoqsGPp14B_0yf6Fz9x7UK-T3w8\" \/\u003e\n\u003cinput type=\"hidden\" name=\"form_id\" value=\"spsr_calculator_form\" \/\u003e\n\u003c\/div\u003e\n\u003cdiv class=\"bg_p\"\u003e \n\u0421\u0435\u0439\u0447\u0430\u0441 \u0412\u044b... bla bla bla but I want to have something like this: <html><h1>bla bla bla</h1></html> How can I do it?
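
    The response above looks like a JSON array of Drupal-style AJAX commands, so one way to get at the embedded HTML is to parse the JSON and read the "data" field of the "insert" command. A minimal sketch (Python 2, to match the urllib2 code in the question; url, data and headers as defined there):

    import json
    import urllib2

    r = urllib2.Request(url, data=data, headers=headers)
    page = urllib2.urlopen(r)

    commands = json.loads(page.read())  # list of AJAX command objects
    for command in commands:
        if command.get("command") == "insert" and command.get("data"):
            html = command["data"]           # \u003c... escapes already decoded by json.loads
            print html.encode("utf-8")       # the <div id="calculator_form">... markup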

    Read the article

  • How to Name Groups of Apps on the Windows 8 Metro Start Screen

    - by Taylor Gibb
    The Windows 8 Start Screen certainly takes some getting used to; however, one of the things that I really miss about the Start Menu is how I was able to categorize my installed applications. While you can't create folders on the Start Screen, you can group your applications. To get started, head over to the Metro Start Screen and move your mouse to the bottom right-hand corner, clicking on the small icon. Now right-click on the group of apps that you want to name.

    Read the article

  • "PHP: Good Parts"-ish book / reference

    - by julkiewicz
    Before I had my first proper contact with JavaScript I read an excellent book, "JavaScript: The Good Parts" by Douglas Crockford. I was hoping for something similar for PHP. My first thought was the book "PHP: The Good Parts" from O'Reilly; however, after I read the reviews it seems it totally misses the point. I am looking for a resource that would concentrate on the known shortcomings of PHP, give concrete examples, and be as exhaustive as possible. I already see that things can go wrong. If you want to close this question, please consider this: I looked through SO and Programmers for materials. I obviously found this question: http://stackoverflow.com/questions/90924/what-is-the-best-php-programming-book It's general, mine is specific. Moreover, I'm reading the top recommendation, "PHP Objects, Patterns, and Practice", right now. I find it insufficient -- it doesn't address the bad practices as much as I would like it to. tl;dr My question is NOT a general PHP book request.

    Read the article

  • GnuPG v1.4.10 can't read

    - by Nicky Bailuc
    I was thinking of "Getting Involved" with Ubuntu and decided to join the Bug Squad. After a few steps of joining, I got an email with an OpenPGP key from GnuPG version 1.4.10 and went to their website. They recommended a couple of apps that can decode the code, but none of them worked; in fact, each of them said that the code is corrupted. I thought there was something wrong with that email, so I resent the request. When I got another email and tried the code with those programs, they said the same thing. How can I decode the code properly? Is there an online way of doing it? Any suggestions?

    Read the article

  • Auto-Invoke Update Manager to update everything and shutdown after system idle for x minutes?

    - by unknownthreat
    I have Ubuntu 10.10 installed on a machine for my parents. The thing is, they never apply updates from Update Manager, even when the manager itself prompts them to. Moreover, when they are done with whatever they are doing on Ubuntu, they always leave the computer on, and I always have to come back and shut the machine down. Sometimes the computer even sits idle for hours. So I want to know whether this is possible in Ubuntu: I am thinking of a script that will be activated after the machine has been idle for x minutes. When x minutes have elapsed, Update Manager will automatically update everything listed. (I recall that you need the admin password for this, so is there a workaround?) After all the updates are done, the machine will automatically shut down. Is this possible?
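
    One possible sketch of such a script, not a turnkey answer: it assumes the X utility xprintidle is installed, that the script runs with root privileges (so apt-get and shutdown work without a password prompt), and that the DISPLAY environment variable points at the desktop session.

    import subprocess
    import time

    IDLE_LIMIT_MS = 30 * 60 * 1000  # 30 minutes of desktop inactivity

    def idle_ms():
        # xprintidle prints the X session idle time in milliseconds.
        return int(subprocess.check_output(["xprintidle"]).strip())

    while True:
        if idle_ms() >= IDLE_LIMIT_MS:
            subprocess.call(["apt-get", "update"])
            subprocess.call(["apt-get", "-y", "upgrade"])  # unattended upgrade
            subprocess.call(["shutdown", "-h", "now"])
            break
        time.sleep(60)  # re-check once a minute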

    Read the article

  • Loud and annoying noise on login

    - by searchfgold6789
    I have a PC with Kubuntu 13.10 64-bit on it. The problem is that whenever I log in (automatic login is enabled), a loud double-click noise sounds from the speakers, whether the volume is muted or not. I have two sound cards: the motherboard audio, which is disabled in the BIOS, and the Creative Labs Sound Blaster X-Fi SB0460, which I have normal speakers plugged into. Does anyone know how to fix this? Relevant lspci lines: 00:01.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] BeaverCreek HDMI Audio [Radeon HD 6500D and 6400G-6600G series] 02:05.0 Multimedia audio controller: Creative Labs SB X-Fi Using the default Phonon backend. (I am not really sure what other information to provide, but will gladly edit in anything upon request.)

    Read the article

  • What are good JS libraries for game dev?

    - by acidzombie24
    If I decide to write a simple game, both text-based and graphical (2D), what libraries would I use? (Assume we are using an HTML5-compatible browser.) The main things I can think of: rendering text on screen; animating sprites (using images/CSS); input (capturing the arrow keys and getting relative mouse positions); perhaps preloading resources, or dynamically loading resources and choosing the order; sound (but I am unsure how important this will be to me at first), perhaps with mixing and chaining sounds or looping forever until stopped; networking (low priority) to connect one user to another or to continuously GET data without multiple requests (I know this exists but I don't know how easy it is to set up or use; it isn't important to me, it's just for the question).

    Read the article

  • When Googlebot sees a link, will it click it or navigate to it?

    - by FakeRainBrigand
    My site uses pushState and JSON data to display content. So, for example, this might appear on my page: <a href="/some/page">some page</a> The JavaScript then prevents the default action (following the link), and instead renders a view (using a different API, such as /getjson?some_page). $('[href]').click(function(){ history.pushState(...); handleURL(...); }); Assume my server will respond to requests at /some/page with a pre-rendered version. My questions are: (1) will Googlebot receive the pre-rendered version, or allow JavaScript to instead invoke pushState, etc.? (2) If it doesn't make the direct request, will it wait for the AJAX content to be loaded? (3) Does Googlebot implement pushState, so it will show the proper URL in search results?

    Read the article

  • Silverlight 4 Twitter Client - Part 7

    - by Max
    Download this article as a PDF. Welcome back :) This week we are going to look at something more exciting and a much-needed feature for any Twitter client: auto refresh, so as to show new status updates. We are going to achieve this using Silverlight 4 timers, refreshing our datagrid every 2 minutes to show new updates. We will do this so that we make only minimal requests to the Twitter API, so that Twitter does not block us (there is a limit of 150 requests an hour). Let us get started now. Also, we will get the profile user id hyperlinked, so that whenever the user clicks on it, we take them to their Twitter page. Also, it was a pain to always run this application by pressing F5: it would open in a browser, and you would have to right-click, uninstall and install it again to see any changes, and yet we were not able to debug it :( Now there is a solution for this: run the Silverlight application directly out of browser and still have the debug feature. Super cool, here is how. Right-click on the Silverlight project, go to Debug and then select the Out-Of-Browser application option and choose the *.Web project. Then just right-click on the SL project and set it as the Startup Project. There you go, now every time you press F5 it will automatically run out of browser and still have the debug options. I got to know about this after some binging. Now let us jump to the core straight away. 1) To get the user id hyperlinked, we need to have a DataGridTemplateColumn and within that a HyperlinkButton. The code for this will be <data:DataGridTemplateColumn> <data:DataGridTemplateColumn.CellTemplate> <DataTemplate> <HyperlinkButton Click="HyperlinkButton_Click" Content="{Binding UserName}" TargetName="_blank" ></HyperlinkButton> </DataTemplate> </data:DataGridTemplateColumn.CellTemplate> </data:DataGridTemplateColumn> 2) Now let us look at how we get this done in the HyperlinkButton_Click event handler. There we dynamically set the NavigateUri to the Twitter page. I tried to do this using some binding, eval-like stuff as in ASP.NET, but no luck! private void HyperlinkButton_Click(object sender, RoutedEventArgs e) { HyperlinkButton hb = (HyperlinkButton)e.OriginalSource; hb.NavigateUri = new Uri("http://twitter.com/" + hb.Content.ToString(), UriKind.Absolute); } 3) Now we need to switch on our timer in the OnNavigatedTo event of our SL page. So we need to modify our OnNavigatedTo event to something like below: protected override void OnNavigatedTo(NavigationEventArgs e) { image1.Source = new BitmapImage(new Uri(GlobalVariable.profileImage, UriKind.Absolute)); this.Title = GlobalVariable.getUserName() + " - Home"; if (!GlobalVariable.isLoggedin()) this.NavigationService.Navigate(new Uri("/Login", UriKind.Relative)); else { currentGrid = "Timeline-Grid"; TwitterCredentialsSubmit(); myDispatcherTimer.Interval = new TimeSpan(0, 0, 0, 60, 0); myDispatcherTimer.Tick += new EventHandler(Each_Tick); myDispatcherTimer.Start(); } } I use a global string variable, currentGrid, to indicate what is bound in the datagrid, so that after every timer tick I can rebind the latest data to it again. For example, I will only rebind the friends timeline if the data grid currently holds it, and I will only rebind the respective list statuses if a list's statuses are already bound to the data grid. In the above timer code, it is set to trigger the Each_Tick event handler every 1 minute (60 seconds).
    TimeSpan takes in (days, hours, minutes, seconds, milliseconds). 4) Now we need to set the list name in the currentGrid variable when a list button is clicked, so add the code line below to the list button event handler: currentGrid = currentList = b.Content.ToString(); 5) Now let us see how the Each_Tick event handler is implemented. public void Each_Tick(object o, EventArgs sender) { if (!currentGrid.Equals("Timeline-Grid")) getListStatuses(currentGrid); else { WebRequest.RegisterPrefix("https://", System.Net.Browser.WebRequestCreator.ClientHttp); WebClient myService = new WebClient(); myService.AllowReadStreamBuffering = true; myService.UseDefaultCredentials = false; myService.Credentials = new NetworkCredential(GlobalVariable.getUserName(), GlobalVariable.getPassword()); myService.DownloadStringCompleted += new DownloadStringCompletedEventHandler(TimelineRequestCompleted); myService.DownloadStringAsync(new Uri("https://twitter.com/statuses/friends_timeline.xml")); } } If the data grid holds the friends timeline, I just use the same bit of code we already had to bind the friends timeline to the data grid. Copy, paste. But if some list timeline is bound in the datagrid, I call the getListStatuses method with the currentGrid string, which will actually be holding the list name. 6) I wanted to make the hyperlinks inside the status message clickable, so that when the user clicks on one, we can open that link. I tried using a converter with a regex to recognize a URL and wrap it in an href, but that is not going to work in a Silverlight TextBlock :( Anyway, that converter code is in the zip file. 7) You can get the complete project files from here. 8) Please comment below with your doubts, suggestions and improvements. I will try to reply as early as possible. Thanks for all your support. Technorati Tags: Silverlight 4,Datagrid,Twitter API,Silverlight Timer

    Read the article

  • CPU Usage in Very Large Coherence Clusters

    - by jpurdy
    When sizing Coherence installations, one of the complicating factors is that these installations (by their very nature) tend to be application-specific, with some being large, memory-intensive caches, with others acting as I/O-intensive transaction-processing platforms, and still others performing CPU-intensive calculations across the data grid. Regardless of the primary resource requirements, Coherence sizing calculations are inherently empirical, in that there are so many permutations that a simple spreadsheet approach to sizing is rarely optimal (though it can provide a good starting estimate). So we typically recommend measuring actual resource usage (primarily CPU cycles, network bandwidth and memory) at a given load, and then extrapolating from those measurements. Of course there may be multiple types of load, and these may have varying degrees of correlation -- for example, an increased request rate may drive up the number of objects "pinned" in memory at any point, but the increase may be less than linear if those objects are naturally shared by concurrent requests. But for most reasonably-designed applications, a linear resource model will be reasonably accurate for most levels of scale. However, at extreme scale, sizing becomes a bit more complicated as certain cluster management operations -- while very infrequent -- become increasingly critical. This is because certain operations do not naturally tend to scale out. In a small cluster, sizing is primarily driven by the request rate, required cache size, or other application-driven metrics. In larger clusters (e.g. those with hundreds of cluster members), certain infrastructure tasks become intensive, in particular those related to members joining and leaving the cluster, such as introducing new cluster members to the rest of the cluster, or publishing the location of partitions during rebalancing. These tasks have a strong tendency to require all updates to be routed via a single member for the sake of cluster stability and data integrity. Fortunately that member is dynamically assigned in Coherence, so it is not a single point of failure, but it may still become a single point of bottleneck (until the cluster finishes its reconfiguration, at which point this member will have a similar load to the rest of the members). The most common cause of scaling issues in large clusters is disabling multicast (by configuring well-known addresses, aka WKA). This obviously impacts network usage, but it also has a large impact on CPU usage, primarily since the senior member must directly communicate certain messages with every other cluster member, and this communication requires significant CPU time. In particular, the need to notify the rest of the cluster about membership changes and corresponding partition reassignments adds stress to the senior member. Given that portions of the network stack may tend to be single-threaded (both in Coherence and the underlying OS), this may be even more problematic on servers with poor single-threaded performance. As a result of this, some extremely large clusters may be configured with a smaller number of partitions than ideal. This results in the size of each partition being increased. When a cache server fails, the other servers will use their fractional backups to recover the state of that server (and take over responsibility for their backed-up portion of that state). 
The finest granularity of this recovery is a single partition, and the single service thread can not accept new requests during this recovery. Ordinarily, recovery is practically instantaneous (it is roughly equivalent to the time required to iterate over a set of backup backing map entries and move them to the primary backing map in the same JVM). But certain factors can increase this duration drastically (to several seconds): large partitions, sufficiently slow single-threaded CPU performance, many or expensive indexes to rebuild, etc. The solution of course is to mitigate each of those factors but in many cases this may be challenging. Larger clusters also lead to the temptation to place more load on the available hardware resources, spreading CPU resources thin. As an example, while we've long been aware of how garbage collection can cause significant pauses, it usually isn't viewed as a major consumer of CPU (in terms of overall system throughput). Typically, the use of a concurrent collector allows greater responsiveness by minimizing pause times, at the cost of reducing system throughput. However, at a recent engagement, we were forced to turn off the concurrent collector and use a traditional parallel "stop the world" collector to reduce CPU usage to an acceptable level. In summary, there are some less obvious factors that may result in excessive CPU consumption in a larger cluster, so it is even more critical to test at full scale, even though allocating sufficient hardware may often be much more difficult for these large clusters.

    Read the article

  • Blog not even ranking for exact title match, after domain has been dropped twice [on hold]

    - by Akshay Hallur
    Consider a blog related to blogging and SEO. The domain has been dropped (expired) 2 times before acquisition. The current owner is the 3rd owner of the domain and has had it for 5 months. Blog posts are not ranking, even for exact titles; Google+ or other shares show up instead of the content. Some blog posts are not even indexed. Let us say that it gets around 7 organic visits per day. It is a dropped domain, less likely to have been used for spam (the Wayback Machine shows 3 captures since 2004, with 2 reframed drops; I don't know whether there was email spam), but there are no manual actions in WMT, so no reconsideration request. What could be the reason for this? How can Google be told that ownership has changed and the domain is now spam-free? Would this domain be salvageable, or does this only change after relocating to another domain?

    Read the article

  • Content API for Shopping Technical Webinar - April 3, 2012

    This webinar is for those interested in getting up and running with the Google Content API for Shopping without worrying about constructing XML or figuring out how to make an HTTP request in your language of choice. We'll show you how to leverage open source client libraries written by Google engineers so you can focus on the important stuff: your product data. We cover four basic topics: a review of existing resources; a basic primer on using the API; best practices; and using a client library to manage product data. Feel free to follow along with the slides: google-content-api-tools.appspot.com (From: GoogleDevelopers; time: 46:55)

    Read the article

  • YouTube Scalability Lessons

    - by Bertrand Matthelié
    Very interesting blog post by Todd Hoff at highscalability.com presenting "7 Years of YouTube Scalability Lessons in 30 min", based on a presentation from Mike Solomon, one of the original engineers at YouTube: …. The key takeaway of the talk for me was doing a lot with really simple tools. While many teams are moving on to more complex ecosystems, YouTube really does keep it simple. They program primarily in Python, use MySQL as their database, they've stuck with Apache, and even new features for such a massive site start as a very simple Python program. That doesn't mean YouTube doesn't do cool stuff, they do, but what makes everything work together is more a philosophy or a way of doing things than technological hocus pocus. What made YouTube into one of the world's largest websites? Read on and see... Stats: 4 billion views a day; 60 hours of video uploaded every minute; 350+ million devices are YouTube enabled; revenue doubled in 2010; the number of videos has gone up 9 orders of magnitude while the number of developers has only gone up two orders of magnitude; 1 million lines of Python code. Stack: Python - most of the lines of code for YouTube are still in Python. Every time you watch a YouTube video you are executing a bunch of Python code. Apache - when you think you need to get rid of it, you don't. Apache is a real rockstar technology at YouTube because they keep it simple. Every request goes through Apache. Linux - the benefit of Linux is there's always a way to get in and see how your system is behaving. No matter how badly your app is behaving, you can take a look at it with Linux tools like strace and tcpdump. MySQL - is used a lot. When you watch a video you are getting data from MySQL. Sometimes it's used as a relational database, sometimes as a blob store. It's about tuning and making choices about how you organize your data. Vitess - a new project released by YouTube, written in Go; it's a frontend to MySQL. It does a lot of optimization on the fly, it rewrites queries and acts as a proxy. Currently it serves every YouTube database request. It's RPC based. Zookeeper - a distributed lock server. It's used for configuration. Really interesting piece of technology. Hard to use correctly, so read the manual. Wiseguy - a CGI servlet container. Spitfire - a templating system.
    It has an abstract syntax tree that lets them do transformations to make things go faster. Serialization formats - no matter which one you use, they are all expensive. Measure. Don't use pickle; not a good choice. They found protocol buffers slow. They wrote their own BSON implementation, which is 10-15 times faster than the one you can download. ...Continues. Read the blog. Watch the video.

    Read the article

  • To refund or not to refund this client?

    - by Mahalia Samuels
    I'd really appreciate your advice on an ongoing project. I presented my client with a proposal and design samples which he approved, and he paid in full instead of the 50% upfront deposit as I'd given him a generous discount. He was then slow in furnishing me with some of the content, but once he did, he expected the website to be finished immediately, which was not possible. Because he needed it done urgently, we agreed to try to get it done about 10 working days after the content was provided, but the developer who was helping me let me down. The next week, I completed the website myself and uploaded it to the server on a Friday afternoon. He then called and texted me the following Sunday while I was at church to say it was not online (there was probably a problem with his browser). The next morning, I received an email from him demanding a full refund within two days because he couldn't see the website (even though it was live, and I tested it on multiple browsers, a different computer and my phone), and he called me, shouting, because he couldn't access it. Finally, when he was able to access it, he was unhappy with a certain detail regarding the slideshow, which I began fixing and which was done the next day. He then referred me to another website and said he wanted his to look similar but not identical to it in terms of the layout. He also now wanted to add more features which were not in the original design. I got a designer to work on a new design which I sent to him for review, which if approved would be completed by 15 October, and he approved it last Thursday. He then called me yesterday to say that he wanted to change the design - he only approved it out of impatience. He now wants the website to be more similar to the other website he referred me to, and he wants it done before the 15th! Then he told me that other people have done websites for him in three days - websites he's complained to me about for lacking dimension because they were just premium themes, whereas we'd designed and coded from scratch. I'm thinking of finishing the website but refunding him in full (or at least the refundable 50%), less domain registration and other non-refundable amounts, just to avoid further escalation of this matter and having him call me next week and say he wants to change it again. These are the applicable terms and conditions as laid out in the agreement: Total amount due for this project is Amount A. Client shall pay Consultant a deposit of Amount B (50% of total amount due for project) in advance before any work commences on the Project. The balance is due within 7 working days of completion of project. Deposit is non-refundable. Should client opt to host elsewhere, applicable transferral fee of Amount C will apply. Estimated project completion time frame is 14 to 30 days from the date Client furnishes Consultant with Brief and all other required media and data, provided that Client has made payment to secure the project. Consultant will make every effort to meet agreed upon due dates. The Client should be aware that failure to submit required information or materials, or last minute changes and excessive changes may cause subsequent delays. Client delays could result in significant delays in delivery of finished work. Major changes in client input or direction or brief will be charged at normal rates. Any work the Client wishes Consultant to create, which is not specified in the attached Proposal will be considered an additional service.
Client agrees to pay Consultant for any additional expenses or additional services not included in the attached quotation and proposal if requested by the Client. Web design credit in the name of the Consultant, and link to Consultant’s website shall be placed on the footer of the final Website. Either party may terminate this Agreement by giving 7 days written notice to the other of such termination. In the event that Work is postponed or terminated at the request of the Client, Consultant shall have the right to bill pro rata at full rates for work completed through the date of that request, while reserving all rights under this Agreement. If additional payment is due, this shall be payable within seven days of the Client's written notification to stop work. In the event of termination, the Client shall also pay any expenses incurred by Consultant and the Consultant shall own all rights to the Work. Advice please?

    Read the article

  • Looking for a CDN

    - by Bill
    Most of the CDNs that I've seen require you to upload your content in advance. I'm looking for a CDN that, upon receiving a request for a resource it hasn't seen, will contact my application server. If the application server returns something, it should be sent to the user and then cached in the CDN. If not, it should just return a 404. If the user requests an unexpired item, the CDN should just serve it without bothering my app server. Does anything like this exist? Is there a way to get CloudFront to work like this?
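
    The behaviour described is what is usually called a pull (origin-pull) CDN: the edge only talks to your application server on a cache miss. A minimal sketch of the origin side, assuming Flask; the route, in-memory store and cache lifetime are made up for illustration.

    from flask import Flask, abort

    app = Flask(__name__)

    FAKE_STORE = {"logo.png": b"...image bytes..."}  # stand-in for real storage

    @app.route("/assets/<name>")
    def asset(name):
        blob = FAKE_STORE.get(name)
        if blob is None:
            abort(404)  # the CDN passes the 404 through to the user
        # A 200 response is sent to the user and cached at the edge until
        # the Cache-Control lifetime expires.
        return blob, 200, {"Cache-Control": "public, max-age=86400"}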

    Read the article

  • Group method parameter or individual parameter?

    - by Nassign
    I would like to ask about method parameter design considerations. I usually have to decide between using individual variables as parameters versus grouping them into a class or dictionary passed as one parameter. Is there a rule for when you should use individual parameters versus a class or a dictionary to group them? Individual parameters - straightforward, strongly typed. Dictionary parameter - very extensible, like an HTTP request, but cannot be strongly typed. Class parameter - extensible by adding members to the class, strongly typed. I am looking for a design reference on when to use which. Note: I am not sure if this question is valid on Programmers, but I definitely think it would be closed on Stack Overflow; if it is still not valid, please point me to the proper page.
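
    A small sketch of the three styles in Python (the function and field names are made up); type hints stand in for "strongly typed" here.

    from dataclasses import dataclass

    # 1. Individual parameters: explicit and type-checked, but the signature
    #    grows with every new option.
    def create_order(customer_id: int, product_id: int, quantity: int) -> None:
        ...

    # 2. Dictionary parameter: extensible like an HTTP request, but typos in
    #    keys only surface at runtime.
    def create_order_from_dict(params: dict) -> None:
        customer_id = params["customer_id"]  # KeyError if the key is misspelled
        ...

    # 3. Class parameter: extensible by adding fields, and still type-checked.
    @dataclass
    class OrderRequest:
        customer_id: int
        product_id: int
        quantity: int = 1

    def create_order_from_request(request: OrderRequest) -> None:
        ...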

    Read the article

  • What are other ideologies to establish relationships between distinct users besides followers/following and friends?

    - by user784637
    Websites like Myspace and Facebook establish relationships between distinct users using the "friending" ideology, where one user sends a request that must be accepted by another user in order for them to have mutual permission to do things like post messages on each other's walls. Less restrictive than the "friending" ideology, Twitter and Instagram use the followers/following ideology, where you can subscribe to the tweets or posts of another user without their permission. Less restrictive than the "followers/following" ideology, email and calling someone on the phone allow you to directly contact anyone. Are there other ideologies that have been successfully implemented, either in social networking sites or other real-world constructs, to establish relations between users?

    Read the article
