Search Results

Search found 52102 results on 2085 pages for 'steam web api'.


  • Installing Exchange 2013 CU1

    - by marc dekeyser
    Originally posted on: http://geekswithblogs.net/marcde/archive/2013/08/01/installing-exchange-2013-cu1.aspx

    Before you begin, download the following software:
    - UCMA 4.0: http://www.microsoft.com/en-us/download/details.aspx?id=34992
    - Office 2010 filter packs 64 bit: http://www.microsoft.com/en-us/download/details.aspx?id=17062
    - Office 2010 filter packs SP1 64 bit: http://www.microsoft.com/en-us/download/details.aspx?id=26604

    Prerequisite installation
    Step 1: Open Windows PowerShell.
    Step 2: Enter the following string to start the prerequisite installation for a multi-role server:
        Install-WindowsFeature AS-HTTP-Activation, Desktop-Experience, NET-Framework-45-Features, RPC-over-HTTP-proxy, RSAT-Clustering, RSAT-Clustering-CmdInterface, RSAT-Clustering-Mgmt, RSAT-Clustering-PowerShell, Web-Mgmt-Console, WAS-Process-Model, Web-Asp-Net45, Web-Basic-Auth, Web-Client-Auth, Web-Digest-Auth, Web-Dir-Browsing, Web-Dyn-Compression, Web-Http-Errors, Web-Http-Logging, Web-Http-Redirect, Web-Http-Tracing, Web-ISAPI-Ext, Web-ISAPI-Filter, Web-Lgcy-Mgmt-Console, Web-Metabase, Web-Mgmt-Console, Web-Mgmt-Service, Web-Net-Ext45, Web-Request-Monitor, Web-Server, Web-Stat-Compression, Web-Static-Content, Web-Windows-Auth, Web-WMI, Windows-Identity-Foundation
    Step 3: Restart the server: Shutdown.exe /r /t 60
    Step 4: Install the UCMA runtime. Navigate to the folder holding the prerequisite downloads and double-click "UCMARunTimeSetup".
    Step 5: Accept the Run prompt.
    Step 6: Click "Next" in "Microsoft Unified Communications Managed API 4.0, Runtime Setup".
    Step 7: Check "I have read and accept the license terms".
    Step 8: Click "Install".
    Step 9: Click "Finish".
    Step 10: Start the Office 2010 filter pack installation.
    Step 11: Click "Run" in "Open File - Security Warning".
    Step 12: Click "Microsoft Filter Pack 2.0", as it hides in the background by default.
    Step 13: Click "Next" in "Microsoft Filter Pack 2.0".
    Step 14: Check "I accept the terms in the License Agreement".
    Step 15: Click "Next".
    Step 16: Click "OK".
    Step 17: Start the installation of the Office 2010 Filter Pack SP1.
    Step 18: Click "Run" in "Open File - Security Warning".
    Step 19: Check "Click here to accept the Microsoft Software License Terms." in "Microsoft Office 2010 Filter Pack Service Pack 1 (SP1)".
    Step 20: Click "Continue".
    Step 21: Click "OK" in "Microsoft Office 2010 Filter Pack Service Pack 1 (SP1)".
    Step 22: Click the "Windows PowerShell" button.
    Step 23: Restart the server: Shutdown.exe /r /t 60
    Step 24: Click "Close" in "You're about to be signed off".

    Installing Exchange Server 2013
    Step 1: Navigate to the extracted Exchange 2013 CU1 location, run setup.exe and click "next" in "Exchange Server Setup".
    Step 2: Click "next" in "Exchange Server Setup".
    Step 3: Click on the "Exchange Server Setup" window.
    Step 4: Click on the "Exchange Server Setup" window.
    Step 5: Click "next".
    Step 6: Check "I accept the terms in the license agreement".
    Step 7: Click "next".
    Step 8: Click "next".
    Step 9: Select the "Mailbox role".
    Step 10: Select the "Client Access role".
    Step 11: Click "next".
    Step 12: Click "next".
    Step 13: Choose the installation path and click "next".
    Step 14: Leave malware scanning on by making sure the radio button is on "No", then continue in the "Exchange Server Setup" window.
    Step 15: Click "finish".
    Step 16: Restart the server: Shutdown.exe /r /t 60

    Read the article

  • What makes them click ?

    - by Piet
    The other day (well, actually some weeks ago while relaxing at the beach in Kos) I read 'Neuro Web Design - What makes them click?' by Susan Weinschenk (http://neurowebbook.com). The book is a fast and easy read (no unnecessary filler) and a good introduction on how your site's visitors can be steered in the direction you want them to go.

    The Obvious
    The book handles some of the better-known, proven techniques, for example that ratings/testimonials from other people can help sell your product or service. Another well-known technique it talks about is inducing a sense of scarcity/urgency in the visitor: Only 2 seats left! Buy now and get 33% off! It's not because these are known techniques that they stop working. Luckily 2/3rd of the book handles less obvious techniques, otherwise it wouldn't be worth buying.

    The Not So Obvious
    A less known influencing technique is reciprocity. And then I'm not talking about swapping links with another website, but the fact that someone is more likely to do something for you after you did something for them first. The book cites some studies (I always love the facts and figures) and gives some actual examples of how to implement this in your site's design, which is less obvious when you think about it. Want to know more? Buy the book!

    Other interesting sources
    For a more general introduction to the same principles, I'd suggest 'Yes! 50 Secrets from the Science of Persuasion'. 'Yes!...' cites some of the same studies (it seems there's a rather limited pool of studies covering this subject), but of course doesn't show how to implement these techniques in your site's design. I read 'Yes!...' last year, making 'Neuro Web Design' just a little bit less interesting. Always make sure you're able to measure your changes. If you haven't yet, check out the advanced segmentation in Google Analytics (don't be afraid because it says 'beta', it works just fine) and Google Website Optimizer.

    Worth Buying?
    Can I recommend it? Sure, why not. I think it can be useful for anyone who has ever had to think about the design or content of a site. You don't have to be a marketing guy to want a site you're involved with to be successful. The content/filler ratio is excellent too: you don't need to wade through dozens of pages to filter out the interesting bits (unlike 'The Design of Sites', which contains too much useless info, and because it's in dead-tree format you can't google it). If you like it, you might also check out 'Yes! 50 Secrets from the Science of Persuasion'. Tip for people living in Europe: check Amazon UK for your book buying needs. Because of the low UK Pound exchange rate, it's usually considerably cheaper and faster to get a book delivered to your doorstep by Amazon UK compared to ordering it at the local book store or web shop.

    Read the article

  • Microsoft ReportViewer SetParameters continuous refresh issue

    - by Ilya Verbitskiy
    Originally posted on: http://geekswithblogs.net/ilich/archive/2013/10/16/microsoft-reportviewer-setparameters-continuous-refresh-issue.aspx

    I am a big fan of using ASP.NET MVC for building web applications. It allows us to create simple, robust and testable solutions. However, the .NET world is not perfect. There are tons of code written in ASP.NET Web Forms. You cannot simply ignore it, even if you want to. Sometimes ASP.NET Web Forms controls bring us non-obvious issues. A good example is the Microsoft ReportViewer control. I have an example for you.

    <%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" %>
    <%@ Register Assembly="Microsoft.ReportViewer.WebForms, Version=11.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91" Namespace="Microsoft.Reporting.WebForms" TagPrefix="rsweb" %>

    <!DOCTYPE html>
    <html xmlns="http://www.w3.org/1999/xhtml">
    <head runat="server">
        <title>Report Viewer Continuous Refresh Issue Example</title>
    </head>
    <body>
        <form id="form1" runat="server">
        <div>
            <asp:ScriptManager runat="server"></asp:ScriptManager>
            <rsweb:ReportViewer ID="_reportViewer" runat="server" Width="100%" Height="100%"></rsweb:ReportViewer>
        </div>
        </form>
    </body>
    </html>

    The back-end code is simple as well. I want to show a report with some parameters to a user.

    protected void Page_Load(object sender, EventArgs e)
    {
        _reportViewer.ProcessingMode = ProcessingMode.Remote;
        _reportViewer.ShowParameterPrompts = false;

        var serverReport = _reportViewer.ServerReport;
        serverReport.ReportServerUrl = new Uri("http://localhost/ReportServer_SQLEXPRESS");
        serverReport.ReportPath = "/Reports/TestReport";

        var reportParameter1 = new ReportParameter("Parameter1");
        reportParameter1.Values.Add("Hello World!");

        var reportParameter2 = new ReportParameter("Parameter2");
        reportParameter2.Values.Add("10/16/2013");

        var reportParameter3 = new ReportParameter("Parameter3");
        reportParameter3.Values.Add("10");

        serverReport.SetParameters(new[] { reportParameter1, reportParameter2, reportParameter3 });
    }

    I set ShowParameterPrompts to false because I do not want the user to refine the search. It looks good until you run the report: the report keeps refreshing itself. The problem is caused by the ServerReport.SetParameters call in Page_Load. That method causes the ReportViewer control to execute the report on the NEXT post-back, which is why the page posts back continuously. The fix is very simple: do nothing when Page_Load runs during a post-back.
    protected void Page_Load(object sender, EventArgs e)
    {
        if (IsPostBack)
        {
            return;
        }

        _reportViewer.ProcessingMode = ProcessingMode.Remote;
        _reportViewer.ShowParameterPrompts = false;

        var serverReport = _reportViewer.ServerReport;
        serverReport.ReportServerUrl = new Uri("http://localhost/ReportServer_SQLEXPRESS");
        serverReport.ReportPath = "/Reports/TestReport";

        var reportParameter1 = new ReportParameter("Parameter1");
        reportParameter1.Values.Add("Hello World!");

        var reportParameter2 = new ReportParameter("Parameter2");
        reportParameter2.Values.Add("10/16/2013");

        var reportParameter3 = new ReportParameter("Parameter3");
        reportParameter3.Values.Add("10");

        serverReport.SetParameters(new[] { reportParameter1, reportParameter2, reportParameter3 });
    }

    You can download the sample code from GitHub - https://github.com/ilich/Examples/tree/master/ReportViewerContinuousRefresh

    Read the article

  • MaxTotalSizeInBytes - Blind spots in Usage file and Web Analytics Reports

    - by Gino Abraham
    Originally posted on: http://geekswithblogs.net/GinoAbraham/archive/2013/10/28/maxtotalsizeinbytes---blind-spots-in-usage-file-and-web-analytics.aspx
    http://blogs.msdn.com/b/sharepoint_strategery/archive/2012/04/16/usage-file-and-web-analytics-reports-with-blind-spots.aspx

    In my previous post (Troubleshooting SharePoint 2010 Web Analytics), I referenced a problem that can occur when exceeding the daily partition size for the LoggingDB, which generates the ULS message "[Partition] has exceeded the max bytes". Below, I wanted to provide some additional info on this particular issue and help identify some options if this occurs. As an aside, this post only applies if you are missing portions of Usage data - think blind spots on intermittent days, or user activity regularly sparse for the afternoon/evening. If this fits your scenario - read on. But if Usage logs are outright missing, go check out my Troubleshooting post first.

    Background on the problem:
    The LoggingDB database has a default maximum size of ~6GB. However, SharePoint evenly splits this total size into fixed-size logical partitions - and the number of partitions is defined by the number of days to retain Usage data (by default 14 days). In this case, 14 partitions would be created to account for the 14 days of retention. If the retention were halved to 7 days, the LoggingDB would be split into 7 corresponding partitions at twice the size. In other words, the partition size is generally defined as [max size for DB] / [number of retention days]. Going back to the default scenario, the "max size" for the LoggingDB is 6200000000 bytes (~6GB) and the retention period is 14 days. Using our formula, this would be [~6GB] / [14 days], which equates to 444858368 bytes (~425MB) per partition per day. Again, if the retention were halved to 7 days (which halves the number of partitions), the resulting partition size becomes [~6GB] / [7 days], or ~850MB per partition.

    From my experience, when the partition size for any given day is exceeded, the usage logging for the remainder of the day is essentially thrown away, because SharePoint won't allow any more to be written to that day's partition. The only clue that this is occurring (beyond truncated usage data) is an error such as the following in the ULS:

    04/08/2012 09:30:04.78  OWSTIMER.EXE (0x1E24)  0x2C98  SharePoint Foundation  Health  i0m6  High  Table RequestUsage_Partition12 has 444858368 bytes that has exceeded the max bytes 444858368

    It's also worth noting that the exact bytes reported (e.g. '444858368' above) may vary slightly among farms. For example, you may instead see 445226812, 439123456, or something else in the ballpark. The exact number itself doesn't matter; the error message indicates that the reported usage has exceeded the partition size for the given day.

    What it means:
    The error itself is easy to miss, which can lead to substantial gaps in the reporting data (your mileage may vary) if not identified. At this point, I can only advise to periodically check the ULS logs for this message. Down the road, I plan to explore if [Developing a Custom Health Rule] could be leveraged to identify the issue (if you've ever built Custom Health Rules, I'd be interested to hear about your experiences).

    Overcoming this issue also poses a challenge, with workaround options including:

    Lower the retention
    Because the partition size is generally defined as [max size] / [number of retention days], the first option is to lower the number of days to retain the data - the lower the retention, the lower the divisor and thus a bigger partition. For example, halving the retention from 14 to 7 days would halve the number of partitions, but double the partition size to ~850MB (e.g. [6200000000 bytes] / [7 days] = ~850MB partitions). Lowering it to 2 days would result in two ~3GB partitions... and so on.

    Recreate the LoggingDB with an increased size
    The property MaxTotalSizeInBytes is exposed by OM code for the SPUsageDefinition object and can be updated with the example PowerShell snippet below. However, updating this value has no immediate impact because this size only applies when creating a LoggingDB. Therefore, you must create a new LoggingDB for the Usage Service Application. The gotcha: this effectively deletes all prior Usage data, because the Usage Service Application can only have a single LoggingDB.

    Here is an example snippet to update the "Page Requests" Usage Definition:

    $def = Get-SPUsageDefinition -Identity "page requests"
    $def.MaxTotalSizeInBytes = 12400000000
    $def.Update()

    Create a new logging database and attach it to the Usage Service Application using the following command:

    Get-SPUsageApplication | Set-SPUsageApplication -DatabaseServer <dbServer> -DatabaseName <newDBname>

    Updated (5/10/2012): Once the new database has been created, you can confirm the setting has truly taken by running the following SQL query (be sure to replace the database name in the query with the name provided in the PowerShell above):

    SELECT * FROM [WSS_UsageApplication].[dbo].[Configuration] WITH (NOLOCK) WHERE ConfigName LIKE 'Max Total Bytes - RequestUsage'

    Read the article

  • Free hosting solution for a very low-traffic website [duplicate]

    - by user966939
    This question already has an answer here: How to find web hosting that meets my requirements? (4 answers)

    I run a very low-traffic website (about 40 users, basically all of which are daily active on the site). I don't see it changing anytime soon either, as there is no way to sign up on the site right now. Until now I have just been using a sub-directory on a friend's host (shared), to host the web site. But in only a few weeks from now, his subscription will end, and he has no plans on renewing it. So of course this means I'll have to move on to something else. But I don't think I'll find someone who'd be willing to share a... shared host with me again. And besides, the software used on that server is ancient (PHP 4.4.9 + MySQL 4.1.22). There's one obvious solution that comes to mind, I guess: choose a better host and pay for it myself. The problem here is that I have no real fixed income, as I'm only a student. So even if the pricing is dirt cheap, I just can't be certain I will be able to afford it, every single month, for... at least 2 years maybe? So I've looked at free hosting solutions instead. The least requirement I had was that it was completely free of ads. But no matter where I look, I always find something in a corner or two ("what can you expect from a free host?" - yeah I know, but I guess it was worth a shot). For example, on Byethost (one of the free hosts I tried), if you trigger a PHP error while error reporting is set to E_ALL, you will spawn some hidden ad... Besides Byethost, I've tried 000Webhost, x10Hosting, 2Freehosting/1Freehosting, Wink.ws, and they are only worse.

    Okay, I'm running low on ideas. But! What if I just hosted the site myself, on my own computer? That could work. I actually do have my computer on practically 24/7. But not really. Sometimes I need to reboot it, and sometimes we even have power outages. And what if the hardware needs an upgrade? It's not such a big deal for me if the site went down, because I know what's going on; but what about the users? If I do decide to host it myself, is there some way to show users an alternate page instead of them just seeing a generic "server not found" page in the browser when the site is not accessible? Or is there something I have been missing out on? Is there a different kind of "web hosting" solution out there that I haven't heard of?

    Here is what I'm really looking for:
    - Free (as in, no costs)
    - NO ads
    - Bandwidth enough for a low-traffic forum with roughly 40 users
    - (Semi-)Up-to-date PHP and MySQL (at least not older than a year)
    - No standard (non-extension) PHP functions turned off - such as sleep()
    - The mbstring extension is enabled
    - Disk space: at least 5 MB
    - At least one MySQL database

    Some bonus points would be:
    - Max execution time of PHP scripts can be set
    - Remote access to MySQL database

    What would be the best solution for me? Is there one?

    Read the article

  • jQuery getJson returning null

    - by Adam
    I'm trying to use this API that lets you reference an exact text, but getJSON does not seem to be working; it just returns null.

    $.getJSON('http://api.biblia.com/v1/bible/content/KJV.json?key=MYAPIKEY=John+3:16-18&style=bibleTextOnly', function(data) {
        alert(data);
    });

    I just took the key out; I've been testing it with my real API key, and it works fine when I just visit the URL. Is there anything else I need to do to make it work? This is what you get from the URL when you have an API key in it: {"text":"For God so loved the world, that he gave his only begotten Son, that whosoever believeth in him should not perish, but have everlasting life. For God sent not his Son into the world to condemn the world; but that the world through him might be saved. He that believeth on him is not condemned: but he that believeth not is condemned already, because he hath not believed in the name of the only begotten Son of God."}

    Read the article

  • simple web parts in asp.net show as blank page

    - by Javaman59
    I am trying to develop web parts in VS 2008/WinXP. I created a Web Site project, and added a couple of web parts within the default form in default.aspx:

    <form id="form1" runat="server">
    <div>
        <asp:WebPartManager ID="WebPartManager1" runat="server">
        </asp:WebPartManager>
        <asp:WebPartZone ID="WebPartZone1" runat="server">
        </asp:WebPartZone>
    </div>
    </form>

    When I first ran it (in the debugger), a popup told me to enable Windows authentication in IIS (so something is working!). I enabled Windows authentication, and now when I run it I get a blank screen. Same result if I open it in IE via the URL (rather than the debugger). Following is the source view:

    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml">
    <head><title></title><style type="text/css">
        .WebPartZone1_0 { border-color:Black;border-width:1px;border-style:Solid; }
    </style></head>
    <body>
    <form name="form1" method="post" action="Default.aspx" id="form1">
    <div>
        <input type="hidden" name="__WPPS" id="__WPPS" value="s" />
        <input type="hidden" name="__EVENTTARGET" id="__EVENTTARGET" value="" />
        <input type="hidden" name="__EVENTARGUMENT" id="__EVENTARGUMENT" value="" />
        <input type="hidden" name="__VIEWSTATE" id="__VIEWSTATE" value="/wEPDwULLTEzNTQyOTkwNDZkZEAVY0VcQaHLv3uaF3svWgCOfsmb" />
    </div>
    <script type="text/javascript">
    //<![CDATA[
    var theForm = document.forms['form1'];
    if (!theForm) {
        theForm = document.form1;
    }
    function __doPostBack(eventTarget, eventArgument) {
        if (!theForm.onsubmit || (theForm.onsubmit() != false)) {
            theForm.__EVENTTARGET.value = eventTarget;
            theForm.__EVENTARGUMENT.value = eventArgument;
            theForm.submit();
        }
    }
    //]]>
    </script>
    <script src="/WebPartsSite/WebResource.axd?d=4lwrtwXryJ3Ri-GXAxZR4g2&amp;t=634003643420884071" type="text/javascript"></script>
    <div>
        <table cellspacing="0" cellpadding="0" border="0" id="WebPartZone1">
            <tr>
                <td style="height:100%;"><table cellspacing="0" cellpadding="2" border="0" style="width:100%;height:100%;">
                    <tr>
                        <td style="height:100%;"></td>
                    </tr>
                </table></td>
            </tr>
        </table>
    </div>
    <script type="text/javascript">
    //<![CDATA[
    var __wpmExportWarning='This Web Part Page has been personalized. As a result, one or more Web Part properties may contain confidential information. Make sure the properties contain information that is safe for others to read. After exporting this Web Part, view properties in the Web Part description file (.WebPart) by using a text editor such as Microsoft Notepad.';
    var __wpmCloseProviderWarning='You are about to close this Web Part. It is currently providing data to other Web Parts, and these connections will be deleted if this Web Part is closed. To close this Web Part, click OK. To keep this Web Part, click Cancel.';
    var __wpmDeleteWarning='You are about to permanently delete this Web Part. Are you sure you want to do this? To delete this Web Part, click OK. To keep this Web Part, click Cancel.';
    //]]>
    </script>
    <script type="text/javascript">
        __wpm = new WebPartManager();
        __wpm.overlayContainerElement = null;
        __wpm.personalizationScopeShared = true;
        var zoneElement;
        var zoneObject;
        zoneElement = document.getElementById('WebPartZone1');
        if (zoneElement != null) {
            zoneObject = __wpm.AddZone(zoneElement, 'WebPartZone1', true, false, 'black');
        }
    </script>
    </form>
    </body>
    </html>
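    One observation (not necessarily the asker's intent): the markup above declares a WebPartZone but never places anything inside it, and the rendered source shows only an empty zone table, which is exactly what an empty zone produces. A minimal sketch of a zone with some content, using a ZoneTemplate; the Label is just a placeholder control:

    <asp:WebPartManager ID="WebPartManager1" runat="server"></asp:WebPartManager>
    <asp:WebPartZone ID="WebPartZone1" runat="server" HeaderText="Main zone">
        <ZoneTemplate>
            <!-- Any server control placed here is wrapped in a GenericWebPart at runtime -->
            <asp:Label ID="HelloLabel" runat="server" Text="Hello from a web part" />
        </ZoneTemplate>
    </asp:WebPartZone>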

    Read the article

  • How do you include additional files using VS2010 web deployment packages?

    - by Jason
    I am testing out the new web packaging functionality in Visual Studio 2010 and came across a situation where I use a pre-build event to copy required .dlls into my bin folder that my app relies on for API calls. They cannot be included as a reference since they are not COM dlls that can be used with interop. When I build my deployment package, those files are excluded when I choose the option to only include the files required to run the app. Is there a way to configure the deployment settings to include these files? I have had no luck finding any good documentation on this.
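    For reference, a sketch of the approach commonly documented for the VS2010 Web Publishing Pipeline: a custom collection target added to the .csproj (or a ProjectName.wpp.targets file alongside it) that feeds extra files into the package. The ..\libs path and the *.dll pattern are placeholders, not part of the original question:

    <!-- Hook a custom file-collection target into the packaging pipeline -->
    <PropertyGroup>
      <CopyAllFilesToSingleFolderForPackageDependsOn>
        CustomCollectFiles;
        $(CopyAllFilesToSingleFolderForPackageDependsOn);
      </CopyAllFilesToSingleFolderForPackageDependsOn>
    </PropertyGroup>

    <Target Name="CustomCollectFiles">
      <ItemGroup>
        <!-- Placeholder path: the native DLLs copied by the pre-build event -->
        <FilesForPackagingFromProject Include="..\libs\**\*.dll">
          <DestinationRelativePath>bin\%(RecursiveDir)%(Filename)%(Extension)</DestinationRelativePath>
        </FilesForPackagingFromProject>
      </ItemGroup>
    </Target>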

    Read the article

  • (interactive) graph as in graph theory on a web page?

    - by LB
    Hi, I have to integrate a graph with nodes and edges on a web page. Ideally, I would like to be able to interact with it (like moving the nodes around). Actually, I'm beginning by representing trees, so I would appreciate being able to collapse subtrees. How can I do that? I was considering the google-visualization API but I wasn't able to find the kind of visualization I'm looking for (the org chart doesn't allow a node to have multiple parents, if I understood correctly). I've got no idea of the kind of technology, so my tagging may not be really accurate :-). Thanks

    Read the article

  • Ideas for applications using face detection and recognition

    - by Omry
    Full disclosure: I work at face.com. Face.com just launched a free (up to an hourly limit) face detection and recognition REST API. We have a very handy API sandbox that developers can use to play with the API and see what it can and can't do. Besides the obvious point of letting you guys know about the API, I wanted to hear from you what kind of applications you think can be developed with it. Some pretty obvious ideas:
    - Face-based login (not entirely secure, but still fun)
    - Automatic face crop for sites that let users upload photos (dating sites etc.)
    - Some kind of integration into augmented reality games
    There are no right or wrong answers here; use your imagination :).

    Read the article

  • Defining a variable in a class and using it in functions

    - by Josh
    I am trying to learn PHP classes so I can begin coding more OOP projects. To help me learn I am building a class that uses the Rapidshare API. Here's my class:

    <?php
    class RS {
        public $baseUrl = 'http://api.rapidshare.com/cgi-bin/rsapi.cgi?sub=';

        function apiCall($params) {
            echo $baseUrl;
        }
    }
    ?>

    $params will contain a set of key/value pairs, like this:

    $params = array(
        'sub' => 'listfiles_v1',
        'type' => 'prem',
        'login' => '746625',
        'password' => 'not_my_real_pass',
        'realfolder' => '0',
        'fields' => 'filename,downloads,size',
    );

    These will later be appended to $baseUrl to make the final request URL, but I can't get $baseUrl to appear in my apiCall() method. I have tried the following:

    var $baseUrl = 'http://api.rapidshare.com/cgi-bin/rsapi.cgi?sub=';
    $baseUrl = 'http://api.rapidshare.com/cgi-bin/rsapi.cgi?sub=';
    private $baseUrl = 'http://api.rapidshare.com/cgi-bin/rsapi.cgi?sub=';

    I even tried $this->baseUrl = $baseUrl; in my apiCall() method. I don't know what the hell I was thinking there though, lol. Any help is appreciated, thanks :)
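    For reference, a minimal sketch of how a declared property is normally read inside a method (class and property names mirror the question; the echo and the example call are purely illustrative). Inside a method, a bare $baseUrl is a brand-new local variable; the object's property is reached through $this:

    <?php
    class RS {
        public $baseUrl = 'http://api.rapidshare.com/cgi-bin/rsapi.cgi?sub=';

        public function apiCall(array $params) {
            // $baseUrl on its own would be an undefined local variable here;
            // the property declared above is accessed as $this->baseUrl.
            echo $this->baseUrl;
        }
    }

    $rs = new RS();
    $rs->apiCall(array('sub' => 'listfiles_v1'));  // prints the base URL
    ?>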

    Read the article

  • using moogaloop to embed a custom video player from Vimeo

    - by scullytr
    Anyone have any luck using Vimeo's moogaloop player? I'm wanting to use Vimeo's supposed API functions to create custom buttons to control the Vimeo player on my site. Here's the reference page for moogaloop: http://vimeo.com/api/docs/moogaloop I've been able to get the player to embed using SWFObject, but I can't seem to get the API functions to work (e.g. api_play()). Any help is greatly appreciated. Thanks! -Tim.

    Read the article

  • How do I check if a user is a fan of my facebook page on my website?

    - by Tony
    I want to check if my users are fans of my facebook page. I think something like this should do it:

    <script type="text/javascript" src="http://static.ak.connect.facebook.com/js/api_lib/v0.4/FeatureLoader.js.php/en_US"></script>
    <script type="text/javascript">FB.init("my api key","xd_receiver.htm");</script>
    <script type="text/javascript">
    //<![CDATA[
    FB_RequireFeatures(["Api"], function(){
        var api = FB.Facebook.apiClient;
        api.pages_isFan(PAGE_ID, gigyaUser.FACEBOOK_USER_ID, function(response){
            alert(response);
        });
    });
    //]]>
    </script>

    However, for some reason I keep getting null even though the user is in fact a fan of the page. Am I missing something?

    Read the article

  • Posting to an Open Graph page

    - by jpoz
    Hello all, I've got an Open Graph page with a like button, activity feed, etc. I want to be able to post to it programmatically. The Open Graph docs say it's possible via the old stream.publish API. I've got a meta tag pointing to my Facebook application, but the stream publish API doesn't seem to be able to post. This is what I'm posting via the API:

    { message='Hello facebook', target_id='ID via curl 'https://graph.facebook.com/?ids', 'uid'='application_id' }

    I keep getting: "error_code":101,"error_msg":"Invalid API key"

    What am I missing? Thanks, JP

    Read the article

  • jquery ajax vs browser url

    - by danwoods
    Hello all, I'm trying to use YouTube's API to bring back a listing of a user's videos. The request URL looks something like http://gdata.youtube.com/feeds/api/users/username/uploads with 'username' being the correct username. This brings back the appropriate feed in the browser. However, when I try to access that URL via jQuery's $.ajax or $.get functions, using something like:

    $.ajax({
        // set parameters
        url: "http://gdata.youtube.com/feeds/api/users/username/uploads",
        type: "GET",
        // on success
        success: function (data) {
            alert("xml successfully captured\n\n" + data);
        },
        // on error
        error: function (XMLHttpRequest, textStatus, errorThrown, data) {
            alert(" We're sorry, there seem to be a problem with our connection to youtube.\nYou can access all our videos here: http://www.youtube.com/user/username");
            alert(data);
        }
    });

    $.get("http://gdata.youtube.com/feeds/api/users/username/uploads", function(data){
        alert("Data Loaded: " + data);
    });

    I get an empty document returned. Any ideas why this is?
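    An empty response here is consistent with the browser's same-origin policy blocking a plain cross-domain XMLHttpRequest. As an illustration only - assuming the GData feed honours its documented alt=json-in-script and callback parameters - a JSONP-style request would look roughly like this:

    $.ajax({
        url: "http://gdata.youtube.com/feeds/api/users/username/uploads",
        data: { alt: "json-in-script" },  // ask GData for a script-wrapped JSON response
        dataType: "jsonp",                // jQuery adds callback=? and loads it via a <script> tag
        success: function (data) {
            // GData's JSON format exposes the entries under feed.entry
            alert("Videos returned: " + data.feed.entry.length);
        }
    });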

    Read the article

  • Application stopped unexpectedly at Launch

    - by Chris Stryker
    I've run this on a device and on the emulator; the app stops unexpectedly on both, and I currently have no clue what is wrong. It uses the Google Maps API and I compiled with Google APIs 7. I followed this tutorial: http://developer.android.com/guide/tutorials/views/hello-mapview.html (made some alterations, clearly). I did use the correct API key that the final apk is signed with. This is the source (if you compile it, it shouldn't work as it is unsigned). This is the compiled signed apk.

    Read the article

  • Entity framework self referencing loop detected

    - by Lyd0n
    I have a strange error. I'm experimenting with a .NET 4.5 Web API, Entity Framework and MS SQL Server. I've already created the database and set up the correct primary and foreign keys and relationships. I've created a .edmx model and imported two tables: Employee and Department. A department can have many employees and this relationship exists. I created a new controller called EmployeeController using the scaffolding options to create an API controller with read/write actions using Entity Framework. In the wizard, I selected Employee as the model and the correct entity for the data context. The method that is created looks like this:

    // GET api/Employee
    public IEnumerable<Employee> GetEmployees()
    {
        var employees = db.Employees.Include(e => e.Department);
        return employees.AsEnumerable();
    }

    When I call my API via /api/Employee, I get this error:

    ...The 'ObjectContent`1' type failed to serialize the response body for content type 'application/json; ...System.InvalidOperationException","StackTrace":null,"InnerException":{"Message":"An error has occurred.","ExceptionMessage":"Self referencing loop detected with type 'System.Data.Entity.DynamicProxies.Employee_5D80AD978BC68A1D8BD675852F94E8B550F4CB150ADB8649E8998B7F95422552'. Path '[0].Department.Employees'.","ExceptionType":"Newtonsoft.Json.JsonSerializationException","StackTrace":" ...

    Why is it self-referencing [0].Department.Employees? That doesn't make a whole lot of sense. I would expect this to happen if I had circular referencing in my database, but this is a very simple example. What could be going wrong?
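    For context, the loop lives in the object graph rather than the database: each Employee carries a Department, which in turn carries its Employees collection, and the JSON serializer tries to walk that cycle. One common way to break it - a sketch assuming Web API's default JSON.NET formatter and that returning entities (rather than DTOs) is acceptable - is to tell the serializer to ignore reference loops:

    using System.Web.Http;
    using Newtonsoft.Json;

    public static class WebApiConfig
    {
        public static void Register(HttpConfiguration config)
        {
            // Skip circular references (Employee -> Department -> Employees) instead of throwing.
            config.Formatters.JsonFormatter.SerializerSettings.ReferenceLoopHandling =
                ReferenceLoopHandling.Ignore;

            config.Routes.MapHttpRoute(
                name: "DefaultApi",
                routeTemplate: "api/{controller}/{id}",
                defaults: new { id = RouteParameter.Optional });
        }
    }

    Disabling proxy creation on the context (db.Configuration.ProxyCreationEnabled = false) or marking the back-reference with [JsonIgnore] are other commonly used options.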

    Read the article

  • MVC route with id and sub-action

    - by Dan Revell
    I can't figure out what I need to do with MVC routing to make this work. Here's my one route:

    routes.MapHttpRoute(
        name: "DefaultApi",
        routeTemplate: "{controller}/{id}/{action}",
        defaults: new { id = RouteParameter.Optional, action = RouteParameter.Optional }
    );

    The request /Shipments/ works great. The request /Shipments/3/Packages works great. The request /Shipments/3 however fails with the error:

    Multiple actions were found that match the request: System.Linq.IQueryable`1[Api.Controllers.RequisitionsController+PackageRequisitionWithTracking] GetPackageRequisitions(Int32) on type Api.Controllers.RequisitionsController Api.Models.ShipmentRequisition GetShipmentRequisitions(Int32) on type Api.Controllers.RequisitionsController

    It can't seem to differentiate between:

    public ShipmentRequisition GetShipmentRequisitions(int id)

    and

    [ActionName("Packages")]
    public IQueryable<PackageRequisitionWithTracking> GetPackageRequisitions(int id)

    I would have thought the lack of action name on the get shipment by id would allow that route to work.
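    For what it's worth, with action-based routing an optional {action} segment means /Shipments/3 is offered to every GET action taking an int, hence the ambiguity. One commonly used workaround - sketched below with illustrative route names, and assuming the id-only default action name matches the controller's method name or [ActionName] - is to register a separate id-only route whose default action is explicit:

    // URLs that include an action segment, e.g. /Shipments/3/Packages
    routes.MapHttpRoute(
        name: "ApiWithAction",
        routeTemplate: "{controller}/{id}/{action}",
        defaults: null,
        constraints: new { id = @"\d+" }
    );

    // Id-only URLs, e.g. /Shipments/3, pinned to a specific action
    routes.MapHttpRoute(
        name: "ApiById",
        routeTemplate: "{controller}/{id}",
        defaults: new { action = "GetShipmentRequisitions" },
        constraints: new { id = @"\d+" }
    );

    // Controller-only URLs, e.g. /Shipments/
    routes.MapHttpRoute(
        name: "DefaultApi",
        routeTemplate: "{controller}",
        defaults: new { id = RouteParameter.Optional }
    );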

    Read the article

  • Web server connection to SQL Server: Response Packet [Malformed Packet]

    - by John Murdoch
    I am seeing very, very sluggish performance between my web server (which handles HTTP web services connections) and a separate server running Microsoft SQL Server 2008. I have been capturing packet traffic on the web server trying to understand why things are running so slowly. I am using Wireshark to capture the packet traffic. The apparent problem is that the web server is sending TDS packets to the data server--each packet followed by a response from the data server with Response Packet [Malformed Packet] in the Info field. The packet sent from the web server appears to have an invalid checksum. Has anyone seen this type of problem before? Any ideas?

    Read the article

  • Picasa “sync to web” folders do not stay in sync

    - by jmbouffard
    I have multiple folders from the Picasa software that were moved to my Google+ profile using the "sync to web" feature. At some point I noticed that some of these folders were on Google+ and had the green upload arrow in Picasa, however the "sync to web" option was disabled. I tried to re-enable the "sync to web" option and it starts loading but after a few seconds the "sync to web" option is back to disabled. I get no error message, it just switches back to disable. Is there any way I could leave the option enabled? Out of around 80 something albums synced to Google+ only 3-4 are not able to have the "sync to web" option enabled. Thanks.

    Read the article

  • Running scheduled web scripts in a Windows Server environment

    - by Dan Murfitt
    I'm trying to get a scheduled web script running on a Windows Server and so far the only way I've managed to automate this process is by using the Task Scheduler to open Internet Explorer with the web address as a parameter. I then need to create a separate task to run just after this task to close Internet Explorer (otherwise the task doesn't complete). Is there a better way of doing this? I've also managed to run the script by calling the web address through a Telnet connection to the web server (GET /web/address/here) but I haven't found a way of automating this process on a scheduled basis. Any ideas appreciated
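    For illustration only (assuming PowerShell 2.0 or later is available on the server; the URL is a placeholder), one way to hit the script on a schedule without spawning a browser window is to have Task Scheduler run a short PowerShell command that performs the HTTP GET and exits:

    # Task Scheduler action: program = powershell.exe, arguments as below
    # -NoProfile -Command "(New-Object System.Net.WebClient).DownloadString('http://yourserver/web/address/here') | Out-Null"

    # Or as a small script file invoked by the task:
    $url = 'http://yourserver/web/address/here'
    (New-Object System.Net.WebClient).DownloadString($url) | Out-Null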

    Read the article

  • What are the things to look out for when adding other domain user to your IIS?

    - by Jack
    I have a Windows Server 2008 R2 server called Jack11 that has joined a domain called Watson.org. It has IIS 7 installed. From my understanding, we need to add the following to the web.config file:

    <system.web>
        <identity impersonate="true" />
    </system.web>

    Also, we need to ensure that the server Jack11 can ping the domain Watson.org. What other settings do we need to set up in order for a user of the domain Watson.org (e.g. the user Watson\User1) to access an application in the server's IIS? I ask because currently there is a problem as follows:

    Exception Details: System.Data.SqlClient.SqlException: Login failed for user 'WATSON\User1'.

    The error message was displayed when the user User1 tried to access one of the web applications in server Jack11's IIS; that web application also retrieves records from a database hosted in SQL Server 2008 Enterprise on the same server, Jack11.
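    As a side note, the quoted "Login failed" usually means SQL Server itself has no login mapped for the impersonated Windows account. A sketch of granting one (the login name comes straight from the error message; the database name is a placeholder):

    -- Run against the SQL Server instance on Jack11
    CREATE LOGIN [WATSON\User1] FROM WINDOWS;

    USE MyAppDatabase;  -- placeholder: the application's database
    CREATE USER [WATSON\User1] FOR LOGIN [WATSON\User1];
    EXEC sp_addrolemember 'db_datareader', 'WATSON\User1';

    Granting access to a domain group, or running the application pool under a dedicated domain account instead of impersonating each visitor, is often preferred over per-user logins.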

    Read the article

  • File ownership and permissions on web site PHP files

    - by columbo
    Hello, I am learning the basics of Linux servers, so I am green. I have an Ubuntu server upon which there are websites that I have inherited. In a fit of security worry I decided to check out the ownership of the web site files. They are all 2016:sites. If I run the command 'cat /etc/group | more' I can see that the group exists. But when I run 'lastlog' the user 2016 does not appear. I started to worry that 2016 might be the username of web users connecting from the web, so I set the permissions on a test file to chmod 600, giving read permission to only the file owner. Sure enough, I could still access the file from the web. Can anyone suggest what is going on here? I tried creating a new user and giving them file ownership, but then when I access the file from the web it wants me to have all directories upstream owned by the same person. Thanks
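    As a quick sanity check (standard commands; process names vary by distro and by whether PHP runs under mod_php or php-fpm), it helps to compare the file's owner with the account the web server actually runs as - a 600 file that is still readable from the web usually means the server (or PHP) process runs as that owner or as root:

    # Who owns the test file? (-n shows the numeric uid, e.g. 2016)
    ls -ln testfile.php

    # Which account are the web server / PHP processes running under?
    ps aux | egrep 'apache2|httpd|nginx|php' | grep -v grep

    # Does UID 2016 map to a named account?
    getent passwd 2016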

    Read the article
