Search Results

Search found 21861 results on 875 pages for 'external library'.

Page 729/875 | < Previous Page | 725 726 727 728 729 730 731 732 733 734 735 736  | Next Page >

  • New Rails deployment half working...not sure why?

    - by Meltemi
    I'm in the final stages of going round trip through the entire Rails cycle: development - test - production (on an external server). I'm very close, but I'm seeing some errors with the production version and don't know enough about Rails' "magic" to troubleshoot it yet. All seems well and I can load my app by going to www.mydomain.com/rails, and it seems to work: I can interact with my app and create new objects in my database. However, when I go to www.mydomain.com/rails/ (the difference is the trailing slash) I get a web page that just says "Index from public" on it?!? I don't know where that's coming from; index.html was removed from public long ago. This may or may not be related to why I can't access, say, the first record in one of my classes by calling its controller like so: www.mydomain.com/rails/mycontroller/1 (and yes, there are records in the database). Anyway, something's askew. It works fine on the development box, but is only half working on the production server. Anyone know what might be causing this? The setup on the server box (not development) is:
    - Installed Passenger on the server to serve the app via Apache
    - Modified my httpd.conf with Passenger's stuff (a headache, but I finally got it to respond)
    - Loaded the schema into the production database: rake db:schema:load RAILS_ENV=production
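
    For context, a minimal sketch of the Passenger sub-URI setup being described (the vhost, paths and symlink are illustrative assumptions, not taken from the post; RailsBaseURI and the symlink to the app's public/ folder are Passenger's documented mechanism for serving an app under /rails):

      # httpd.conf (sketch only, not the poster's actual file)
      LoadModule passenger_module /path/to/passenger/ext/apache2/mod_passenger.so
      PassengerRoot /path/to/passenger
      PassengerRuby /usr/bin/ruby

      <VirtualHost *:80>
          ServerName www.mydomain.com
          DocumentRoot /var/www/html          # static site root
          RailsBaseURI /rails                 # Rails app served under /rails
          RailsEnv production
          <Directory /var/www/html/rails>
              Options -MultiViews
              AllowOverride all
          </Directory>
      </VirtualHost>
      # plus a symlink: ln -s /path/to/railsapp/public /var/www/html/rails

    If the symlink points at a stale copy of public/ (or Apache's DirectoryIndex finds an index file there), Apache serves that directory directly instead of handing the request to Rails, which would match the trailing-slash symptom described above.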

    Read the article

  • Return if remote stored procedure fails

    - by njk
    I am in the process of creating a stored procedure. This stored procedure runs local as well as external stored procedures. For simplicity, I'll call the local server [LOCAL] and the remote server [REMOTE].

      USE [LOCAL]
      GO
      SET ANSI_NULLS ON
      GO
      SET QUOTED_IDENTIFIER ON
      GO
      ALTER PROCEDURE [dbo].[monthlyRollUp]
      AS
      SET NOCOUNT, XACT_ABORT ON
      BEGIN TRY
          EXEC [REMOTE].[DB].[table].[sp]
          -- This transaction should only begin if the remote procedure does not fail
          BEGIN TRAN
              EXEC [LOCAL].[DB].[table].[sp1]
          COMMIT
          BEGIN TRAN
              EXEC [LOCAL].[DB].[table].[sp2]
          COMMIT
          BEGIN TRAN
              EXEC [LOCAL].[DB].[table].[sp3]
          COMMIT
          BEGIN TRAN
              EXEC [LOCAL].[DB].[table].[sp4]
          COMMIT
      END TRY
      BEGIN CATCH
          -- Insert error into log table
          INSERT INTO [dbo].[log_table]
              (stamp, errorNumber, errorSeverity, errorState, errorProcedure, errorLine, errorMessage)
          SELECT GETDATE(), ERROR_NUMBER(), ERROR_SEVERITY(), ERROR_STATE(),
                 ERROR_PROCEDURE(), ERROR_LINE(), ERROR_MESSAGE()
      END CATCH
      GO

    When using a transaction on the remote procedure, it throws this error:

      OLE DB provider ... returned message "The partner transaction manager has disabled its support for remote/network transactions."

    I get that I'm unable to run a transaction locally for a remote procedure. How can I ensure that this procedure will exit and roll back if any part of it fails?
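
    One hedged way to make the local work conditional on the remote call, assuming the remote procedure either raises an error (which the TRY/CATCH with XACT_ABORT ON already traps) or reports failure through its return value - the return-code convention is an assumption, not something stated above:

      DECLARE @rc INT;
      EXEC @rc = [REMOTE].[DB].[table].[sp];   -- plain EXEC, no distributed transaction needed
      IF @rc <> 0
      BEGIN
          RAISERROR('Remote roll-up step failed; aborting before local transactions.', 16, 1);
          RETURN;
      END
      -- only now run the local BEGIN TRAN ... COMMIT blocks

    Because the remote call stays outside any BEGIN TRAN, it does not get promoted to a distributed transaction; only the local steps run inside transactions.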

    Read the article

  • What is the preferred way to update database schemas in multiple production environments

    - by rmarimon
    I am about to install some 20 servers with the same web application in multiple locations, each connected to its own local database. I will be updating the web applications remotely (perhaps using Debian's package manager) and I'm sure I will eventually need to update the database schemas. Since each server could eventually be using a different release of the web application, I need a way to apply the incremental changes to the servers.

    I'm thinking of something like this: let's start with database.schema.1 as the original release of the database and assume this number increases with each new version of the schema. I could eventually end up with database.schema.17 as the current release. For a new installation this would be the schema to install. It seems to me that I would need consecutive translations like database.translation.1.2, which would convert database.schema.1 into database.schema.2, database.translation.2.3 to convert from 2 to 3, and so on up to 17. Whenever I change a schema I need to alter the database, but I may also need to run some script to update the data, which might be done with SQL but might require an external non-SQL script.

    What is the appropriate way to organize all these files? What is the automatic way to apply those upgrades to the schema? Where do I store the current version number of the schema?
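
    A minimal sketch of the usual answer to the last question (table and column names are illustrative, nothing here comes from the post): the database itself records which schema release it is at, and each translation script ends by bumping that number.

      -- created once, as part of database.schema.1
      CREATE TABLE schema_version (
          version    INT      NOT NULL,
          applied_at DATETIME NOT NULL
      );
      INSERT INTO schema_version (version, applied_at) VALUES (1, CURRENT_TIMESTAMP);

      -- the last statement of e.g. database.translation.16.17.sql:
      UPDATE schema_version SET version = 17, applied_at = CURRENT_TIMESTAMP;

    An upgrade tool can then compare the stored version with the target release and run the consecutive translation files (plus any accompanying non-SQL data scripts) in order until they match.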

    Read the article

  • What options exist to get text/list items to appear in columns?

    - by alex
    I have a bunch of HTML markup coming from an external source, and it is mainly h3 elements and ul elements. I want to have the content flow in 3 columns. I know there is something coming in CSS3, but what options do I have to get this content to flow nicely into the 3 columns for the time being? I'm not that concerned about IE6 (as long as it degrades gracefully). Am I stuck using jQuery to parse the markup and chop it up into 3 divs which float? Thank you.

    Update: as per request, here is some of the HTML I am working with.

      <h3>Tourism Industry</h3>
      <ul>
          <li><a href="">Something</a></li>
          <li><a href="">Something</a></li>
          <li><a href="">Something</a></li>
          <li><a href="">Something</a></li>
      </ul>
      <h3>Small Business</h3>
      <ul>
          <li><a href="">Something</a></li>
          <li><a href="">Something</a></li>
          <li><a href="">Something</a></li>
          <li><a href="">Something</a></li>
          <li><a href="">Something</a></li>
      </ul>

    And a whole lot more following the same format.
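
    For reference, a hedged sketch of the CSS3 multi-column route mentioned above, which already worked in Firefox and WebKit browsers at the time via vendor prefixes and simply degrades to a single column elsewhere (the wrapper class name is made up for the example):

      .link-columns {                 /* hypothetical wrapper around the h3/ul markup */
          -moz-column-count: 3;       /* Firefox */
          -webkit-column-count: 3;    /* Safari / Chrome */
          column-count: 3;
          -moz-column-gap: 1.5em;
          -webkit-column-gap: 1.5em;
          column-gap: 1.5em;
      }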

    Read the article

  • Creating Emulated iSCSI Target in a Lab/Testing Environment using Windows Server 2008 R2

    - by Brian McCleary
    We have a single server running Windows Server 2008 with Hyper-V installed, running 5 virtual machines. I have purchased a second Dell R805 server so that we can create a fail-over cluster with our current R805 that is currently in production. Right now, our R805 connects via iSCSI to an MD3000i iSCSI SAN.

    Before we try to roll out the second server and clustering to our production environment, I want to be able to test and "play with" the clustering features in our lab. The problem is that I don't want to spend a couple thousand dollars on another iSCSI SAN server just for testing. I already have two servers in my lab that are installed with Windows Server 2008 R2 64-bit (one is the R805 and the other is a spare desktop that was lying around) with the Hyper-V role enabled, so they should be ready to test with, but I don't have an iSCSI target to use as the Cluster Shared Volume.

    Is there any way to install, either on a Hyper-V image or on an external spare computer that we have, some sort of emulated iSCSI target? In our lab we obviously don't need a real SAN, just something we can use to test how to set up the clustering properly outside of our production environment. Any advice is appreciated.

    FYI - I have read Jose Barret's blog on WUDSS at http://blogs.technet.com/josebda/archive/2008/01/07/installing-the-evaluation-version-of-wudss-2003-refresh-and-the-microsoft-iscsi-software-target-version-3-1-on-a-vm.aspx, but it seems awfully complex. I'm hoping for an easier solution.

    Read the article

  • Free utility which runs in Linux to create a UML class diagram from Java source files

    - by DeletedAccount
    I prefer to jot down UML diagrams on paper and then implement them using Java. It would be nice to have a utility which could create UML diagrams for me which I could share online and include in the digital documentation. In other words: I want to create UML diagrams from Java source code.

    The utility must be able to:
    - Run in Linux.
    - Handle generics, i.e. show List<Foo> correctly in parameters and return types.
    - Show class inheritance and interface implementations.

    It's nice if the utility is able to:
    - Run in Windows and Mac OS X.
    - Display enums in some nice manner.
    - Generate output in a diagram format which I can modify using some other utility.
    - Run from the command line.
    - Restrict the UML generation to a set of packages which I specify.
    - Handle classes/interfaces which are not part of my source code. It could include the first class/interface which is external in the UML diagram, perhaps in another color to indicate it being a library/framework created by someone else.
    - Focus on this task and not try to solve the whole issue of documentation.

    Read the article

  • Which source control paradigm and solution to embed in a custom editor application?

    - by Greg Harman
    I am building an application that manages a number of custom objects, which may be edited concurrently by multiple users (using different instances of the application). These objects have an underlying serialized representation, and my plan is to persist them (through my application UI) in an external source control system. Of course this implies that my application can check the current version of an object for updates, provide a merging interface for each object, etc.

    My question is what source control paradigm(s) and specific solution(s) to support, and why. The way I (perhaps naively) see the source control world is three general paradigms:

    1. Single-repository, locked access (MS SourceSafe)
    2. Single-repository, concurrent access (CVS/SVN)
    3. Distributed (Mercurial, Git)

    I haven't heard of anyone using #1 for quite a number of years, so I am planning to disregard this case altogether (unless I get a compelling argument otherwise). However, I'm at a loss as to whether to support #2 or #3, and which specific implementations. I'm concerned that the usage paradigms are subtly different enough that I can't adequately capture basic operations in a single UI.

    The last bit of information I should convey is that this application is intended to be deployed in a commercial setting, where a source control system may already be in use. I would prefer not to support more than one solution unless it's really a deal-breaker, so wide adoption in a corporate setting is a plus.

    Read the article

  • What is the optimum way to secure a company wide wiki?

    - by Mark Robinson
    We have a wiki which is used by over half our company. Generally it has been very positively received. However, there is a concern over security - not letting confidential information fall into the wrong hands (i.e. competitors).

    The default answer is to create a complicated security matrix defining who can read which document (wiki page) based on who created it. Personally I think this mainly solves the wrong problem, because it creates barriers within the company instead of a barrier to the external world. But some are concerned that people at a customer site might share information with a customer, which then goes to the competitor. The administration of such a matrix is a nightmare because (1) the matrix is based on department and not projects (this is a matrix organisation), and (2) in a wiki all pages are by definition dynamic, so what is confidential today might not be confidential tomorrow (but the history is always readable!).

    Apart from the security matrix, we've considered restricting content on the wiki to non-super-secret stuff, but of course that needs to be monitored. Another solution (the current one) is to monitor views and report anything suspicious (e.g. one person at a customer site having 2000 views in two days was reported). Again, this is not ideal because it does not directly imply a wrong motive.

    Does anyone have a better solution? How can a company-wide wiki be made secure and yet keep its low-threshold USP? BTW we use MediaWiki with Lockdown to exclude some administrative staff.

    Read the article

  • MS Query returns data inside itself but does not export it to Excel

    - by kappa
    Hi, I'm having a strange problem with Excel and MS Query: I'm using MS Query to run a T-SQL query against a Microsoft SQL Server 2000 and return the results to Excel. To do this, I open Excel, go to Data - Import external data - New database query, select my data source, paste the SQL script in MS Query and click File - Return data to Microsoft Office Excel, leaving all the query options at their defaults.

    This works fine for many other Excel files, but this time, although MS Query shows the correct data when I paste the SQL script, after returning to Excel all I get is the query name in the upper left cell, with no data returned. I fear the cause could be the SQL script, as it contains some advanced features like UNION ALL, UDFs and variables. Here's the script:

      declare @date smalldatetime
      set @date = dateadd(day, datediff(day, 0, getdate()), 0)

      select [date], sum([hours]) as [hours]
      from (
          select [date], [hours] from [server].[dbo].[udf] (84, '2010-01-01', @date)
          union all
          select [date], [hours] from [server].[dbo].[udf] (89, '2010-01-01', @date)
          union all
          select [date], [hours] from [server].[dbo].[udf] (93, '2010-01-01', @date)
      ) as [a]
      group by [date]
      order by [date] asc

    I can't get rid of the UDFs, as they do advanced grouping internally involving cursors and temporary tables, nor can I remove the variable, as the UDF won't accept dateadd(day, datediff(day, 0, getdate()), 0) as a parameter. Any ideas? Thanks in advance, Andrea.

    Read the article

  • _dl_runtime_resolve -- When do the shared objects get loaded into memory?

    - by windfinder
    We have a message processing system with high performance demands. Recently we have noticed that the first message takes many times longer than subsequent messages. A bunch of transformation and message augmentation happens as a message goes through our system, much of it done by way of external libs.

    I just profiled this issue (using callgrind), comparing a "run" of just one message with a "run" of many messages (providing a baseline of comparison). The main difference I see is the function "do_lookup_x" taking up a huge amount of time. Looking at the various calls to this function, they all seem to be made by the common function _dl_runtime_resolve. I'm not sure what this function does, but to me it looks like the first time the various shared libraries are being used, they are being loaded into memory by the dynamic loader.

    Is this a correct assumption? That the binary will not load the shared libraries into memory until they are being prepped for use, so we will see a massive slowdown on the first message but on none of the subsequent ones? How do we go about avoiding this?

    Note: We operate on the microsecond scale.
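
    For what it's worth, _dl_runtime_resolve is glibc's lazy PLT resolver, so the first-message cost is more likely symbol resolution than loading: libraries listed as dependencies are normally mapped at startup, but each function is resolved the first time it is called. A hedged sketch of forcing eager binding so that cost moves to process startup instead (both switches are standard glibc/binutils mechanisms; the program name is made up):

      # resolve all dynamic symbols at load time instead of on first call
      LD_BIND_NOW=1 ./message_processor

      # or bake eager binding into the binary at link time
      g++ -O2 -o message_processor main.o ... -Wl,-z,now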

    Read the article

  • Sparse (Pseudo) Infinite Grid Data Structure for Web Game

    - by Ming
    I'm considering trying to make a game that takes place on an essentially infinite grid. The grid is very sparse:
    - certain small regions of relatively high density;
    - relatively few isolated nonempty cells;
    - the amount of the grid in use is too large to implement naively, but probably smallish by "big data" standards (I'm not trying to map the Internet or anything like that).
    This needs to be easy to persist.

    Here are the operations I may want to perform (reasonably efficiently) on this grid:
    - Ask for some small rectangular region of cells and all their contents (a player's current neighborhood)
    - Set individual cells or blit small regions (the player is making a move)
    - Ask for the rough shape or outline/silhouette of some larger rectangular regions (a world map or region preview)
    - Find some regions with approximately a given density (player spawning location)
    - Approximate shortest path through gaps of at most some small constant empty spaces per hop (it's OK to be a bad approximation often, but not OK to keep heading the wrong direction searching)
    - Approximate convex hull for a region

    Here's the catch: I want to do this in a web app. That is, I would prefer to use existing data storage (perhaps in the form of a relational database) and relatively few external dependencies (preferably avoiding the need for a persistent process).

    Guys, what advice can you give me on actually implementing this? How would you do this if the web-app restrictions weren't in place? How would you modify that if they were? Thanks a lot, everyone!
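
    A minimal sketch of the relational layout the question hints at, with one row per nonempty cell (table and column names are illustrative; :x0 etc. stand for bound parameters):

      CREATE TABLE cell (
          x        INTEGER NOT NULL,
          y        INTEGER NOT NULL,
          contents TEXT    NOT NULL,
          PRIMARY KEY (x, y)            -- composite index keeps box queries cheap
      );

      -- a player's neighborhood is a bounding-box query:
      SELECT x, y, contents
      FROM cell
      WHERE x BETWEEN :x0 AND :x1
        AND y BETWEEN :y0 AND :y1;

    The coarser operations (outlines, density, spawn search) could then work off a second table of chunk summaries, e.g. one row per 64x64 block with a cell count - purely a design suggestion, not something from the post.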

    Read the article

  • Recommendations to handle development and deployment of php web apps using shared project code

    - by Exception e
    I am wondering what the best way (for a lone developer) is to:
    - develop a project that depends on code from other projects, and
    - deploy the resulting project to the server.

    I am planning to put my code in svn, and have the shared code as a separate project. There are problems with svn:externals which I cannot fully estimate. I've read "subversion:externals considered to be an anti-pattern" and "How do you organize your version control repository", but there is one special thing with PHP projects (and other interpreted source code): there is no final executable resulting from your libraries. External dependencies are thus always on raw source code.

    Ideally I really want to be able to develop simultaneously on one project and the projects it depends on. A possible way: check out a project's dependency in a sub folder as a working copy of the trunk. Problems I foresee:
    - When you want to deploy a project, you might want to freeze its dependencies, right?
    - The dependency code should not end up as a duplicate in the project's repository, I think.
    (Update 1: I additionally assume svn:ignore will pose problems if I cannot fall back on symlinks, see my comment. I am still looking for suggestions that do not require the use of junction points; they are a sort of unsupported hack in WinXP which may break some programs.)

    This leads me to the last part of the question (as one has influence on the other): how do you deploy apps with such dependencies? I've looked into Buildout for Python, but it seems to be tightly related to the Python ecosystem (resolving and fetching Python modules from the web etc). I am very eager to learn about your best practices.
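
    On the "freeze its dependencies" point, a hedged sketch of pinning an svn:externals definition to a fixed revision at release time (Subversion 1.5+ externals syntax; the paths and revision number are invented for the example):

      # point ./shared at a specific revision of the shared library project
      svn propset svn:externals "-r 1234 ^/shared-lib/trunk shared" .
      svn commit -m "Pin shared-lib to r1234 for this release"
      svn update        # checkouts of this project now pull shared-lib@1234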

    Read the article

  • Node-webkit works on Mac, crashes and can't load module on Windows

    - by user756201
    I've created a full node-webkit app that works fine on the Mac OS X version of node-webkit. Everything works, it loads a key external Node.js module (marked), and the world is good.

    However, when I try to run the app on the Windows version of node-webkit as described in the wiki, the app crashes immediately (in fact, it crashes immediately with all the options I try: dragging a folder onto nw.exe, dragging an app.nw compressed folder, and running both from the command line). The only thing that gets me further is opening nw.exe and then pointing the node-webkit location bar to the index file. Then I get this error:

      Uncaught node.js Error
      Error: Cannot find module 'marked'

    I tried commenting out the code that requires marked:

      var marked = require('marked');

    That returns the app to crashing immediately. I assumed it was a context issue between Node.js and the node-webkit browser, but that seems not to be at fault, since I tried this suggestion to make sure it finds the correct file for the marked module... and went right back to immediate crashing. I'm out of ideas, because the crashes don't seem to leave me any way of knowing what the error was.
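
    For context, a hedged sketch of the layout node-webkit expects when a module like marked is bundled with the app rather than resolved from elsewhere (only package.json and node_modules are required names; everything else is illustrative):

      app.nw (or the folder dragged onto nw.exe)
        package.json
        index.html
        node_modules/
          marked/        <- created by running "npm install marked" inside the app folder

      // package.json
      {
        "name": "my-app",
        "main": "index.html",
        "dependencies": { "marked": "*" }
      }

    If node_modules was only installed somewhere global on the Mac, the copy of the app run on Windows would be missing it, which would be consistent with the "Cannot find module" error above.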

    Read the article

  • How to achieve high availability?

    - by tanyehzheng
    My boss wants to have a system that takes a continent-wide catastrophic event into account. He wants two servers in the US and two servers in Asia (1 login server and 1 worker server on each continent). In the event that an earthquake breaks the connection between the two continents, both sides should work alone. When the connection is revived, they should sync with each other and return to normal. An external cloud system is not allowed, as he has no confidence in it.

    - The system should take scalability into account, which means the addition of new servers should be easy to configure.
    - The servers should be load balanced.
    - The connection between the servers should be very secure (encrypted and sent through SSL, although SSL takes care of encryption).
    - The system should let one and only one user log in with one account. (Beware of latency between continents: two users sharing an account may reach both login servers at the same time.)

    Please help. I'm already at my wits' end. Thank you in advance.

    Read the article

  • Application.ProcessMessages hangs???

    - by X-Ray
    My single-threaded Delphi 2009 app (not quite complete yet) has started to have a problem with Application.ProcessMessages hanging.

    My app has a TTimer object that fires every 100 ms to poll an external device. I use Application.ProcessMessages to update the screen when something changes, so the app stays responsive. One of these calls was in a grid's OnMouseDown event; the Application.ProcessMessages in there essentially hung. Removing it was no problem, except that I soon discovered another Application.ProcessMessages that was also blocking.

    I think what may be happening is that the TTimer is - in the app mode I'm currently debugging - probably taking too long to complete. I have prevented the TTimer.OnTimer event handler from re-entering the same code (see below):

      procedure TfrmMeas.tmrCheckTimer(Sender: TObject);
      begin
        if m_CheckTimerBusy then
          exit;
        m_CheckTimerBusy := true;
        try
          PollForAndShowMeasurements;
        finally
          m_CheckTimerBusy := false;
        end;
      end;

    In what places would it be a bad practice to call Application.ProcessMessages? OnPaint routines spring to mind as somewhere it wouldn't make sense. Any general recommendations? I am surprised to see this kind of problem arise at this point in the development!

    Read the article

  • Is private member hacking defined behaviour?

    - by ereOn
    Hi, let's say I have the following class:

      class BritneySpears
      {
      public:
          int getValue() { return m_value; };

      private:
          int m_value;
      };

    It comes from an external library (that I can't change). I obviously can't change the value of m_value, only read it. Even subclassing BritneySpears won't work. What if I define the following class:

      class AshtonKutcher
      {
      public:
          int getValue() { return m_value; };

      public:
          int m_value;
      };

    And then do:

      BritneySpears b;
      // Here comes the ugly hack
      AshtonKutcher* a = reinterpret_cast<AshtonKutcher*>(&b);
      a->m_value = 17;
      // Print out the value
      std::cout << b.getValue() << std::endl;

    I know this is bad practice. But just out of curiosity: is this guaranteed to work? Is it defined behaviour?

    Bonus question: have you ever had to use such an ugly hack? Thanks!

    Read the article

  • How can I tackle 'profoundly found elsewhere' syndrome (inverse of NIH)?

    - by Alistair Knock
    How can I encourage colleagues to embrace small-scale innovation within our team(s), in order to get things done quicker and to encourage skills development? (The term "profoundly found elsewhere" comes from Wikipedia, although it is scarcely used anywhere else apart from a reference to Procter & Gamble.)

    I've worked in environments where there is strong opposition to software which hasn't been developed in-house (usually because there's a large community of developers), and more recently (with far fewer central developers) in ones where off-the-shelf products are far more favoured, for the usual reasons: maintenance, total cost over the product lifecycle, risk management and so on. I think the off-the-shelf argument works in the majority of cases for the majority of users, even though as a developer the product never quite does what I'd like it to do. However, in some cases there are clear gaps where the market isn't able to provide specifically what we would need, or at least isn't able to without charging astronomical consultancy rates for a bespoke solution. These can be small web applications which provide a short-term solution to a particular need in one specific department, or larger developments that have the potential to serve a wider audience, both across the organisation and in external markets.

    The problem is that while development of these applications would be incredibly cheap in terms of developer hours, and delivered very quickly without the need for glacial consultation, the proposal usually falls flat because of risk:
    - "Who'll maintain the project tracker that hasn't had any maintenance for the past 7 years while you're on holiday for 2 weeks?"
    - "What if one of our systems changes and the connector breaks?"
    - "How can you guarantee it's secure/better/faster/cheaper/holier than Company X's?"
    With one developer behind these little projects, the answers are invariably:
    - "Nobody, but..."
    - "It will break, just like any other application would..."
    - "I, uh..."

    How can I better answer these questions and encourage people to take a little risk in order to stimulate creativity and fast-paced, short-lifecycle development, instead of using that 6 months to consult about what tender process we might use?

    Read the article

  • Using Array Controllers to restrict the view in one popup depending on the selection in another. Not

    - by mjohnh
    I am working on an app that is not Core Data based - the data feed is a series of web services. Two arrays are created from the data feed.

    The first holds season data; each array object is an NSDictionary. Two of the NSDictionary entries hold the data to be displayed in the popup ('seasonName') and an id ('seasonID') that acts as a pointer (in an external table) to the matches defined for that season. The second array is also a collection of NSDictionaries. Two of the entries hold the data to be displayed in the popup ('matchDescription') and the id ('matchSeasonId') that points to the seasonId defined in the NSDictionaries in the first array.

    I have two NSPopUps. I want the first to display the season names and the second to display the matches defined for that season, depending on the selection in the first. I'm new at bindings, so excuse me if I've missed something obvious. I've tried using array controllers as follows:
    - SeasonsArrayController: content bound to the app delegate's seasonsPopUpArrayData.
    - seasonsPopup: content bound to SeasonsArrayController.arrangedObjects; content values bound to SeasonsArrayController.arrangedObjects.seasonName.

    I see the season names fine. I can obviously follow a similar route to see the matches, but I then see them all, instead of the list being restricted to the matches for the season highlighted. All the tutorials I can find seem to revolve around Core Data and utilise the relationships defined therein. I don't have that luxury here. Any help very gratefully received.

    Read the article

  • Why is page_load called twice in my web application?

    - by harisri786
    Hi, I have already gone through some of the posts on many websites about page_load being called twice, but my problem is a little bit different from those. My problem is with the landing page of my web application.

    Initially, page_load for the landing page was getting called twice every time it loaded. Since my application is an upgraded one (from VS 2003 to VS 2005/2008), I commented out the "this.load" event wiring in InitializeComponent. Now it works fine when the user first logs in to my web application. But then, whenever the user navigates to this page from any other page in my application, page_load gets called twice.

    Does anybody have any idea why this could be happening? I tried to track the call stack for this, but VS 2008 showed that it was being called from external code. Also, I am using frames in my web application; I wonder if this problem has anything to do with frames. Any help is deeply appreciated. Regards, Hari

    Read the article

  • How to determine the size of a project (lines of code, function points, other)

    - by sixtyfootersdude
    How would you evaluate project size?
    - Part A: before you start a project.
    - Part B: for a complete project.
    I am interested in comparing unrelated projects. Here are some options:

    1) Lines of code. I know that this is not a good metric of productivity, but is this a reasonable measure of project size? If I wanted to estimate how long it would take to recreate a project, would this be a reasonable way to do it? How many lines of code should I estimate per day?

    2) Function points. Function points are defined as the number of:
    - inputs
    - outputs
    - inquiries
    - internal files
    - external interfaces
    Does anyone have a viewpoint on whether this is a good measure? Is there a way to actually do this?

    Does anyone have another solution? Hours taken seems like it could be a useful metric, but not on its own. If I asked you which is the bigger program and gave you two programs, how would you approach the question? I have seen several discussions of this on Stack Overflow, but most discuss how to measure programmer productivity. I am more interested in project size.
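
    As a worked illustration of option 2, here is a rough unadjusted function-point count using the commonly quoted IFPUG average weights (the weights are standard textbook values, but the counts are invented for the example):

      10 inputs              x  4 =  40
       5 outputs             x  5 =  25
       4 inquiries           x  4 =  16
       3 internal files      x 10 =  30
       2 external interfaces x  7 =  14
      ----------------------------------
      unadjusted function points  = 125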

    Read the article

  • Submitting the form even if the form is in an invalid state

    - by Abu Hamzah
    I am using ASP.NET Web Forms and the bassistance.de jQuery validation plugin. I have a bunch of fields in the form and it validates how it's supposed to, but it's still executing my code-behind code. Why is it doing that? Here is my code:

    Form:

      <script type="text/javascript">
          $(document).ready(function() {
              $("#aspnetForm").validate({
                  rules: {
                      <%=txtVisitName.UniqueID %>: {
                          maxlength: 1,
                          //minlength: 12,
                          required: true
                      }
                      .................
                  },
                  messages: {
                      <%=txtVisitName.UniqueID %>: {
                          required: "Enter visit name",
                          minlength: jQuery.format("Enter at least {0} characters.")
                      }
                      ............
                  },
                  success: function(label) {
                      label.html("&nbsp;").addClass("checked");
                  }
              });
              $("#aspnetForm").validate();
          });
      </script>

      <div>
          <h1>Page</h1>
          <asp:Label runat="server" ID='Label1'>Visit Name:</asp:Label>
          <asp:TextBox ID="txtVisitName" runat='server'></asp:TextBox>
          ............
          ...............
          <button id="btnSubmit" name="btnSubmit" type="submit">Submit</button>
      </div>

    And I have an external .js file referenced from my web form page:

      $(document).ready(function() {
          $("#btnSubmit").click(function() {
              SavePage();
          });
      });

    I have a WebMethod in the .cs file and, with a bookmark set there, I can see that line of code is still hit even though my page is in an invalid state. How would I fix this? Thanks.
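
    A hedged sketch of one way around this: the validation plugin only intercepts the form's own submit event, so a click handler that calls SavePage() directly bypasses it. Checking the plugin's documented valid() method first would respect the rules (SavePage is the question's own function; everything else below is illustrative):

      $(document).ready(function() {
          $("#btnSubmit").click(function(e) {
              e.preventDefault();                 // stop the normal postback from this button
              if ($("#aspnetForm").valid()) {     // re-runs the rules defined in validate()
                  SavePage();
              }
          });
      });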

    Read the article

  • Botan linking error on Windows MSVC

    - by Jake Petroules
    I am trying to compile a library linking to the version of Botan from the Qt Creator sources with MSVC 2008, but am receiving the following error. MinGW compiles and links it fine. What is the issue?

      databasecrypto.obj:-1: error: LNK2019: unresolved external symbol
      "public: static unsigned int const Botan::Pipe::DEFAULT_MESSAGE" (?DEFAULT_MESSAGE@Pipe@Botan@@2IB)
      referenced in function "private: static class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >
      __cdecl DatabaseCrypto::b64_encode(class Botan::SecureVector<unsigned char> const &)"
      (?b64_encode@DatabaseCrypto@@CA?AV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@ABV?$SecureVector@E@Botan@@@Z)

      /*!
          Encodes the Botan byte array \a in as a base 64 string.
          \param in The Botan byte array to encode.
      */
      std::string DatabaseCrypto::b64_encode(const SecureVector<Botan::byte> &in)
      {
          Pipe pipe(new Base64_Encoder);
          pipe.process_msg(in);
          return pipe.read_all_as_string();   // <-- default parameter here is Botan::Pipe::DEFAULT_MESSAGE
      }

    Read the article

  • Error in Getting Youtube Video Title, Description and thumbnail

    - by Muhammad Ayyaz Zafar
    I was getting the YouTube title and description from the code below, but now it's not working. I am getting the following errors:

      Warning: DOMDocument::load() [domdocument.load]: http:// wrapper is disabled in the server configuration by allow_url_fopen=0 in /home/colorsfo/public_html/zaroorat/admin/pages/addSongProcess.php on line 16
      Warning: DOMDocument::load(http://gdata.youtube.com/feeds/api/videos/Y7G-tYRzwYY) [domdocument.load]: failed to open stream: no suitable wrapper could be found in /home/colorsfo/public_html/zaroorat/admin/pages/addSongProcess.php on line 16
      Warning: DOMDocument::load() [domdocument.load]: I/O warning : failed to load external entity "http://gdata.youtube.com/feeds/api/videos/Y7G-tYRzwYY" in /home/colorsfo/public_html/zaroorat/admin/pages/addSongProcess.php on line 16

    The following code is used to get the YouTube video data:

      $url = "http://gdata.youtube.com/feeds/api/videos/".$embedCodeParts2[0];
      $doc = new DOMDocument;
      @$doc->load($url);
      $title = $doc->getElementsByTagName("title")->item(0)->nodeValue;
      $videoDescription = $doc->getElementsByTagName("description")->item(0)->nodeValue;

    It was working before (the code works fine on my local server, but on the Internet host it does not). Please guide me on how to fix this error. Thanks for your time.
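
    A hedged sketch of a common workaround: with allow_url_fopen disabled, DOMDocument::load() cannot fetch remote URLs, but fetching the feed with cURL and parsing the returned string still works (standard PHP cURL and DOM functions; $url is the question's own variable):

      $ch = curl_init($url);
      curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);   // return the body instead of echoing it
      $xml = curl_exec($ch);
      curl_close($ch);

      $doc = new DOMDocument;
      $doc->loadXML($xml);   // parse the downloaded string instead of the URL
      $title = $doc->getElementsByTagName("title")->item(0)->nodeValue;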

    Read the article

  • c++ Mixing printf with wprintf (or cout with wcout)

    - by Bo Jensen
    I know you should not mix printing with printf/cout and wprintf/wcout, but I have a hard time finding a good answer as to why, and whether it is possible to get around it. The problem is that I use an external library that prints with printf, while my own code uses wcout. If I write a simple example it works fine, but in my full application the printf statements simply do not print. If this is really a limitation, then there would be many libraries out there which cannot work together with wide-printing applications. Any insight on this is more than welcome.

    Update: I boiled it down to:

      #include <stdio.h>
      #include <stdlib.h>
      #include <iostream>
      #include <readline/readline.h>
      #include <readline/history.h>

      int main()
      {
          char *buf;
          std::wcout << std::endl;  /* ADDING THIS LINE MAKES PRINTF VANISH!!! */
          rl_bind_key('\t', rl_abort);  // disable auto-complete
          while ((buf = readline("my-command : ")) != NULL)
          {
              if (strcmp(buf, "quit") == 0)
                  break;
              std::wcout << buf << std::endl;
              if (buf[0] != 0)
                  add_history(buf);
          }
          free(buf);
          return 0;
      }

    So I guess it might be a flushing problem, but it still looks strange to me. I have to check up on it.
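
    A hedged aside that may explain the symptom: C streams acquire a byte or wide orientation on first use, and the standard leaves mixing the two on the same stream undefined, which is consistent with printf output vanishing once wcout has touched stdout. fwide() from <wchar.h> reports the orientation without changing it; a minimal sketch:

      #include <stdio.h>
      #include <wchar.h>

      int main(void)
      {
          int before = fwide(stdout, 0);       /* 0: stdout not yet oriented     */
          wprintf(L"first wide write\n");      /* first wide op orients it wide  */
          int after = fwide(stdout, 0);        /* now > 0 (wide-oriented)        */
          fwprintf(stdout, L"orientation before=%d after=%d\n", before, after);
          return 0;
      }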

    Read the article

  • How to upload files and store them in a server local path when MS SQL Server allows remote connections

    - by user193655
    I am developing a Win32 Windows application with Delphi and MS SQL Server. It works fine on the LAN, but I am trying to add support for SQL Server remote connections (= working with a DB that can be accessed via an external IP, as described in this article: http://support.microsoft.com/default.aspx?scid=kb;EN-US;914277).

    Basically I have a table in the DB where I keep the DocumentID, the document description and the document path (like \\FILESERVER\MyApplicationDocuments\45.zip). Of course \\FILESERVER is a local (LAN) path for the server, but not for the client (as I am now trying to add support for remote connections). So I need a way to access \\FILESERVER even though, of course, I cannot see it on the LAN. I found the following T-SQL code snippet that is perfect for the "download trick":

      SELECT BulkColumn as MyFile
      FROM OPENROWSET(BULK '\\FILESERVER\MyApplicationDocuments\45.zip', SINGLE_BLOB) AS X

    With the code above I can download a file to the client. But how do I upload one? I need an "upload trick" to be able to insert new files, but also to delete or replace existing files. Can anyone suggest one? If a trick is not available, could you suggest an alternative? Like an extended stored procedure, or calling some .NET assembly from the server.

    Read the article
