Search Results

Search found 124254 results on 4971 pages for 'ubuntu one api'.

  • Is Subversion (SVN) supported on Ubuntu 10.04 LTS 32-bit?

    - by Chad
    I've set up Subversion on Ubuntu 10.04 but can't get authentication to work. I believe all my config files are set up correctly, yet I keep getting prompted for credentials on an svn checkout, as if there were an issue with apache2 talking to svnserve. If I allow anonymous access, checkout works fine. Does anybody know of a known issue with Subversion on 10.04, or can you spot an error in my configuration? Below is my setup:

      # fresh install of Ubuntu 10.04 LTS 32-bit
      sudo apt-get install apache2 apache2-utils -y
      sudo apt-get install subversion libapache2-svn subversion-tools -y
      sudo mkdir /svn
      sudo svnadmin create /svn/DataTeam
      sudo svnadmin create /svn/ReportingTeam

      # set up the dav_svn config file
      sudo vi /etc/apache2/mods-available/dav_svn.conf
      # replace the file contents with the following:
      <Location /svn>
        DAV svn
        SVNParentPath /svn/
        AuthType Basic
        AuthName "Subversion Server"
        AuthUserFile /etc/apache2/dav_svn.passwd
        Require valid-user
        AuthzSVNAccessFile /etc/apache2/svn_acl
      </Location>

      sudo touch /etc/apache2/svn_acl
      # replace the file contents with the following:
      [groups]
      dba_group = tom, jerry
      report_group = tom
      [DataTeam:/]
      @dba_group = rw
      [ReportingTeam:/]
      @report_group = rw

      # start/stop subversion automatically
      sudo /etc/init.d/apache2 restart
      cd /etc/init.d/
      # note: the original used "sudo cat 'text' > file", which treats the text
      # as a filename and redirects as the unprivileged user; echo + tee works
      echo 'svnserve -d -r /svn' | sudo tee svnserve
      echo '/etc/init.d/apache2 restart' | sudo tee -a svnserve
      sudo chmod +x svnserve
      sudo update-rc.d svnserve defaults

      # add svn users
      sudo htpasswd -cpb /etc/apache2/dav_svn.passwd tom tom
      sudo htpasswd -pb /etc/apache2/dav_svn.passwd jerry jerry

      # test by performing a checkout
      sudo svnserve -d -r /svn
      sudo /etc/init.d/apache2 restart
      svn checkout http://127.0.0.1/svn/DataTeam /tmp/DataTeam
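
    For what it's worth, a minimal sketch of how one might test the Apache side in isolation. Checkouts over http:// are served by mod_dav_svn inside apache2, so svnserve is not involved in that path at all (user "tom" is the one from the config above):

      # confirm the required Apache modules are enabled, then restart
      sudo a2enmod dav dav_svn
      sudo /etc/init.d/apache2 restart
      # exercise Basic auth directly, bypassing any svn client credential cache
      curl -u tom:tom http://127.0.0.1/svn/DataTeam/
      # check that the password file actually contains the expected users
      sudo cat /etc/apache2/dav_svn.passwd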

  • Is it possible to boot Windows 7 by default when your hard drive is partitioned with two OSes?

    - by Muhammad
    I have a PC with a hard drive that's partitioned between Windows 7 and Ubuntu. I primarily use Windows 7 and only occasionally (once a week) use Ubuntu. When I boot the computer, I'm usually taken to a boot menu with about 5 options: 3 are Ubuntu configurations, one is swap, and the last is Windows 7. After I select Windows 7 or Ubuntu from this menu, I get taken to another menu that again asks me for Windows 7 or Ubuntu; this time there are only 2 options, Windows 7 and Ubuntu. [Side note: from experience I've realized most boot menus are timed, and so are these.] So if I ever turn my computer on without actually sitting in front of it for a few minutes, it boots into Ubuntu. I'm trying to figure out what I need to do to get rid of the duplicate boot menus, and if possible I'd like help changing my boot options so that Windows 7 loads by default (even with the boot menu wait of about 30 seconds). My hard drive's partitions are laid out like this: Windows 7 (C partition); Multimedia (D partition, which I just use for backup/non-OS stuff); Ubuntu (home directory); Swap. Is there any other information I need to provide?
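
    For reference, a sketch of how the default entry is usually picked on a GRUB 2 setup (assuming the Ubuntu side installed GRUB 2; the exact menu title below is hypothetical and must match what your boot menu actually shows):

      # in /etc/default/grub:
      #   GRUB_DEFAULT="Windows 7 (loader) (on /dev/sda1)"   # or a 0-based entry number
      #   GRUB_TIMEOUT=5
      # then regenerate the menu:
      sudo update-grub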

  • GRUB hangs at "Starting up ..." when USB flash card reader is plugged in (on Ubuntu Hardy)

    - by Laurence Gonsalves
    I have a PC with Ubuntu Hardy installed. The machine boots fine unless my USB flash card reader (one of those N-in-1 readers by MediaGear) is plugged in at startup. If the reader is plugged in, the boot process proceeds as normal until it gets to the screen that says "Starting up ...". At that point it just hangs forever. To work around this I currently leave the reader unplugged when booting, and then plug it back in after I see that Ubuntu is actually starting. This is annoying, though, especially when I reboot the machine (typically for updates), forget to unplug the reader, and walk away, only to come back hours later to find the machine hung. My guess is that the presence of the reader is confusing GRUB about where to find the kernel. The weird thing is that GRUB is on the same drive as the kernel I want it to boot, so clearly the drive is still readable even when the flash card reader is plugged in. Is there some way I can tell GRUB never to go looking at the flash card reader?
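
    A hedged sketch of the usual first things to check here. GRUB legacy (which Hardy uses) maps BIOS drive numbers through /boot/grub/device.map, and an N-in-1 reader can shift that enumeration at boot:

      # see how GRUB thinks the BIOS drives map to devices
      cat /boot/grub/device.map
      # list filesystem UUIDs, so menu.lst can identify the root fs stably
      sudo blkid
      # in /boot/grub/menu.lst, prefer a UUID over a device name on the kernel line, e.g.:
      #   kernel /boot/vmlinuz-2.6.24-... root=UUID=<your-root-uuid> ro quiet splash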

  • Can't get Intel GMA 500 video driver to work with Ubuntu 10.10 Netbook Edition

    - by Matthew
    First of all, I am completely new to Linux, so if you respond, please do so in a 'Linux for dummies' tone so that my brain will be able to process it. I recently installed Ubuntu on my Dell Mini Inspiron 1010. It has 1 GB of RAM and an Intel Atom processor that uses the Intel GMA 500 graphics accelerator driver in Windows, where it runs 1024x768 comfortably under XP. While installing Ubuntu I had quite a bit of trouble with my display, and I am still unable to change my settings from 800x600x0x0; there is no hardware acceleration. I visited the Intel site and installed the Linux drivers with the help of a friend, but still no change. I tried adding resolution settings through xorg.conf, but they could not be applied even after I added the values. I am probably going about this totally wrong, but I've spent quite a lot of time browsing forums and still haven't found a solution. Any help would be greatly appreciated, as would any other beginner tips you might have. Thanks in advance, Matt
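
    In case it helps, a sketch of the route that worked for many GMA 500 ("Poulsbo") machines around this release. The PPA and package names below are from memory and may have changed or been retired since, so treat them as assumptions to verify:

      sudo add-apt-repository ppa:gma500/ppa
      sudo apt-get update
      sudo apt-get install poulsbo-driver-2d poulsbo-driver-3d poulsbo-config
      sudo reboot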

  • Error when installing Wubi on Windows 7

    - by P'sao
    I'm installing Ubuntu on Windows 7 (Wubi 11.10). When it's nearly done, it gives me this error in the log file:

      Usage: /cygdrive/c/Users/Psao/AppData/Local/Temp/pyl10D2.tmp/bin/resize2fs.exe -f C:/ubuntu/disks/root.disk 17744M [-d debug_flags] [-f] [-F] [-p] device [new_size]
      Traceback (most recent call last):
        File "\lib\wubi\backends\common\tasklist.py", line 197, in __call__
        File "\lib\wubi\backends\win32\backend.py", line 461, in expand_diskimage
        File "\lib\wubi\backends\common\utils.py", line 66, in run_command
      Exception: Error executing command
      >>command=C:\Users\P'sao\AppData\Local\Temp\pyl10D2.tmp\bin\resize2fs.exe -f C:\ubuntu\disks\root.disk 17744M
      >>retval=1
      >>stderr=
      >>stdout=resize2fs 1.40.6 (09-Feb-2008)
      [same Usage line as above]
      10-25 20:31 DEBUG TaskList: # Cancelling tasklist
      10-25 20:31 DEBUG TaskList: # Finished tasklist
      10-25 20:31 ERROR root: Error executing command
      [same command/retval/stdout block as above]
      Traceback (most recent call last):
        File "\lib\wubi\application.py", line 58, in run
        File "\lib\wubi\application.py", line 132, in select_task
        File "\lib\wubi\application.py", line 158, in run_installer
        File "\lib\wubi\backends\common\tasklist.py", line 197, in __call__
        File "\lib\wubi\backends\win32\backend.py", line 461, in expand_diskimage
        File "\lib\wubi\backends\common\utils.py", line 66, in run_command
      Exception: Error executing command
      [same command/retval/stdout block as above]

    Can someone help me?
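
    One hedged observation: the failing command embeds the Windows user name P'sao, and a stray apostrophe in a path is a known way to break the command lines Wubi builds (note the Usage output shows the same path without the apostrophe). A sketch of a workaround, assuming that is the cause, is to point the installer's temp directory somewhere plain before running it:

      REM from a Windows command prompt (paths are hypothetical)
      mkdir C:\wubitmp
      set TMP=C:\wubitmp
      set TEMP=C:\wubitmp
      wubi.exe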

  • How to determine the root cause of a system lockup on Ubuntu 8.04 LTS?

    - by jdt141
    I'm currently working on a project that involves setting up a PC/104 stack running Ubuntu 8.04 LTS. We need to use the PC/104 stack because it's an embedded application, and we're required to use a DeviceNet peripheral card to communicate with other devices. (DeviceNet is just a protocol on top of CAN.) Anyway, the following hardware is on the stack:

      Kontron MOPSPM104 with a 1GHz Intel Celeron processor
      ConnectTech FlashDrive/104 4GB Industrial Temp (-40 to +85 C)
      Woodhead (Molex) PC104DVNIO DeviceNet card
      A run-of-the-mill 104 power supply

    The Kontron board offers two serial ports, one VGA out, and two USB ports. The DeviceNet card is an ISA card. Because of this (per the user's guide for the Kontron board), I have manually set the IRQs appropriately in the BIOS, and turned off ACPI both in the BIOS and via the appropriate flag in GRUB. I've installed Ubuntu 8.04 Desktop, 32-bit. The problem I'm having is that, from time to time, the entire 104 stack locks up. This only seems to happen in two cases, in both of which we're running GNOME: when we run our custom application that uses the DeviceNet card, or (more frequently) when we're running Firefox, either browsing for information or testing, typically by streaming video from an IP camera. The reason I ask this question is that I cannot determine the root cause of the lockup: the IRQs appear to be correctly configured, both in the BIOS and as the kernel sees them, and nothing is logged to dmesg. If you all could help me determine the root cause of this lockup, I would greatly appreciate it. Thanks.
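
    For hard lockups that leave nothing in dmesg, the usual trick is to get kernel messages off the box before it dies. A sketch, using the serial ports this board conveniently has (port and speed are assumptions to adjust):

      # on the PC/104 stack: add to the kernel line in /boot/grub/menu.lst
      #   console=tty0 console=ttyS0,115200n8
      # on a second machine attached by null-modem cable:
      screen /dev/ttyS0 115200
      # also worth watching whether the ISA card's IRQ actually fires:
      watch -n1 cat /proc/interrupts
      # and keeping the kernel log on screen up to the moment of the hang:
      tail -f /var/log/kern.log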

  • Installing 64-bit Ubuntu Server 12.04 LTS in a VM with VMware Player on a 64-bit Windows 7 PC

    - by WannaBeAGeek
    I'm trying to create a VM, using VMware Player, with an ISO image of Ubuntu Server 12.04 (LTS). The machine I'm doing the installation on has an Intel(R) Core(TM) i5 CPU and runs 64-bit Windows 7. I managed to create the VM (gave a username and password, configured the network, etc.), but I can't install Ubuntu Server. First I get this alert:

      Binary translation is incompatible with long mode on this platform. Disabling long mode. Without long mode support, the virtual machine will not be able to run 64-bit code. For more details see http://vmware.com/info?id=152.

    When I click OK, I get another alert:

      This virtual machine is configured for 64-bit guest operating systems. However, 64-bit operation is not possible. This host supports Intel VT-x, but Intel VT-x is disabled. Intel VT-x might be disabled if it has been disabled in the BIOS/firmware settings or the host has not been power-cycled since changing this setting. (1) Verify that the BIOS/firmware settings enable Intel VT-x and disable 'trusted execution.' (2) Power-cycle the host if either of these BIOS/firmware settings have been changed. (3) Power-cycle the host if you have not done so since installing VMware Player. (4) Update the host's BIOS/firmware to the latest version. For more detailed information, see http://vmware.com/info?id=152.

    Then, when I click OK, my VM exits and I get taken back to the VMware Player home screen. I don't know much about hardware and virtualisation, so there might be some necessary info I'm not giving; please don't hesitate to let me know what's missing from my post. Thanks :)
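
    As a sanity check, here is a sketch of how to confirm from a Linux live CD/USB that the CPU supports VT-x at all; note this says nothing about whether the BIOS has it enabled, which is what the second alert is complaining about:

      egrep -c '(vmx|svm)' /proc/cpuinfo   # non-zero means hardware support exists

    If the BIOS setting was changed recently, a full power-off (not just a reboot) is usually needed before VT-x actually takes effect, exactly as the alert's step (2) says.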

  • What are the mysql-5.5 compilation configuration arguments on Ubuntu 10.04?

    - by photon
    I want to install MySQL 5.5 on my Ubuntu 10.04 desktop system, but I'm not sure what arguments I should use after the cmake command, even though I've seen these articles:

      https://wikis.oracle.com/display/mysql/Cmake
      Building mysql-5.5.19 from source on ubuntu 11.10 with the static flag
      Compile MySQL 5.5.15 from source using autorun.sh and cmake, unable to start MySQL after

    Would anyone like to share the mysql-5.5 compilation configuration arguments for Ubuntu 10.04?

      $ cmake   # what arguments to enter for this command?

    Update: I tried

      cmake . -DBUILD_CONFIG=mysql_release -DCMAKE_INSTALL_PREFIX=/path/to/mysql_installation_dir -DWITH_SSL=no

    The official web site says the source package has to be compiled with cmake, but according to a tech blog it doesn't need compiling at all; which one is correct? When I ran cmake, I also got the following error message:

      $ sudo cmake . -DBUILD_CONFIG=mysql_release -DCMAKE_INSTALL_PREFIX=/usr/local/mysql_community_5.5
      -- The CXX compiler identification is unknown
      CMake Error: your CXX compiler: "CMAKE_CXX_COMPILER-NOTFOUND" was not found. Please set CMAKE_CXX_COMPILER to a valid compiler path or name.
      CMake Error at cmake/build_configurations/mysql_release.cmake:126 (MESSAGE):

    Clarification: I'm a slow-thinking guy and I cannot find anyone around me to give me useful help, so I've come here hoping someone is kind and generous enough to take the time to post the details.
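
    The "CMAKE_CXX_COMPILER-NOTFOUND" error means no C++ compiler is installed. A sketch of a minimal build on 10.04 (the dependency list is the usual one for MySQL 5.5, but the source tree's own docs are authoritative):

      sudo apt-get install build-essential cmake libncurses5-dev bison
      # from the unpacked mysql-5.5 source directory:
      cmake . -DBUILD_CONFIG=mysql_release -DCMAKE_INSTALL_PREFIX=/usr/local/mysql_community_5.5 -DWITH_SSL=no
      make
      sudo make install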

  • Ubuntu USB that can be booted on Mac and Windows?

    - by An Original Alias
    I am a little frustrated at having to use two separate USB sticks for booting on Mac and on Windows. Is there a way I can have the version for Mac and the version for Windows on one USB stick? Or, even better, one version that can be booted on both? I have heard of multibootiso, but I'm not entirely sure it will work on an iMac. If needed, I am willing to use the terminal to make this happen, even if it's a long, complicated process.
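
    A sketch of the simplest single-stick approach, assuming a 12.04-era or later "hybrid" ISO, which can be written raw to the stick and booted on ordinary BIOS PCs; Macs of that era were pickier and often needed rEFIt/rEFInd installed to offer the stick at boot:

      # /dev/sdX is the whole USB device, not a partition; double-check with fdisk -l
      sudo dd if=ubuntu-desktop-amd64.iso of=/dev/sdX bs=4M
      sync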

  • How do you enable webcam support in Facebook for Ubuntu 10.04?

    - by Jonathan
    I think I have finally arrived at an insolvable equation: Chromium v7 + Ubuntu 10.04 + Sun Java 6 + webcam + Facebook + Flash 10 = non-functional. All of the items listed above are potential points of failure in this situation, and any help narrowing them down would be fantastic. I am simply trying to enable webcam support directly through Facebook's website. Forum searches and the usual googling turn up few posts related to this specific equation. Two of the major suggestions are: 1) Installing the Sun-provided (I refuse to say Oracle, sob) Java implementation instead of the OpenJDK normally installed on Ubuntu. Yes, after installing it, I did update all my defaults to use the Sun commands over the OpenJDK ones. 2) Somehow enabling Facebook as a site permitted to access my webcam, using the Flash settings. I have not been able to explore option 2, because I cannot find a way to adjust the Flash settings in Chromium 7. One other factor that does not help: I am pretty sure Facebook changes its webcam interface every 10 seconds just to keep troubleshooters and support personnel on their toes. If anyone has an OTP that informs us of the next shift in the app, a leak would be greatly appreciated!

  • On Ubuntu I get "-bash: ./flume: No such file or directory", but flume is there and executable; the same binary is OK on RHEL

    - by lcbrevard
    This is already posted on Server Fault (and may be more appropriate there), reworked a bit from the original posting. We have a product built on CentOS 4 32-bit Linux that runs unmodified on 32- and 64-bit CentOS/RHEL 4 and 5 and on SLES 10. It also runs unmodified on SLES 9 64-bit. [SLES 9 32-bit requires a different libstdc++.] The name of the main binary executable is 'flume'. Yesterday we tried to put it on 64-bit Ubuntu 10 and, even though the file is there and the right size, we get:

      -bash: ./flume: No such file or directory

    'file flume' shows it to be a 32-bit ELF (I can't remember the exact output, and the system is on an isolated network). If it's put into /usr/local/bin, then 'which flume' returns /usr/local/bin/flume. The file is marked as executable (I did 'chmod +x flume'), and lsattr shows no problems with attribute bits. I have not been able to try 'ldd flume' yet, nor 'strace flume'; I am currently dealing with an air conditioning failure. [It's been that kind of week!] I now suspect that some library is not there. This is a profoundly unhelpful message, and one I have never seen before. Is it peculiar to Ubuntu, or perhaps just to this installation? We gave up and moved to a RHEL 4 system and everything is fine. But I sure would like to know what causes this.
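
    For the record, a sketch of the classic diagnosis for this exact symptom: "No such file or directory" for a file that plainly exists usually means the kernel cannot find the binary's ELF interpreter, and on a 64-bit Ubuntu without the 32-bit runtime, /lib/ld-linux.so.2 is missing (the package name below is the 10.x-era one):

      file flume                           # should say "ELF 32-bit LSB executable"
      readelf -l flume | grep interpreter  # typically /lib/ld-linux.so.2
      ls -l /lib/ld-linux.so.2             # absent => that's your "No such file"
      sudo apt-get install ia32-libs       # pulls in the 32-bit runtime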

  • Ubuntu's garbage collection cron job for PHP sessions takes 25 minutes to run, why?

    - by Lamah
    Ubuntu has a cron job set up which looks for and deletes old PHP sessions:

      # Look for and purge old sessions every 30 minutes
      09,39 * * * *     root   [ -x /usr/lib/php5/maxlifetime ] \
         && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 \
         -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir \
         fuser -s {} 2> /dev/null \; -delete

    My problem is that this process is taking a very long time to run, with lots of disk IO. On my CPU usage graph, the cleanup runs show up as teal spikes. At the beginning of the period, PHP's cleanup jobs were scheduled at the default 09 and 39 minute times. At 15:00 I removed the 39 minute time from cron, so a cleanup job twice the size runs half as often (the peaks get twice as wide and half as frequent). The corresponding graphs for IO time and disk operations tell the same story.

    At the peak, where there were about 14,000 sessions active, the cleanup can be seen to run for a full 25 minutes, apparently using 100% of one core of the CPU and what seems to be 100% of the disk IO for the entire period. Why is it so resource intensive? An ls of the session directory /var/lib/php5 takes just a fraction of a second. So why does it take a full 25 minutes to trim old sessions? Is there anything I can do to speed this up? The filesystem for this device is currently ext4, running on Ubuntu Precise 12.04 64-bit.

    EDIT: I suspect that the load is due to the unusual process "fuser" (since I expect a simple rm to be a damn sight faster than the performance I'm seeing). I'm going to remove the use of fuser and see what happens.
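
    For comparison, a sketch of the same job with the per-file fuser check dropped; fuser forks once per session file, which fits the symptoms, but removing it means a session still in active use could be deleted:

      09,39 * * * *     root   [ -x /usr/lib/php5/maxlifetime ] \
         && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 \
         -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) -delete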

  • Building a Store Locator ASP.NET Application Using Google Maps API (Part 3)

    Over the past two weeks I've shown how to build a store locator application using ASP.NET and the free Google Maps API and Google's geocoding service. Part 1 looked at creating the database to record the store locations. This database contains a table named Stores with columns capturing each store's address and latitude and longitude coordinates. Part 1 also showed how to use Google's geocoding service to translate a user-entered address into latitude and longitude coordinates, which could then be used to retrieve and display those stores within (roughly) a 15 mile area. At the end of Part 1, the results page listed the nearby stores in a grid. In Part 2 we used the Google Maps API to add an interactive map to the search results page, with each nearby store displayed on the map as a marker. The map added in Part 2 certainly improves the search results page, but the way the nearby stores are displayed on the map leaves a bit to be desired. For starters, each nearby store is displayed on the map using the same marker icon, namely a red pushpin. This makes it difficult to match up the nearby stores listed in the grid with those displayed on the map. Hovering the mouse over a marker on the map displays the store number in a tooltip, but ideally a user could click a marker to see more detailed information about the store, such as its address, phone number, a photo of the storefront, and so forth. This third and final installment shows how to enhance the map created in Part 2. Specifically, we'll see how to customize the marker icons displayed on the map to make it easier to identify which marker corresponds to which nearby store location. We'll also look at adding rich popup windows to each marker, which include detailed store information and can be updated further to include pictures and other HTML content. Read on to learn more!

  • Great Free Courses on Building HTML5 apps using ASP.NET Web API, Knockout.js and jQuery

    - by ScottGu
    Pluralsight has developed some great training courses on the new .NET 4.5 and VS 2012 release, including two fantastic courses from John Papa that cover how to build HTML5 web apps using ASP.NET Web API, Knockout and jQuery:

      Single Page Apps with HTML5, Web API, Knockout and jQuery
      Building HTML5 and JavaScript Apps with MVVM and Knockout

    Free 1-month subscription to the courses: Pluralsight is offering a special promotion that allows you to get a free 1-month subscription to watch the above courses at no cost. There is no obligation to buy anything at the end of the offer, and you don't need to supply a credit card to take part. To get access to the courses you simply follow @pluralsight and @john_papa on Twitter and then visit this page and enter your Twitter name using the form on it. Pluralsight will then send you a private Twitter message containing the access code that you can use to subscribe to the courses (and download the course exercise files). Once you are subscribed you have one month to watch the courses, as many times as you want. Pluralsight is running the promotion through Sept 18th, so sign up now to get access. Hope this helps, Scott. P.S. If you are new to Twitter, you can also optionally follow me: @scottgu

  • Handling SEO for infinite pages that cause slow external API calls

    - by Noam
    I have an 'infinite' number of pages on my site which rely on an external API. Generating each page takes time (about a minute). Links on the site point to such pages; when a user clicks one, it is generated while he waits. Since I cannot pre-create them all, I am trying to figure out the best SEO approach to handling these pages. Options:
      1. Serve really simple pages to the web spiders, and have only real users fetch the data and generate the full page. I'm a little 'afraid' Google will see this as low-quality content, which might also look duplicated.
      2. Put them under a directory on my site (e.g. /non-generated/) and disallow it in robots.txt (see the sketch after this list). The problem here is that I don't want users to have to deal with a different URL when they want to share the page or make sense of it. I've thought about redirecting real users from this URL back to the regular hierarchy, thereby 'fooling' Google into not reaching them; again, not sure Google would like me for that.
      3. Let Google crawl these pages. The main problem is that I can't control the rate of the API calls, and my site would also seem slower than it should from a spider's perspective (if it only crawled the pre-generated pages, it would think the site is much faster).
    Which approach would you suggest?
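
    For reference, option 2's disallow rule is a two-line robots.txt sketch (the /non-generated/ prefix is the hypothetical one from the question):

      User-agent: *
      Disallow: /non-generated/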

  • How to get the initial API right using TDD?

    - by Vytautas Mackonis
    This might be a rather silly question, as I am making my first attempts at TDD. I loved the sense of confidence it brings, and the generally better structure of my code, but when I started applying it to something bigger than one-class toy examples, I ran into difficulties. Suppose you are writing a library of sorts. You know what it has to do, and you know the general way it is supposed to be implemented (architecture-wise), but you keep "discovering" that you need to make changes to your public API as you code. Perhaps you need to transform a private method into a strategy pattern (and now need to pass a mocked strategy in your tests); perhaps you misplaced a responsibility here and there and split an existing class. When you are improving upon existing code, TDD seems a really good fit, but when you are writing everything from scratch, the API you write tests for is a bit "blurry" unless you do a big design up front. What do you do when you already have 30 tests on a method that had its signature (and, for that matter, behavior) changed? That is a lot of tests to change once they add up.

  • Using the Coherence API to get POF bytes

    - by Bruno.Borges
    Someone raised the question of how to use the Coherence API to get the bytes of an object in POF (Portable Object Format) programmatically. So I came up with this small snippet that shows how simple the API usage is :-)

      SimplePofContext spc = new SimplePofContext();
      // consider UserSerializer an implementation of PofSerializer
      spc.registerUserType(0, User.class, new UserSerializer());

      User u = new User();
      u.setId(21);
      u.setName("Some Name");
      u.setEmail("[email protected]");

      ByteArrayOutputStream baos = new ByteArrayOutputStream();
      DataOutput dataOutput = new DataOutputStream(baos);
      BufferOutput bufferOutput = new WrapperBufferOutput(dataOutput);
      spc.serialize(bufferOutput, u);

      byte[] byteArray = baos.toByteArray();
      System.out.println(Arrays.toString(byteArray));

    Easy, isn't it?

  • API class with intensive network requests

    - by Marco Acierno
    I'm working on an API which acts as an "intermediary" between a REST API and the developer. That way, when the programmer does something like this:

      User user = client.getUser(nickname);

    it executes a network request to download the user's data from the service, and then the programmer can use the data with calls like

      user.getLocation();
      user.getDisplayName();

    and so on. Now, there are some methods like getFollowers() that require another network request, and I could handle that in two ways:
      1. Download all the data in the getUser method (not just the most important parts). But that could make the request very slow, since it would have to hit various URLs.
      2. Download the data when the user calls the method. This looks like the best way, and to improve it I could cache the result, so the next call to getFollowers() returns immediately with the data already downloaded instead of executing the request again.
    What is the best way? And should I let methods like getUser and getFollowers block until the data is ready, or should I implement a callback that fires when the data is ready? (That feels like JavaScript.)

  • Authenticating a native mobile app using a REST API

    - by Supercell
    I'm starting a new project soon, targeting mobile applications on all major platforms (iOS, Android, Windows), with a client-server architecture. The app is both informational and transactional. For the transactional part, users are required to have an account and log in before a transaction can be made. I'm new to mobile development, so I don't know how the authentication part is done on these platforms. The clients will communicate with the server through a REST API, over HTTPS of course. I haven't yet decided whether I want the user to log in when they open the app, or only when they perform a transaction. I have the following questions:
      1. In the Facebook application, you only enter your credentials the first time you open the application; after that, you're automatically signed in every time the app starts. How does one accomplish this? Simply by encrypting and storing the credentials on the device and sending them every time the app starts?
      2. Do I need to authenticate the user for each (transactional) request made to the REST API, or should I use a token-based approach?
    Please feel free to suggest other ways of doing authentication. Thanks!
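
    For question 2, a sketch of the common token-based flow over HTTPS (all endpoints and values here are hypothetical): the client exchanges credentials for a token once, stores the token on the device, and sends it with every subsequent request instead of the credentials.

      # 1) log in once, receive a token
      curl -s -X POST https://api.example.com/auth/token \
           -d 'username=alice&password=secret'
      #    => {"token": "abc123", "expires_in": 86400}
      # 2) authenticate each API call with the stored token
      curl -s -H 'Authorization: Bearer abc123' https://api.example.com/transactions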

  • Google Currency Converter JSON API

    - by Gopinath
    There are many live currency conversion services available on the web; the popular ones are Google, Yahoo, MSN and XE. Among these four, Google is the developers' darling, and it provides a simple JSON API that can be integrated into your applications:

      http://www.google.com/ig/calculator?hl=en&q=1USD=?INR

    Using the API is very simple: it takes two parameters as input. The first parameter, "hl", is the language code in which you want the output. The second parameter, "q", is the conversion query in the format <number><from currency code>=?<to currency code>. The URL given above requests conversion of 1 USD to INR. The JSON output for that query would be similar to

      {lhs: "1 U.S. dollar",rhs: "54.4602984 Indian rupees",error: "",icc: true}

    Examples:
      100 USD in INR:  http://www.google.com/ig/calculator?hl=en&q=100USD=?INR
      1 GBP in INR:  http://www.google.com/ig/calculator?hl=en&q=1GBP=?INR
      1 USD in INR, output in French:  http://www.google.com/ig/calculator?hl=fr&q=1USD=?INR

    This is an undocumented service, so expect changes at any time. But as long as it works, you have a programmatic way to convert currencies.
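
    For a quick test from the shell (the query string contains '=' and '?', so quote the whole URL):

      curl 'http://www.google.com/ig/calculator?hl=en&q=1USD=?INR'
      # => {lhs: "1 U.S. dollar",rhs: "54.4602984 Indian rupees",error: "",icc: true}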

  • What are the potential problems with exposing the Facebook API secret?

    - by genehack
    I'm writing a little web utility that posts status updates to Twitter and/or Facebook. That involved creating 'applications' with both of those services in order to get API keys and 'secrets'. My question is how well-protected I really need to keep those secrets: for this to work at all, you seem to need the secret to interact with the authentication part of the service, to grant the app access to your account and/or permission to post updates on your behalf. Facebook's documentation says to protect the secret, but at least one other Facebook utility distributes the API key and secret in its source. It's important to note: this isn't your standard Facebook 'application' that runs within the context of Facebook, nor is it a standard "desktop"-style compiled app -- it's a web-based application intended to be run on your own web server. The audience for this is probably small and somewhat more sophisticated than average, so one technical alternative would be to require people to obtain their own API key and secret to use the app. That seems like a lot of work, however, and a fairly large barrier to entry for anybody using this. Does anybody know, or have any insight on, what sort of trouble I'm letting myself in for if I put both the secrets and the API keys in my app's config and check it into GitHub for all the world to see?
