Search Results

Search found 8543 results on 342 pages for 'documentation'.

  • Save options selected in AlertDialog spawned from ItemizedOverlay onTap method

    - by ahsteele
    In the description of how to add a list of options to an AlertDialog, the official Android documentation alludes to saving a user's preferences with one of the "data storage techniques." The examples assume the AlertDialog has been spawned within an Activity class. In my case I've created a class that extends ItemizedOverlay. This class overrides the onTap method and uses an AlertDialog to prompt the user to make a multi-choice selection. I would like to capture and persist the selections for each OverlayItem they tap on. That said, I am unsure whether using an AlertDialog in this manner is the right approach, and I am open to other suggestions.

        protected boolean onTap(int index) {
            OverlayItem item = _overlays.get(index);
            final CharSequence[] items = { "WiFi", "BlueTooth" };
            final boolean[] checked = { false, false };

            AlertDialog.Builder builder = new AlertDialog.Builder(_context);
            builder.setTitle(item.getTitle());
            builder.setMultiChoiceItems(items, checked, new DialogInterface.OnMultiChoiceClickListener() {
                @Override
                public void onClick(DialogInterface dialog, int item, boolean isChecked) {
                    // for now just show that the user touched an option
                    Toast.makeText(_context, items[item], Toast.LENGTH_SHORT).show();
                }
            });
            builder.setPositiveButton("Okay", new DialogInterface.OnClickListener() {
                @Override
                public void onClick(DialogInterface dialog, int id) {
                    // should I be examining what was checked here?
                    dialog.dismiss();
                }
            });
            builder.setNegativeButton("Cancel", new DialogInterface.OnClickListener() {
                @Override
                public void onClick(DialogInterface dialog, int id) {
                    dialog.cancel();
                }
            });
            AlertDialog alert = builder.create();
            alert.show();
            return true;
        }
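
    One way to persist those selections (a sketch of my own, not the poster's code) is SharedPreferences, keyed per tapped OverlayItem. The preference file name "overlay_prefs" and the key scheme (item title plus option label) are assumptions made up for the example; it also assumes `item`, `items` and `checked` are final so the anonymous classes can see them, and that the multi-choice listener records each change first (e.g. checked[item] = isChecked, using that listener's int parameter).

        // In the positive button's onClick, after the multi-choice listener has
        // kept checked[] up to date:
        SharedPreferences prefs = _context.getSharedPreferences("overlay_prefs", Context.MODE_PRIVATE);
        SharedPreferences.Editor editor = prefs.edit();
        for (int i = 0; i < items.length; i++) {
            // e.g. key "Coffee Shop:WiFi" -> true
            editor.putBoolean(item.getTitle() + ":" + items[i], checked[i]);
        }
        editor.commit();

    Reading the same keys back with prefs.getBoolean(key, false) before calling setMultiChoiceItems would let each OverlayItem's dialog open pre-populated with the previously saved choices.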

  • Webservices on iPhone - wsdl2objc - Sample Code?

    - by markmcgookin
    I have recently downloaded the most recent build of this awesome tool WSDL2OBJC from Google Code here: http://code.google.com/p/wsdl2objc/ After a bit of tweaking and downloading the latest version of the trunk from the SVN repo, I got a build that generates code for the WSDL I am using, compiles great, and actually installs on my phone! However, I'm not doing anything with it yet, because I am not really sure how to. There is very little in the way of sample code on the site; there is a sample file in the project if you download it, but again it is very complicated and there are no real bits of documentation. Has anyone managed to successfully use this stuff? It seems SOOO powerful and useful, but from a look around the Internet, no one seems to know how to use it. We (all) would love someone who has figured it out to post a simple project or a detailed walk-through of implementing this, so we can put the code that lots of people have worked hard on to good use. If anyone has found a blog entry or has this information it would be great to see! I am totally stuck... with no errors. I would love to know how to use this now that it's all compiled successfully!

  • Problems with Merb on Snow Leopard

    - by hamhoagie
    I've recently started looking at Merb, for use with some small projects around the office. I'm trying to set up my first project following the docs, and am encountering an exception such as:

        foo:beta user$ merb
        Merb root at: /Users/user/code/merb/beta
        Loading init file from ./config/init.rb
        Loading ./config/environments/development.rb
         ~ Connecting to database...
         ~ Loaded slice 'MerbAuthSlicePassword' ...
         ~ Parent pid: 39794
         ~ Compiling routes...
         ~ Activating slice 'MerbAuthSlicePassword' ...
         ~
         ~ FATAL: Mongrel is not installed, but you are trying to use it. You need to either install mongrel or a different Ruby web server, like thin.

    I have installed Mongrel from gem as well as from MacPorts, and am confused by this exception. Significant stats:

        ruby 1.8.7 (2010-01-10 patchlevel 249) [i686-darwin10]

    From my installed gems:

        merb (1.1.0), merb-action-args (1.1.0), merb-assets (1.1.0), merb-auth (1.1.0),
        merb-auth-core (1.1.0), merb-auth-more (1.1.0), merb-auth-slice-password (1.1.0),
        merb-cache (1.1.0), merb-core (1.1.0), merb-exceptions (1.1.0), merb-gen (1.1.0),
        merb-haml (1.1.0), merb-helpers (1.1.0), merb-mailer (1.1.0),
        merb-param-protection (1.1.0), merb-slices (1.1.0), merb_datamapper (1.1.0),
        mongrel (1.1.5)

    Merb documentation is non-existent, so I find myself stuck. Thanks in advance.

  • Versioning friendly, extendible binary file format

    - by Bas Bossink
    In the project I'm currently working on there is a need to save a sizable data structure to disk (edit: think dozens of MBs). Being an optimist, I thought that there must be a standard solution for such a problem; however, up to now I haven't found a solution that satisfies the following requirements:

    - .NET 2.0 support, preferably with a FOSS implementation
    - Version friendly (this should be interpreted as: reading an old version of the format should be relatively simple if the changes in the underlying data structure are simple, say adding/dropping fields)
    - Ability to do some form of random access where part of the data can be extended after initial creation (think of this as extending intermediate results)
    - Space and time efficient (XML has been excluded as an option given this requirement)

    Options considered so far:

    - Protocol Buffers: turned down by the verdict of the documentation about Large Data Sets - since that comment suggested adding another layer on top, it would call for additional complexity which I wish to have handled by the file format itself.
    - HDF5, EXI: do not seem to have .NET implementations
    - SQLite / SQL Server Compact Edition: the data structure at hand would result in a pretty complex table structure that seems too heavyweight for the intended use
    - BSON: does not appear to support requirement 3.
    - Fast Infoset: only seems to have paid .NET implementations.

    Any recommendations or pointers are greatly appreciated. Furthermore, if you believe any of the information above is not true, please provide pointers/examples to prove me wrong.

  • Setting up a "cookieless domain" to improve site performance

    - by Django Reinhardt
    I was reading Google's documentation about improving site speed. One of their recommendations is serving static content (images, CSS, JS, etc.) from a "cookieless domain":

        Static content, such as images, JS and CSS files, don't need to be accompanied by cookies, as there is no user interaction with these resources. You can decrease request latency by serving static resources from a domain that doesn't serve cookies.

    Google then says that the best way to do this is to buy a new domain and point it to your current one:

        To reserve a cookieless domain for serving static content, register a new domain name and configure your DNS database with a CNAME record that points the new domain to your existing domain A record. Configure your web server to serve static resources from the new domain, and do not allow any cookies to be set anywhere on this domain. In your web pages, reference the domain name in the URLs for the static resources.

    This is pretty straightforward stuff, except for the bit where it says to "configure your web server to serve static resources from the new domain, and do not allow any cookies to be set anywhere on this domain". From what I've read, there's no setting in IIS that lets you say "serve static resources only", so how do I prevent ASP.NET from setting cookies on this new domain? At present, even if I'm just requesting a .jpg from the new domain, it sets a cookie in my browser, even though our application's cookies are scoped to our old domain. For example, ASP.NET sets an ".ASPXANONYMOUS" cookie that (as far as I'm aware) we're not telling it to set. Apologies if this is a real newb question, I'm new at this! Thanks.
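
    A hedged sketch of one common way to satisfy the "no cookies on this domain" part when the static host is its own IIS site: give that site a stripped-down web.config that switches off the ASP.NET features which emit cookies. This is an assumed checklist rather than a verified recipe - which elements matter depends on what the machine- and root-level configuration enables - but the .ASPXANONYMOUS cookie specifically is written by the anonymous identification feature.

        <!-- web.config for the static-content site only (assumed sketch) -->
        <configuration>
          <system.web>
            <anonymousIdentification enabled="false" />   <!-- .ASPXANONYMOUS -->
            <sessionState mode="Off" />                   <!-- ASP.NET_SessionId -->
            <authentication mode="None" />                <!-- .ASPXAUTH -->
            <roleManager enabled="false" />               <!-- .ASPXROLES -->
          </system.web>
        </configuration>

    With the static files served by a separate site (or application) carrying this configuration, requests for a .jpg should come back without Set-Cookie headers, which is what the Google guidance is after.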

  • Eclipse PDE - Plug-in, Feature, and Product Versioning

    - by Michael
    I am quite confused about the process of upgrading version numbers in dependent plug-ins, features, and products in a fairly large Eclipse workspace. I have made API changes to Java code residing in an existing plug-in, which thus require an increase of the major segment of the version identifier. This plug-in serves as a dependency of a given feature, and the feature is in turn included in a product. From the documentation at http://wiki.eclipse.org/Version_Numbering, I understand (for the most part) when the proper segment should be increased on the containing plug-in itself. However, how would this major version change on the plug-in affect dependent, "down-the-line" items (e.g., features, products)? For example, assume we have the typical "Hello World" setup as follows: Plug-in: com.example.helloworld, version 1.0.0; Feature: com.example.helloworld.feature, version 1.0.0; Product: com.example.helloworld.product, version 1.0.0. If I were to make an API change in the plug-in, this would require a version update to 2.0.0. What would then be the version of the feature - 1.1.0? The same question applies at the product level as well (e.g., if the feature is 1.1.0 or 2.0.0, what is the product version number)? I'm sure this is quite the newbie question, so I apologize for wasting anyone's time and effort. I have searched for this type of content, but all I am finding are examples showing how to develop a plug-in, feature, product, and update site for the first time. The only other content related to my search has been about developing feature patches, and it has not touched on the versioning aspect as much as I would prefer. I am coming into an Eclipse RCP / PDE environment for the first time and need to learn the proper way and/or best practices for making such versioning updates and how to best reflect them throughout the other dependent projects in the workspace.
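
    For what it is worth, the purely mechanical side of "how does the bump reach the dependents" lives in the dependents' own metadata. The fragments below are an assumed illustration reusing the question's names, not taken from any real workspace: an OSGi manifest expresses its tolerance for the plug-in's version as a range, and the feature.xml records the plug-in version it packages (often substituted at build time), so both have to be revisited when the plug-in jumps to 2.0.0.

        Require-Bundle: com.example.helloworld;bundle-version="[1.0.0,2.0.0)"

    A dependent that wants the new API widens or moves that range, for example:

        Require-Bundle: com.example.helloworld;bundle-version="[2.0.0,3.0.0)"

    while the feature lists the plug-in it ships:

        <plugin id="com.example.helloworld" download-size="0" install-size="0" version="2.0.0" unpack="false"/>

    What number the feature and product themselves should then carry is the policy question the cited wiki page addresses; the sketch above deliberately does not answer it.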

  • realloc() & ARC

    - by RynoB
    How would I be able to rewrite the following utility class, which gets all the class name strings for a specific type using the Objective-C runtime functions shown below? The ARC documentation specifically states that realloc should be avoided, and I also get the following compiler error on the line classList = realloc(classList, sizeof(Class) * numClasses): "Implicit conversion of a non-Objective-C pointer type 'void *' to '__unsafe_unretained Class *' is disallowed with ARC". The code below is based on the original article, which can be found here.

        + (NSArray *)classStringsForClassesOfType:(Class)filterType {
            int numClasses = 0, newNumClasses = objc_getClassList(NULL, 0);
            Class *classList = NULL;
            while (numClasses < newNumClasses) {
                numClasses = newNumClasses;
                classList = realloc(classList, sizeof(Class) * numClasses);
                newNumClasses = objc_getClassList(classList, numClasses);
            }
            NSMutableArray *classesArray = [NSMutableArray array];
            for (int i = 0; i < numClasses; i++) {
                Class superClass = classList[i];
                do {
                    superClass = class_getSuperclass(superClass);
                    if (superClass == filterType) {
                        [classesArray addObject:NSStringFromClass(classList[i])];
                        break;
                    }
                } while (superClass);
            }
            free(classList);
            return classesArray;
        }

    Your help will be much appreciated. Thanks

  • Best Practices for Setup and Management of an Open Source Project

    - by VirtuosiMedia
    Later this year I want to release a PHP framework that I've been working on as open source. I do use source control (SVN), but on an extremely limited basis. I'm self-taught, I develop by myself, and I don't have the experience of working with large teams. I have some ideas about what can help make a project successful, but I'm fuzzy on some of the details. Since it's not yet released, I want to do everything I can to set up the right infrastructure from the beginning. What do I need to know in order to set up and manage a successful project?

    Some ideas that I have to make it successful (beyond marketing it):

    - Good documentation and tutorials
    - Automated unit tests and builds to push updates to the website
    - A clear roadmap
    - Bug tracking integrated with the source control
    - A style guide to keep the code consistent along with clear
    - A forum for the community to get support, share ideas, etc.
    - A good example application built with the framework
    - A blog to keep the community informed
    - Maintaining backwards compatibility wherever possible

    Some of my questions:

    - How do I set up and automate a one-step submit-test-commit-generate API docs-push update to website process?
    - How do I handle (technically) submissions from other users? How can I ensure that those submissions must be approved before being integrated?
    - What are some of the pitfalls that can be avoided in terms of the project community? I'd prefer it to be as friendly and helpful as possible without a lot of drama.

    I'd love to learn from your experience on any of these points. If you think I'm missing anything big, please share that as well. Any resources (preferably geared toward a beginner) that you could point me towards would also be greatly appreciated.

  • Eclipse / Aptana File Sync Solutions

    - by Brad
    Our development team uses Eclipse + Aptana to do their web development work. Currently, most of them are mapping their Eclipse projects directly to the web server. I'd rather they create a local project and use that to sync to the web server project directory they are working on. The issue is that there aren't any good solutions, which is just appalling given the popularity of the two.

    The FileSync plugin for Eclipse is only one-way: if another developer makes a change to the file on the server, a dev isn't even notified and could overwrite the change. The File Transfer option in Aptana 2.0 doesn't support any sort of sync, just manually uploading/downloading files. The Sync option in Aptana 1.5.1 doesn't allow you to merge files when they are different; you can only update one or the other. It does allow you to view a diff (but only if you right-click and select it), and in that diff you can't make any changes.

    I did find a way to allow files to be uploaded to their Sync repositories in Aptana using Eclipse Monkey. However, it doesn't work if a user saves multiple files at once ('Save All'). Additionally, there is no notification if a user opens a local file that has an updated copy on the server. I tried to add one using Eclipse Monkey, but I couldn't find any sort of listener in the Eclipse API to do it, and Eclipse Monkey documentation is few and far between.

    My only solution at this point is just to let them continue to map directly to the server, or to ask them to do a manual download before they do any work (but again, what if someone uploads a change right after they do that?). Anyone have any ideas?

  • Maintaining a Python web application: heavier vs lighter framework?

    - by Tiberiu Ana
    Five+ years from now, you are hired to support and extend a data-centric web application written in Python that hasn't been kept up to date. Would you prefer that it had been written in the then-current version of Django/Pylons, using the available standard components, or kept minimal with something like CherryPy/web.py and a few library dependencies?

    Heavy framework

    - Advantages: standard approach to application design and structure, as encouraged by the framework; less application code to worry about.
    - Disadvantages: requires learning the framework to understand how things work; broken things in an old version of the framework are difficult to fix; upgrading to a new version is potentially difficult due to changing APIs; finding relevant documentation/help is potentially difficult due to changing APIs.

    Light framework

    - Advantages: most application code is directly "visible"; only needed features are implemented; the architecture should be simpler to understand; less need to upgrade external dependencies; easier to upgrade external dependencies.
    - Disadvantages: some reinventing the wheel; non-standard design and structure (with the associated unique issues and bugs).

    I will update the list with any helpful answers.

  • Coherent access to mainframe files from Win32 application and IBM RDZ/Eclipse?

    - by Ira Baxter
    I have a suite of tools for processing IBM COBOL source code; these tools are built as Win32 applications and talk to Windows (including network) files using traditional Windows file system calls (open, close, read, write) and work just fine, thank you. I'd like to integrate these with Eclipse; we think we understand how to get Eclipse to do UI for us. The problem is that Eclipse/RDZ users access mainframe files through some IBM magic. In "How does RDZ access mainframe files" I tried to understand how Eclipse accesses files on a mainframe. Apparently Eclipse/RDZ has a secret filesystem access backdoor not available to normal mortals. At issue is how our tools, reading some Windows-accessible file (local disk file, NFS to mainframe, ...), can associate such files with the files that Eclipse can access or is using. Ideally we'd like UI-integrated versions of our tools to take an Eclipse file-name string for a mainframe file, pass it to our Windows application to process, have the Windows application open/read/process the file, and return results associated with that file to the Eclipse UI.

    - Is there a canonical file name path that would be used with mainframe NFS that would be equivalent to the name or access object that Eclipse RDZ uses to access the same file?
    - Are all operations doable internally by Eclipse also doable by the mainframe NFS? [For instance, can NFS read/update an element in a partitioned data set? Can Eclipse RDZ? Does it matter?]
    - Is the mainframe file access available to custom Java code running under Eclipse RDZ (e.g., equivalents of open/close/read/write based on filename/path/something)? If so, can somebody steer me towards documentation describing the access methods?

    Has anybody else already solved this problem, or does anyone have a good suggestion?

  • Duration of Excessive GC Time in "java.lang.OutOfMemoryError: GC overhead limit exceeded"

    - by jilles de wit
    Occasionally, somewhere between once every 2 days and once every 2 weeks, my application crashes in a seemingly random location in the code with: java.lang.OutOfMemoryError: GC overhead limit exceeded. If I google this error I come to this SO question, and that led me to this piece of Sun documentation, which explains:

        The parallel collector will throw an OutOfMemoryError if too much time is being spent in garbage collection: if more than 98% of the total time is spent in garbage collection and less than 2% of the heap is recovered, an OutOfMemoryError will be thrown. This feature is designed to prevent applications from running for an extended period of time while making little or no progress because the heap is too small. If necessary, this feature can be disabled by adding the option -XX:-UseGCOverheadLimit to the command line.

    This tells me that my application is apparently spending 98% of the total time in garbage collection to recover only 2% of the heap. But 98% of what time? 98% of the entire two weeks the application has been running? 98% of the last millisecond?

    I'm trying to determine the best approach to actually solving this issue rather than just using -XX:-UseGCOverheadLimit, but I feel a need to better understand the issue I'm solving.
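
    One way to see the window being measured is to sample GC time yourself and compare it to wall-clock time. The sketch below is my own illustration, not something from the question or the Sun document: it uses the standard GarbageCollectorMXBean API, which reports the cumulative count and milliseconds each collector has spent since JVM start, so the difference between successive samples gives the fraction of recent time that went to GC. (On the HotSpot JVMs of that era, -verbose:gc or -XX:+PrintGCDetails -XX:+PrintGCTimeStamps prints equivalent per-collection timings to the GC log.)

        import java.lang.management.GarbageCollectorMXBean;
        import java.lang.management.ManagementFactory;

        public class GcSampler {
            public static void main(String[] args) throws InterruptedException {
                long lastGcMs = 0;
                long lastWallMs = System.currentTimeMillis();
                while (true) {
                    Thread.sleep(10000); // sample every 10 seconds
                    long gcMs = 0;
                    for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                        long t = gc.getCollectionTime(); // cumulative ms spent in this collector, -1 if unavailable
                        if (t > 0) {
                            gcMs += t;
                        }
                    }
                    long nowMs = System.currentTimeMillis();
                    double pct = 100.0 * (gcMs - lastGcMs) / (nowMs - lastWallMs);
                    System.out.printf("GC consumed %.1f%% of the last %d ms%n", pct, nowMs - lastWallMs);
                    lastGcMs = gcMs;
                    lastWallMs = nowMs;
                }
            }
        }

    Run as a background thread inside the affected application (shown as a standalone main only for brevity), a log like this makes it clear whether the 98% figure refers to a short recent window or to a long, slowly degrading stretch before the error.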

  • Fluent config not generating mapping files

    - by rboarman
    Hello, I am trying to get Fluent nHibernate to generate mappings so I can take a look at the files and the sql. My code is based on this post and on what I can glean from the documentation: http://stackoverflow.com/questions/1375146/fluent-mapping-entities-and-classmaps-in-different-assemblies I am using the latest code from git. Here's my config code:

        Configuration cfg = new Configuration();
        var ft = Fluently.Configure(cfg);

        // DbConnection by fluent
        ft.Database(
            MsSqlConfiguration
                .MsSql2008
                .ConnectionString("……")
                .ShowSql()
                .UseReflectionOptimizer()
        );

        // get mapping files
        ft.Mappings(m =>
        {
            // set up the mapping locations
            m.FluentMappings.AddFromAssemblyOf<Entity>()
                .ExportTo(@"C:\temp");
            m.Apply(cfg);
        });

    I also tried:

        var sessionFactory = Fluently.Configure()
            .Database(MsSqlConfiguration
                .MsSql2008
                .ShowSql()
                .ConnectionString("……"))
            .Mappings(p => p.FluentMappings
                .AddFromAssemblyOf<Entity>()
                .ExportTo(@"c:\temp\"))
            .BuildSessionFactory();

    I have verified that the connection string is correct. The issue is that no mapping files show up in the ExportTo folder and no sql code shows up in the output window or in the log file. No errors or exceptions are generated either. I have no idea where to go from here. Thank you in advance. Rick

  • Objective-C Protocols within Protocols

    - by LucasTizma
    I recently began trying my hand at using protocols in my Objective-C development as an (obvious) means of delegating tasks more appropriately among my classes. I completely understand the basic notion of protocols and how they work. However, I came across a roadblock when trying to create a custom protocol that in turn implements another protocol. I have since discovered the solution, but I am curious why the following DOES NOT work:

        @protocol STPickerViewDelegate < UIPickerViewDelegate >
        - ( void )customCallback;
        @end

        @interface STPickerView : UIPickerView
        {
            id < STPickerViewDelegate > delegate;
        }
        @property ( nonatomic, assign ) id < STPickerViewDelegate > delegate;
        @end

    Then in a view controller, which conforms to STPickerViewDelegate:

        STPickerView * pickerView = [ [ STPickerView alloc ] init ];
        pickerView.delegate = self;

        - ( void )customCallback { ... }

        - ( NSString * )pickerView:( UIPickerView * )pickerView titleForRow:( NSInteger )row forComponent:( NSInteger )component { ... }

    The problem was that pickerView:titleForRow:forComponent: was never being called. On the other hand, customCallback was being called just fine, which isn't too surprising. I don't understand why STPickerViewDelegate, which itself conforms to UIPickerViewDelegate, does not notify my view controller when events from UIPickerViewDelegate are supposed to occur. Per my understanding of Apple's documentation, if a protocol (A) itself conforms to another protocol (B), then a class (C) that conforms to the first protocol (A) must also conform to the second protocol (B), which is exactly the behavior I want and expected.

    What I ended up doing was removing the id< STPickerViewDelegate > delegate property from STPickerView and instead doing something like the following in my STPickerView implementation where I want to invoke customCallback:

        if ( [ self.delegate respondsToSelector:@selector( customCallback ) ] ) {
            [ self.delegate performSelector:@selector( customCallback ) ];
        }

    This works just fine, but I really am puzzled as to why my original approach did not work.

  • Zendx JQuery Autocomplete

    - by emeraldjava
    I've been trying to get the Zend jQuery autocomplete function working when I noticed this section in the Zend documentation:

        The following UI widgets are available as form view helpers. Make sure you use the correct version of the jQuery UI library to be able to use them. The Google CDN only offers jQuery UI up to version 1.5.2. Some other components are only available from jQuery UI SVN, since they have been removed from the announced 1.6 release.

        autoComplete($id, $value, $params, $attribs): The AutoComplete View helper will be included in a future jQuery UI version (currently only via jQuery SVN) and creates a text field and registers it to have auto-complete functionality. The completion data source has to be given as jQuery related parameters 'url' or 'data' as described in the jQuery UI manual.

    Does anybody know which SVN URL, tag, or branch I need to download to get a JavaScript file with the autocomplete functions available in it? At the moment, my Bootstrap.php has:

        $view->addHelperPath('ZendX/JQuery/View/Helper/', 'ZendX_JQuery_View_Helper');
        $view->jQuery()->enable();
        $view->jQuery()->uiEnable();
        Zend_Controller_Action_HelperBroker::addHelper(
            new ZendX_JQuery_Controller_Action_Helper_AutoComplete()
        );

        // Add it to the ViewRenderer
        $viewRenderer = new Zend_Controller_Action_Helper_ViewRenderer();
        $viewRenderer->setView($view);
        Zend_Controller_Action_HelperBroker::addHelper($viewRenderer);

    In my layout, I define the jQuery UI version I want:

        <?php echo $this->jQuery()->setUiVersion('1.7.2'); ?>

    Finally, my index.phtml has the autocomplete widget:

        <p><?php $data = array('New York', 'Tokyo', 'Berlin', 'London', 'Sydney', 'Bern', 'Boston', 'Baltimore'); ?>
        <?php echo $this->autocomplete("ac1", "", array('data' => $data)); ?></p>

    I'm using Zend 1.8.3 at the moment.

  • Using httplib2 in python 3 properly? (Timeout problems)

    - by Sho Minamimoto
    Hey, first time post; I'm really stuck on httplib2. I've been reading up on it from diveintopython3.org, but it mentions nothing about a timeout function. I looked up the documentation, but the only thing I see is the ability to pass a timeout int, and there are no units specified (seconds? milliseconds? What's the default if None?). This is what I have (I also have code to check what the response is and try again, but it's never tried more than once):

        h = httplib2.Http('.cache', timeout=None)
        for url in list:
            response, content = h.request(url)
            # more stuff...

    So the Http object stays around until some arbitrary time, but I'm downloading a ton of pages from the same server, and after a while it hangs on getting a page. No errors are thrown; the thing just hangs at a page. So then I try:

        h = httplib2.Http('.cache', timeout=None)
        for url in list:
            try:
                response, content = h.request(url)
            except:
                h = httplib2.Http('.cache', timeout=None)
            # more stuff...

    But then it recreates another Http object every time (goes down the 'except' path)... I don't understand how to keep getting with the same object until it expires and I make another. Also, is there a way to set a timeout on an individual request? Thanks for the help!

  • Picasa access in android: PicasaUploadActivity

    - by Glyptodon
    I am new to Android, and I'm struggling to figure out exactly what tools are available to me. I am developing for Android 2.0.1 for now, just because that is what my device runs. Specifically, I am writing an app from which I would like to upload images to a Picasa album. I am almost sure this is supported; for example, the built-in (Google?) photo viewer has a 'share' button with a Picasa option, and I even found a small bit of sample code, including the snippet [borrowed code! apologies if this is against the rules..]:

        temp.setComponent(new ComponentName(
            "com.google.android.apps.uploader",
            "com.google.android.apps.uploader.picasa.PicasaUploadActivity"));
        startActivityForResult(temp, PICASA_INTENT);

    which looks like exactly what I want. But I can find no documentation anywhere. I am in fact quite unclear how to use this type of resource. From within Eclipse, do I need to include another project, com.google.android.apps.uploader? If so, how do I get it? How do I include it? Is there any working sample code provided for me to peer at?
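
    A hedged sketch of how that snippet is typically completed: the component names come straight from the question, but everything else here (the ACTION_SEND intent, the image/jpeg MIME type, the EXTRA_STREAM Uri, and the request code) is assumed boilerplate for handing an image to another installed app's activity, not something verified against the Google uploader itself. Since this goes through an Intent, the uploader only needs to be installed on the device; its project does not have to be imported into Eclipse.

        // Assumed to live inside an Activity; imageUri is a content:// Uri for the photo.
        private static final int PICASA_INTENT = 1001; // arbitrary request code

        private void sharePhotoToPicasa(Uri imageUri) {
            Intent temp = new Intent(Intent.ACTION_SEND);
            temp.setType("image/jpeg");
            temp.putExtra(Intent.EXTRA_STREAM, imageUri);
            // Component names taken verbatim from the snippet in the question.
            temp.setComponent(new ComponentName(
                    "com.google.android.apps.uploader",
                    "com.google.android.apps.uploader.picasa.PicasaUploadActivity"));
            startActivityForResult(temp, PICASA_INTENT);
        }

    Wrapping the startActivityForResult call in a try/catch for ActivityNotFoundException (or checking with PackageManager first) would cover devices where the uploader app is missing.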

  • DB2Command ExecuteNonQuery Insert multiple rows problem

    - by DB2 Nubie
    I'm attempting to insert multiple rows into a DB2 database using C# code like this:

        string query = "INSERT INTO TESTDB2.RG_Table (V,E,L,N,Q,B,S,P) values" +
            "('lkjlkj', 'iouoiu', '2009-03-27 12:01:19', 'nnne', 'sdfdf', NULL, NULL, NULL)," +
            "('lkjlk2', 'iuoiu2', '2009-03-27 12:01:19', 'nnne2', 'sddf2', NULL, NULL, NULL)";
        DB2Command cmd = new DB2Command(query, this.transactionConnection, this.transaction);
        cmd.ExecuteNonQuery();

    If I stop building the query string after the first set of values is included, it executes without an error. Attempting to load multiple values using this method results in the following error:

        Upload error : ERROR [42601] [IBM][DB2] SQL0104N An unexpected token "," was found following "". Expected tokens may include: "". SQLSTATE=42601

    The SQL syntax matches that which I have read elsewhere, such as http://stackoverflow.com/questions/452859/inserting-multiple-rows-in-a-single-sql-query, and IBM's documentation gives this example:

        cmd = conn.CreateCommand();
        cmd.Transaction = trans;
        cmd.CommandText = "INSERT INTO company_a VALUES(5275, 'Sanders', 20, 'Mgr', 15, 18357.50), " +
            "(5265, 'Pernal', 20, 'Sales', NULL, 18171.25), " +
            "(5791, 'O''Brien', 38, 'Sales', 9, 18006.00)";
        cmd.ExecuteNonQuery();

    Can anyone explain what could account for this?

  • what is the best way to stream an audio file to website users/listeners

    - by Naveen Chamikara Gamage
    I'm developing a music site which will stream audio files stored on a server to users; the audio files will be played through a Flash player placed in a web page. As I heard it, I need to use a streaming media server for streaming audio files (around 2 MB to 3 MB in size). Do I need to use one? I found some streaming media server software like http://www.icecast.org - but according to their documentation it is used for streaming radio stations and for live streaming, whereas I just need to stream audio files quickly, at low size (low bandwidth), with good quality.

    I heard I need to encode the audio files first and then send them to listeners, and on their end the audio files need to be decoded again. Is that true? How can I do that?

    If I need to use a special web server, where should I host my files? Any good hosting providers?

    If I host audio files on a normal web server, it will use HTTP or TCP to deliver my audio files to users/listeners, but I found that HTTP and TCP are not good choices for multimedia purposes like streaming audio and video files; they are used for delivering HTML and the like. I found I should use RTSP or UDP for streaming audio files. What should I use?

    I know that MP3 files have much better quality than the other formats, but they also make the audio files large. Which format should I use? Most of the best-quality audio files are more than 7 MB, so I'm planning to convert them myself using a software tool so I can get smaller files with a reasonable level of quality. If I'm converting my audio files, what is a good bitrate to use? Any known good software for converting audio files while keeping the quality at a good level?

    Note: I know that I will not need complex requirements at the beginning of the site, but I wanted to know what the best approaches are, like the ones soundcloud.com uses.

  • Should all public methods of an API be documented?

    - by cynicalman
    When writing "library" type classes, is it better practice to always write markup documentation (i.e. javadoc) in java or assume that the code can be "self-documenting"? For example, given the following method stub: /** * Copies all readable bytes from the provided input stream to the provided output * stream. The output stream will be flushed, but neither stream will be closed. * * @param inStream an InputStream from which to read bytes. * @param outStream an OutputStream to which to copy the read bytes. * @throws IOException if there are any errors reading or writing. */ public void copyStream(InputStream inStream, OutputStream outStream) throws IOException { // copy the stream } The javadoc seems to be self-evident, and noise that just needs to be updated if the funcion is changed at all. But the sentence about flushing and not closing the stream could be valuable. So, when writing a library, is it best to: a) always document b) document anything that isn't obvious c) never document (code should speak for itself!) I usually use b), myself (since the code can be self-documenting otherwise)...

  • YUI DataTable - Howto have just one paginator?

    - by Rollo Tomazzi
    Hello, I'm using the YUI DataTable in a Grails 1.1 project using the Grails UI plugin 1.0.2 (YUI being 2.6.1). By default, the DataTable displays 2 paginators: one above and another one below the table. Looking up the YUI API documentation, I could see that I can pass an array of YUI containers as a config parameter but - what are the names of these containers? I've tried loooking at the HTML of the page using Firebug. The ID of the divs containing the paginators are: yui-dt0-paginator0 (above) and yui-dt0-paginator1 (below). If I use them to configure the containers for the navigator, then the navigator is just not displayed at all. Here's the relevant extract of the GSP page containing the Datatable element. <div class="body"> <h1>This is the List of Control Accounts</h1> <g:if test="${flash.message}"> <div class="message">${flash.message}</div> </g:if> <div class="yui-skin-sam"> <gui:dataTable controller="controlAccount" action="enhancedListDataTableJSON" columnDefs="[ [key:'id', label:'ID'], [key:'col1', label:'Col 1', sortable: true, resizeable: true], [key:'col2', label:'Col 2', sortable: true, resizeable: true] ]" sortedBy="col1" rowsPerPage="20" paginatorConfig="[ template:'{PreviousPageLink} {PageLinks} {NextPageLink} {CurrentPageReport}', pageReportTemplate:'{totalRecords} total accounts', alwaysVisible:true, containers:'yui-dt0-paginator1' ]" rowExpansion="true" /> </div> </div> Any help? Thanks! Rollo

  • Which knowledge base/rule-based inference engine to choose for real time Runway incursion prevention

    - by Piligrim
    Hello, we are designing a project that would listen to the dialog between airport controllers and pilots to prevent runway incursions (e.g. one airplane taking off while another is crossing the runway). Our professor wants us to use Jena for the knowledge base (or anything else, but it should be some sort of rule-based engine). Inference is not the main thing in Jena, and there is not much documentation and there are few examples of this. So we need an engine that would take messages from pilots as input and output possible risks of incursion or any other error in the message protocol. It should be easy to write rules, and it should be easy to feed the engine real-time data. I imagine it something like this:

    - A pilot sends a message that he is landing on some runway; the system remembers that the runway is busy and no one should cross it.
    - If someone is given an instruction to cross this runway, the engine should fire a rule that something is wrong.
    - When the pilot sends a message that he has left the runway and is going to the gate, the system clears the runway and lets other planes use it.

    So is Jena, or Prolog, or any other rule engine suitable for this? I mean, it is suitable, but do we really need to use it? I asked the prof. if we could just keep the state of the runway and use some simple checks based on the messages we receive, and he said that it is not scalable and we need the knowledge base. Can someone give me any advice on which approach to use for this system? If you recommend a knowledge base, then which one should we use? The project is written in Java. Thank you.
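
    For a sense of what the rule-based side might look like, here is a minimal, assumption-heavy sketch using Jena's general-purpose rule engine. Nothing in it comes from the question: the package names assume the Jena 2.x API of that period (newer releases moved to org.apache.jena), and the atc: vocabulary (occupies, clearedToCross, inConflictOn) is invented purely to show the shape of a rule, not a worked-out incursion model.

        import java.util.List;

        import com.hp.hpl.jena.rdf.model.*;
        import com.hp.hpl.jena.reasoner.rulesys.*;
        import com.hp.hpl.jena.util.PrintUtil;

        public class RunwayRulesSketch {
            public static void main(String[] args) {
                String ns = "http://example.org/atc#";
                PrintUtil.registerPrefix("atc", ns); // lets the rule text use atc:* names

                // One toy rule: a crossing clearance on an occupied runway is flagged.
                String rules =
                    "[incursion: (?plane atc:occupies ?rwy) (?other atc:clearedToCross ?rwy) "
                    + " -> (?other atc:inConflictOn ?rwy)]";

                Model base = ModelFactory.createDefaultModel();
                Resource rwy27 = base.createResource(ns + "runway27");
                base.createResource(ns + "AF123").addProperty(base.createProperty(ns + "occupies"), rwy27);
                base.createResource(ns + "DL456").addProperty(base.createProperty(ns + "clearedToCross"), rwy27);

                List<Rule> parsed = Rule.parseRules(rules);
                InfModel inf = ModelFactory.createInfModel(new GenericRuleReasoner(parsed), base);

                // Any inferred atc:inConflictOn statement means the rule fired.
                StmtIterator it = inf.listStatements(null, inf.getProperty(ns + "inConflictOn"), (RDFNode) null);
                while (it.hasNext()) {
                    System.out.println("Possible incursion: " + it.next());
                }
            }
        }

    Whether this beats keeping the runway state in a plain map and checking it directly is exactly the professor's scalability argument; what the rule engine mainly buys is the ability to add and edit rules without touching the message-handling code.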

  • Linux Kernel - Red/Black Trees

    - by CodeRanger
    I'm trying to implement a red/black tree in Linux per task_struct using code from linux/rbtree.h. I can get a red/black tree inserting properly in a standalone space in the kernel such as a module, but when I try to get the same code to function with the rb_root declared in either task_struct or task_struct->files_struct, I get a SEGFAULT every time I try an insert. Here's some code:

    In task_struct I create an rb_root struct for my tree (not a pointer). In init_task.h, macro INIT_TASK(tsk), I set this equal to RB_ROOT. To do an insert, I use this code:

        rb_insert(&(current->fd_tree), &rbnode);

    This is where the issue occurs. My insert function is the standard insert that is documented in all red/black tree documentation for the kernel:

        int my_insert(struct rb_root *root, struct mytype *data)
        {
            struct rb_node **new = &(root->rb_node), *parent = NULL;

            /* Figure out where to put new node */
            while (*new) {
                struct mytype *this = container_of(*new, struct mytype, node);
                int result = strcmp(data->keystring, this->keystring);

                parent = *new;
                if (result < 0)
                    new = &((*new)->rb_left);
                else if (result > 0)
                    new = &((*new)->rb_right);
                else
                    return FALSE;
            }

            /* Add new node and rebalance tree. */
            rb_link_node(&data->node, parent, new);
            rb_insert_color(&data->node, root);
            return TRUE;
        }

    Is there something I'm missing? Some reason this would work fine if I made a tree root outside of task_struct? If I make the rb_root inside of a module, this insert works fine. But once I put the actual tree root in task_struct, or even in task_struct->files_struct, I get a SEGFAULT. Can a root node not be added in these structs? Any tips are greatly appreciated. I've tried nearly everything I can think of.

  • wsdl return an array of complex types

    - by Anand
    Hi, I have defined a web service that will return data from my MySQL database. I have written the web service in PHP. Now I have defined a complex type as follows:

        $server->wsdl->addComplexType(
            'Category',
            'complexType',
            'struct',
            'all',
            '',
            array(
                'category_parent_id' => array('name' => 'category_parent_id', 'type' => 'xsd:int'),
                'category_child_id'  => array('name' => 'category_child_id',  'type' => 'xsd:int'),
                'category_list'      => array('name' => 'category_list',      'type' => 'xsd:int')
            )
        );

    The above complex type is a row in a table in my database. Now my function must send an array of these rows, so how do I achieve that? My code is as follows:

        require_once('./nusoap/nusoap.php');

        $server = new soap_server;
        $server->configureWSDL('productwsdl', 'urn:productwsdl');

        // Register the data structures used by the service
        $server->wsdl->addComplexType(
            'Category',
            'complexType',
            'struct',
            'all',
            '',
            array(
                'category_parent_id' => array('name' => 'category_parent_id', 'type' => 'xsd:int'),
                'category_child_id'  => array('name' => 'category_child_id',  'type' => 'xsd:int'),
                'category_list'      => array('name' => 'category_list',      'type' => 'xsd:int')
            )
        );

        $server->register('getaproduct',                           // method name
            array(),                                               // input parameters
            //array('return' => array('result' => 'tns:Category')), // output parameters
            array('return' => 'tns:Category'),                     // output parameters
            'urn:productwsdl',                                     // namespace
            'urn:productwsdl#getaproduct',                         // soapaction
            'rpc',                                                 // style
            'encoded',                                             // use
            'Get the product categories'                           // documentation
        );

        function getaproduct()
        {
            $conn = mysql_connect('localhost', 'root', '');
            mysql_select_db('sssl', $conn);
            $sql = "SELECT * FROM jos_vm_category_xref";
            $q = mysql_query($sql);
            while ($r = mysql_fetch_array($q)) {
                $items[] = array(
                    'category_parent_id' => $r['category_parent_id'],
                    'category_child_id'  => $r['category_child_id'],
                    'category_list'      => $r['category_list']
                );
            }
            return $items;
        }

        // Use the request to (try to) invoke the service
        $HTTP_RAW_POST_DATA = isset($HTTP_RAW_POST_DATA) ? $HTTP_RAW_POST_DATA : '';
        $server->service($HTTP_RAW_POST_DATA);

  • Setting UIImage dimensions on UITableViewCell image

    - by bbrown
    I've got a standard UITableViewCell where I'm using the text and image properties to display a favicon.ico and a label. For the most part, this works really well since UIImage supports the ICO format. However, some sites (like Amazon.com say) have favicon.icos that make use of the ICO format's ability to store multiple sizes in the same file. Amazon stores four different sizes, all the way up to 48x48. This results in most images being 16x16 except for a few that come in at 32x32 or 48x48 and make everything look terrible. I have searched here, the official forum, the documentation, and elsewhere without success. I have tried everything that I could think of to constrain the image size. The only thing that worked was an undocumented method, which I'm not about to use. This is my first app and my first experience with Cocoa (came from C#). In case I wasn't clear in what I'm looking for, ideally the advice would center around setting the dimensions of the UIImage so that the 48x48 version would scale down to 16x16 or a method to tell UIImage to use the 16x16 version present in the ICO file. I don't necessarily need code: just a suggestion of an approach would do me fine. Does anyone have any suggestions? (I asked in the official forum as well because I've sunk more than a day into this already. If a solution is posted there, I'll put it here as well.)
