Search Results

Search found 7436 results on 298 pages for 'simultaneous calls'.


  • Home PBX to answer/take external calls via PSTN

    - by ageis23
    I have a Thomson 585v6 router with built-in VoIP support, and I want to use a softphone to make calls (for example, to phone my dad's mobile). Any incoming calls to my normal BT number should be answered on my PC as well. What I have done so far: I've wired the PSTN port on the router to the telephone jack, and the router is connected to my PC. I have installed Asterisk on the PC I want to take calls on, and the SIP client authenticates to the SIP server. Output from Twinkle: Sun 22:46:45 home, registration succeeded (expires = 3600 seconds). How do I take external calls / answer incoming calls from the PSTN?
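
    A minimal sketch of the dialplan piece that is usually missing in setups like this, assuming Asterisk's standard configuration files; the context and peer names (from-pstn, SIP/twinkle) are placeholders for whatever your trunk and softphone are actually called:

        ; extensions.conf
        [from-pstn]                        ; context your PSTN/FXO trunk delivers calls into
        exten => s,1,Answer()
        exten => s,n,Dial(SIP/twinkle,30)  ; ring the registered softphone for 30 seconds
        exten => s,n,Hangup()

    The trunk definition (e.g. in sip.conf or the FXO channel config) must point its context at from-pstn, and the softphone's peer entry must match the name used in Dial().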

    Read the article

  • International Calls For Free

    Are you sick of paying high fees for making international calls? There are much better ways out there, and doing your research on the best product for you will save you a ton of money. First of course... [Author: Matthew Bailey - Computers and Internet - April 21, 2010]

    Read the article

  • Does Your Website Have “Calls to Action”?

    Does your website have “Calls to Action”? If your answer is “NO” then you have greatly diminished the “Goal Realization Capability” of your website. Think about an interested visitor to your websit... [Author: Neil Paige - Web Design and Development - September 03, 2009]

    Read the article

  • Tip: Replacing Html.Encode Calls With New Html Encoding Syntax

    Like the well-disciplined secure developer that you are, when you built your ASP.NET MVC 1.0 application, you remembered to call Html.Encode every time you output a value that came from user input. Didn't you? Well, in ASP.NET MVC 2 running on ASP.NET 4, those calls can be replaced with the new HTML encoding syntax (aka the code nugget). I've written a three-part series on the topic: Html Encoding Code Blocks With ASP.NET 4; Html Encoding Nuggets With ASP.NET MVC 2; Using AntiXss as the default...
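
    For reference, a minimal before/after of the syntax change (Model.Comment is a hypothetical property holding user input):

        <%-- ASP.NET MVC 1.0: encode explicitly --%>
        <%= Html.Encode(Model.Comment) %>

        <%-- ASP.NET 4 / MVC 2: the code nugget encodes automatically --%>
        <%: Model.Comment %>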

    Read the article

  • Mocking successive calls of similar type via sequential mocking

    - by mehfuzh
    In this post, I show how you can benefit from the sequential mocking feature (in JustMock) to set up expectations for successive calls of the same type. To start, let's first consider the following dummy database interface and entity class:

        public class Person
        {
            public virtual string Name { get; set; }
            public virtual int Age { get; set; }
        }

        public interface IDataBase
        {
            T Get<T>();
        }

    Now, our test goal is to return a different entity for successive calls to IDataBase.Get<T>(). By default, the behavior in JustMock is override, which is similar to other popular mocking tools: the tool will always honor the latest user setup. Therefore, the first example will return the latest entity every time and will fail at the first assert:

        Person person1 = new Person { Age = 30, Name = "Kosev" };
        Person person2 = new Person { Age = 80, Name = "Mihail" };

        var database = Mock.Create<IDataBase>();

        Mock.Arrange(() => database.Get<Person>()).Returns(person1);
        Mock.Arrange(() => database.Get<Person>()).Returns(person2);

        // this will fail
        Assert.Equal(person1.GetHashCode(), database.Get<Person>().GetHashCode());

        Assert.Equal(person2.GetHashCode(), database.Get<Person>().GetHashCode());

    We can solve it the following way, using a Queue that removes an item from the front on each call:

        Person person1 = new Person { Age = 30, Name = "Kosev" };
        Person person2 = new Person { Age = 80, Name = "Mihail" };

        var database = Mock.Create<IDataBase>();

        Queue<Person> queue = new Queue<Person>();
        queue.Enqueue(person1);
        queue.Enqueue(person2);

        Mock.Arrange(() => database.Get<Person>()).Returns(() => queue.Dequeue());

        Assert.Equal(person1.GetHashCode(), database.Get<Person>().GetHashCode());
        Assert.Equal(person2.GetHashCode(), database.Get<Person>().GetHashCode());

    This ensures the right entity is returned, but it is not an elegant solution. So in JustMock we introduced a new option that lets you set up your expectations sequentially:

        Person person1 = new Person { Age = 30, Name = "Kosev" };
        Person person2 = new Person { Age = 80, Name = "Mihail" };

        var database = Mock.Create<IDataBase>();

        Mock.Arrange(() => database.Get<Person>()).Returns(person1).InSequence();
        Mock.Arrange(() => database.Get<Person>()).Returns(person2).InSequence();

        Assert.Equal(person1.GetHashCode(), database.Get<Person>().GetHashCode());
        Assert.Equal(person2.GetHashCode(), database.Get<Person>().GetHashCode());

    The "InSequence" modifier tells the mocking tool to return the expected results in the order the user specified them. The solution is pretty simple but neat (to me), and far simpler than using a collection to solve this type of case. Hope that helps. P.S. The example shown in my blog uses an interface, doesn't require a profiler, and you can even write it in Notepad, build it referencing Telerik.JustMock.dll, run it with GUI tools, and it will work. This feature also applies to concrete methods (which do require the JustMock profiler) and can be used in more complex scenarios.

    Read the article

  • Determining Cost of API Calls

    - by Sam
    [This is a cross-post originally posted by me on SO. I think the question is more appropriate here.] I was going through the AdWords API and came across their rate sheet - http://code.google.com/apis/adwords/docs/ratesheet.html . They charge $0.25 per 1,000 API units, and under the 'Operation Costs' section they list the cost (in API units) of different API calls. I am curious: on what factors do they (and other API developers) base the cost of an API call? Is there any simple formula or standard way to determine this? Note: when I say 'cost' of an API call, I don't mean money but API units. For example, how do you determine that one API call costs 100 'units' and another 1,000?

    Read the article

  • Create new variable or make multiple chained calls?

    - by Rodrigo
    What is the best way to get these attributes, thinking about performance and code quality?

    Using chained calls:

        name = this.product.getStock().getItems().get(index).getName();
        id = this.product.getStock().getItems().get(index).getId();

    Creating a new variable:

        final Item item = this.product.getStock().getItems().get(index);
        name = item.getName();
        id = item.getId();

    I prefer the second way, to keep the code cleaner, but I would like to see some opinions about it. Thank you!

    Read the article

  • Routing PHP memcached calls to Oracle Coherence

    - by cj
    A new post, Getting Started with the Coherence Memcached Adaptor, from David Felcey shows how PHP memcached calls can automatically be routed to store data in Oracle Coherence 12c. This is possible now that Coherence 12.1.3 supports memcached clients using the binary memcached protocol. David's post shows how the Coherence memcached adaptor can be configured as a proxy service that runs in the Coherence cluster. There's nothing in particular to configure in the PHP application, except to enable memcached.use_sasl = 1 (see the sketch below). So what is Coherence? It is an "in-memory data grid" solution with a number of advanced features. You can read more in the Oracle Coherence 12c Data Sheet.
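
    For reference, the one PHP-side setting the post calls out, as a php.ini sketch:

        ; php.ini - route the memcached extension through SASL,
        ; as required by the Coherence memcached adaptor
        memcached.use_sasl = 1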

    Read the article

  • How to handle multiple API calls using javascript/jquery

    - by James Privett
    I need to build a service that will call multiple APIs at the same time and then output the results on the page (think of how a price-comparison site works, for example). The idea is that as each API call completes, its results are sent to the browser immediately, and the page gets progressively bigger until all processes are complete. Because these API calls may take several seconds each to return, I would like to do this via JavaScript/jQuery in order to create a better user experience. I have never done anything like this before using JavaScript/jQuery, so I was wondering whether there are any frameworks or advice that anyone would be willing to share.

    Read the article

  • Unity iOS optimization and draw calls

    - by vzm
    I am curious what methods I should use to optimize my Unity project for iOS hardware. I have very few image effects running (a directional light with low-res shadows), and I used the Combine Children script from the Standard Assets to lessen the load on the CPU. My project currently runs at 45-57 draw calls in non-intensive segments and up to 178 in intensive segments. I heard that static batching relieves some of the stress, but the game moves the environment around the player instead of moving the player through the environment. Is there any alternative I can look to for improving the draw call count?
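
    Since the environment pieces move together as a group, one option is runtime mesh combining rather than static batching. A minimal sketch using Unity's Mesh.CombineMeshes, assuming the combined children share a single material and sit under one parent object:

        using UnityEngine;

        public class CombineChildMeshes : MonoBehaviour
        {
            void Start()
            {
                MeshFilter[] filters = GetComponentsInChildren<MeshFilter>();
                var combine = new CombineInstance[filters.Length];
                for (int i = 0; i < filters.Length; i++)
                {
                    combine[i].mesh = filters[i].sharedMesh;
                    // bake each child's transform relative to this parent
                    combine[i].transform = transform.worldToLocalMatrix
                                         * filters[i].transform.localToWorldMatrix;
                    filters[i].gameObject.SetActive(false);   // hide the originals
                }

                var combined = new Mesh();
                combined.CombineMeshes(combine);   // one mesh -> one draw call per material
                gameObject.AddComponent<MeshFilter>().mesh = combined;
                // a MeshRenderer with the shared material is assumed on this object
            }
        }

    The whole group then moves by transforming the single parent, so the per-object draw calls collapse into one.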

    Read the article

  • Build graph of dependencies (calls) in javascript [on hold]

    - by Maximus
    I'm new to a project and I see that everything is so interwoven that small changes here make stuff break there. I'd like to refactor it and separate it into modules. For that I'm going to need a tool that can build a graph of dependencies (calls) to visualize the connections. There are many tools like that for languages like C#, but I've found little information about the available tools for JavaScript. Has anyone done something like this? What tools have you used?

    Read the article

  • Using "public" vars or attributes in class calls, functional approach

    - by marw
    I was always wondering about two things I tend to do in my little projects. Sometimes I will have this design:

        class FooClass:
            def __init__(self):
                self.foo = "it's a bar"
                self._do_some_stuff()

            def _do_some_stuff(self):
                print(self.foo)

    And sometimes this one:

        class FooClass2:
            def __init__(self):
                self.do_some_stuff(foo="it's a bar")

            def do_some_stuff(self, foo):
                print(foo)

    Although I roughly understand the differences between the functional and class approaches, I struggle with the design. For example, in FooClass, self.foo is always accessible as an attribute. If there are numerous accesses to it, is that faster than making foo a local variable that is passed from method to method (as in FooClass2)? What happens in memory in both cases? If FooClass2 is preferred (i.e. I don't need to access foo) and the other attributes inside do not change their state (the class is executed only once and returns the result), should the code then be written as a series of functions in a module?

    Read the article

  • Pattern for limiting number of simultaneous asynchronous calls

    - by hitch
    I need to retrieve multiple objects from an external system. The external system supports multiple simultaneous requests (i.e. threads), but it is possible to flood it - therefore I want to be able to retrieve multiple objects asynchronously, but throttle the number of simultaneous async requests. I.e. I need to retrieve 100 items, but don't want to be retrieving more than 25 of them at once. When each of the 25 requests completes, I want to trigger another retrieval, and once they are all complete I want to return all of the results in the order they were requested (there is no point returning the results until the entire call has completed). Are there any recommended patterns for this sort of thing? Would something like this be appropriate (pseudocode, obviously)?

        private List<ExternalSystemObject> returnedObjects = new List<ExternalSystemObject>();

        public List<ExternalSystemObject> GetObjects(List<string> ids)
        {
            const int maxCallCount = 25;
            List<WaitHandle> handles = new List<WaitHandle>();

            foreach (string id in ids)
            {
                if (handles.Count >= maxCallCount)
                {
                    // wait for one in-flight call to finish before starting another
                    int finished = WaitHandle.WaitAny(handles.ToArray());
                    handles.RemoveAt(finished);
                }
                handles.Add(ExecuteCall(id, Callback));
            }

            WaitHandle.WaitAll(handles.ToArray());
            return returnedObjects;
        }

        public void Callback(object result)
        {
            returnedObjects.Add((ExternalSystemObject)result);
        }
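
    For comparison, a minimal sketch of the same throttling pattern on newer frameworks (.NET 4.5+), using SemaphoreSlim to cap concurrency; executeCallAsync is a hypothetical stand-in for the real external call:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Threading;
        using System.Threading.Tasks;

        public static class ThrottledRetrieval
        {
            public static async Task<TResult[]> GetObjectsAsync<TResult>(
                IEnumerable<string> ids,
                Func<string, Task<TResult>> executeCallAsync,
                int maxConcurrency = 25)
            {
                var throttle = new SemaphoreSlim(maxConcurrency);
                var tasks = ids.Select(async id =>
                {
                    await throttle.WaitAsync();      // parks here once 25 calls are in flight
                    try { return await executeCallAsync(id); }
                    finally { throttle.Release(); }  // frees a slot, triggering the next call
                }).ToList();

                // WhenAll preserves the order the ids were supplied in
                return await Task.WhenAll(tasks);
            }
        }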

    Read the article

  • Intercept method calls in Groovy for automatic type conversion

    - by kerry
    One of the cooler things you can do with Groovy is automatic type conversion. If you want to convert an object to another type, many times all you have to do is invoke the 'as' keyword:

        def letters = 'abcdefghijklmnopqrstuvwxyz' as List

    But what if you want to do something a little fancier, like converting a String to a Date?

        def christmas = '12-25-2010' as Date

        ERROR org.codehaus.groovy.runtime.typehandling.GroovyCastException:
        Cannot cast object '12-25-2010' with class 'java.lang.String' to class 'java.util.Date'

    No bueno! I want to be able to do custom type conversions so that my application can do a simple String-to-Date conversion. Enter the metaMethod. You can intercept method calls in Groovy using the following method:

        def intercept(name, params, closure) {
            def original = from.metaClass.getMetaMethod(name, params)
            from.metaClass[name] = { Class clazz ->
                closure()
                original.doMethodInvoke(delegate, clazz)
            }
        }

    Using this method, and a little syntactic sugar, we create the following 'Convert' class:

        // Convert.from( String ).to( Date ).using { }
        class Convert {
            private from
            private to

            private Convert(clazz) { from = clazz }

            static def from(clazz) {
                new Convert(clazz)
            }

            def to(clazz) {
                to = clazz
                return this
            }

            def using(closure) {
                def originalAsType = from.metaClass.getMetaMethod('asType', [] as Class[])
                from.metaClass.asType = { Class clazz ->
                    if (clazz == to) {
                        closure.setProperty('value', delegate)
                        closure(delegate)
                    } else {
                        originalAsType.doMethodInvoke(delegate, clazz)
                    }
                }
            }
        }

    Now, we can make the following statement to add the automatic date conversion:

        Convert.from( String ).to( Date ).using {
            new java.text.SimpleDateFormat('MM-dd-yyyy').parse(value)
        }

        def christmas = '12-25-2010' as Date

    Groovy baby!

    Read the article

  • Handling SEO for Infinite pages that cause external slow API calls

    - by Noam
    I have an 'infinite' number of pages on my site which rely on an external API. Generating each page takes time (about 1 minute). Links on the site point to such pages, and when a user clicks one it is generated and the user waits. Since I cannot pre-create them all, I am trying to figure out the best SEO approach to handle these pages. Options: (1) Create really simple pages for the web spiders, and only real users will fetch the data and generate the page. I'm a little afraid Google will see this as low-quality content, which might also feel duplicated. (2) Put them under a directory on my site (e.g. /non-generated/) and disallow it in robots.txt (a sketch of the rule follows below). The problem here is that I don't want users to have to deal with a different URL when they want to share this page or make sense of it. I thought about redirecting real users from this URL back to the regular hierarchy, thereby 'fooling' Google into not reaching them, but again I'm not sure Google would like me for that. (3) Let Google crawl these pages. The main problem is that I can't control the rate of the API calls, and my site would also seem slower than it should from a spider's perspective (if it only crawled the generated pages, it would think the site is much faster). Which approach would you suggest?
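
    For reference, option 2's robots.txt rule would look like this (directory name taken from the example above):

        User-agent: *
        Disallow: /non-generated/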

    Read the article

  • Merge two different API calls into One

    - by dhilipsiva
    I have two different apps in my Django project: one is "comment" and the other is "files". A comment might have some files attached to it. The current way of creating a comment with attachments is to make two API calls: the first creates the actual comment and replies with the comment ID, which serves as the foreign key for the files; then, for each file, a new request is made with the comment ID. Please note that "files" is a generic app that can be used with other apps too. What is the cleanest way of collapsing this into one API call? I want a single API call because I am in a situation where I need to send the user an email with all the files as attachments when a comment is made. I know queueing is the ideal way to do it, but I don't have the liberty to add queueing to our stack right now, so this was the only way I could think of.

    Read the article

  • Algorithm to reduce calls to mapping API

    - by aidan
    A random distribution of points lies on a map. This data lies behind an API, and I want to grab the complete set of points within a given bounding box. I can query the API with the bounding box, and the API will return the set of points that fall within that box. The problem is that the API limits the result set to 10 items, with no pagination and no indication of whether more points have been omitted. So I made a recursive algorithm that takes a bounding box and requests the points that lie within it. If the result set is exactly 10 items, it splits the bounding box into four quadrants and recurses (see the sketch below). It works fine, but my question is this: if I want to minimize the number of API calls, what is the optimal way to split the bounding box? Splitting it into quadrants was just an arbitrary decision. When there are a lot of points on the map, I have to drill down many levels before I start getting meaningful results, so I imagine it might be faster to split the box into, say, 9, 16, or more sections. But if I do that, I eventually reach a point where a lot of requests return 0 results, which isn't efficient either. Also, does the size of the limit on the result set affect the answer? (This all assumes I have no prior knowledge of the nominal point density in the bounding box.)
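
    A minimal sketch of the recursive subdivision described above, in C# (Point, BoundingBox, QueryApi and Split are hypothetical stand-ins for the real mapping API; points on shared quadrant edges may need de-duplication):

        List<Point> FetchAll(BoundingBox box)
        {
            List<Point> results = QueryApi(box);   // the API caps this at 10 items
            if (results.Count < 10)
                return results;                    // under the cap: the set is complete

            // at the cap: assume truncation and subdivide
            var all = new List<Point>();
            foreach (BoundingBox cell in Split(box, rows: 2, cols: 2))
                all.AddRange(FetchAll(cell));
            return all;
        }

    Parameterizing Split(box, rows, cols) makes it easy to experiment with 3x3 or 4x4 grids and measure which subdivision minimizes total requests for a given point density.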

    Read the article

  • Managing many draw calls for dynamic objects

    - by codetiger
    We are developing a cross-platform game using Irrlicht. The game has many (around 200-500) dynamic objects flying around during play. Most of these objects are static meshes built from 20-50 unique meshes. We created separate scene nodes for each object, each referring to its mesh instance, but the output was very much unexpected. Menu screen (150 tris, just to show the full-speed rendering performance of the two test computers): (a) NVidia Quadro FX 3800 with 1GB: 1600 FPS on DirectX and 2600 FPS on OpenGL; (b) Mac Mini with GeForce 9400M 256MB: 260 FPS on OpenGL. Now inside the game, in a test level (160 dynamic objects, around 10K tris): (a) NVidia Quadro FX 3800 with 1GB: 45 FPS on DirectX and 50 FPS on OpenGL; (b) Mac Mini with GeForce 9400M 256MB: 45 FPS on OpenGL. Obviously we don't have the option of batched mesh rendering, as most of the objects are dynamic, and the one big static terrain is already in a single mesh buffer. To add more information, we use one 2048 PNG texture for most of the dynamic objects, and our collision detection and other calculations hardly make any impact on FPS. So we understood it's the draw calls we make that eat up all the FPS. Is there a way we can optimize the rendering, or are we missing something?

    Read the article

  • XNA 4.0, Combining model draw calls

    - by MayContainNuts
    I have the following problem: the levels in my game are made up of a large quantity of small models, and because of that I am experiencing frame-rate problems. I did some research and concluded that the number of draw calls I am making must be the root of my problems. I've looked around for a while and couldn't find a satisfying solution. I can't cull any of those models; in a worst-case scenario there could be 1000 of them visible at the same time. I also looked at hardware geometry instancing, but I don't think that's quite what I'm looking for, because the level consists of a lot of different parts. So what I'd like to do is combine 100 or 200 of these models into a single large one and draw it as a whole 'chunk'. The whole geometry is static, so it wouldn't have to be changed after combining, but different parts of it would have to use different textures (I think I can accomplish that with a texture atlas). But I have no idea how to do that, so does anybody have any suggestions?
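
    A rough sketch of the baking step in XNA 4.0, assuming all parts share one vertex format and one texture atlas (levelParts, part.Vertices, part.World and RemapToAtlas are hypothetical stand-ins for your level data):

        var vertices = new List<VertexPositionNormalTexture>();
        foreach (var part in levelParts)
        {
            foreach (var v in part.Vertices)
            {
                vertices.Add(new VertexPositionNormalTexture(
                    Vector3.Transform(v.Position, part.World),         // bake the world transform
                    Vector3.TransformNormal(v.Normal, part.World),
                    RemapToAtlas(v.TextureCoordinate, part)));         // shift UVs into the atlas
            }
        }

        var chunk = new VertexBuffer(GraphicsDevice,
            typeof(VertexPositionNormalTexture), vertices.Count, BufferUsage.WriteOnly);
        chunk.SetData(vertices.ToArray());

        // at draw time, one call renders the whole chunk (inside a BasicEffect pass):
        GraphicsDevice.SetVertexBuffer(chunk);
        GraphicsDevice.DrawPrimitives(PrimitiveType.TriangleList, 0, vertices.Count / 3);

    Since the geometry is static, this can be done once at level load; grouping chunks spatially also keeps frustum culling useful.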

    Read the article

  • SQL RDBMS : one query or multiple calls

    - by None None
    After looking around the internet, I decided to create DAOs that return objects (POJOs) to the calling business logic function/method. For example: a Customer object with an Address reference would be split in the RDBMS into two tables, CUSTOMER and ADDRESS. The CustomerDAO would be in charge of joining the data from the two tables, creating both an Address POJO and a Customer POJO, adding the address to the customer object, and finally returning the full Customer POJO. Simple. However, now I am at a point where I need to join three or four tables, each representing an attribute or list of attributes for the resulting POJO. The SQL will include a GROUP BY, but I will still end up with multiple rows for the same POJO, because some of the tables join across a one-to-many relationship. My app code will now have to loop through all the rows, trying to figure out whether a row belongs to the same POJO with different attributes or should start a new POJO. Should I continue to create my DAOs using this technique, or break the POJO creation into multiple DB calls to make the code easier to understand and maintain?
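
    For illustration, a minimal sketch of collapsing the joined rows into one object graph, keyed on the parent id (shown in C#/ADO.NET for brevity; the same shape applies to a Java ResultSet, and all table and column names are hypothetical):

        using System.Collections.Generic;
        using System.Data;

        public class Address { public string Street { get; set; } }

        public class Customer
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public List<Address> Addresses = new List<Address>();
        }

        public static class CustomerMapper
        {
            public static List<Customer> MapCustomers(IDataReader reader)
            {
                var byId = new Dictionary<int, Customer>();
                while (reader.Read())
                {
                    int id = reader.GetInt32(reader.GetOrdinal("customer_id"));
                    Customer customer;
                    if (!byId.TryGetValue(id, out customer))
                    {
                        // first row for this customer: build the parent object once
                        customer = new Customer
                        {
                            Id = id,
                            Name = reader.GetString(reader.GetOrdinal("customer_name"))
                        };
                        byId.Add(id, customer);
                    }
                    // every row carries one child record from the one-to-many join
                    customer.Addresses.Add(new Address
                    {
                        Street = reader.GetString(reader.GetOrdinal("street"))
                    });
                }
                return new List<Customer>(byId.Values);
            }
        }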

    Read the article

  • Calling home, receiving calls and smartphone data from the US

    - by Rob Farley
    I got asked about calling home from the US by someone going to the PASS Summit. I found myself thinking "there should be a blog post about this"... The easiest way to phone home is Skype - no question. Use WiFi, and if you're calling someone who has Skype on their phone at the other end, it's free. Even if they don't, it's still pretty good price-wise. The PASS Summit conference centre has good WiFi, as do the hotels, and plenty of other places (like Starbucks). But if you're used to having data all the time, particularly when you're walking from one place to another, then you'll want a sim card. This also lets you receive calls more easily, not just solving your data problem. You'll need to make sure your phone isn't locked to your local network - get that sorted before you leave. It's no trouble to drop by a T-mobile or AT&T store and get a prepaid sim. You can't get one from the airport, but if the PASS Summit is your first stop, there's a T-mobile store on 6th in Seattle between Pine & Pike, so you can see it from the Sheraton hotel if that's where you're staying. AT&T isn't far away either. But there's an extra step that you should be aware of. If you talk to one of these US telcos, you'll probably (hopefully I'm wrong, but this is how it was for me recently) be told that their prepaid sims don't work in smartphones. And they're right - the APN gets detected and stops the data from working. But luckily, Apple (and others) have provided information about how to change the APN, which has been used by a company based in New Zealand to let you get your phone working. Basically, you send your phone browser to http://unlockit.co.nz and follow the prompts. But do this from a WiFi place somewhere, because you won't have data access until after you've sorted this out... Oh, and if you get a prepaid sim with "unlimited data", you will still need to get a Data Feature for it. And just for the record - this is WAY easier if you're going to the UK. I dropped into a T-mobile shop there and bought a prepaid sim card for five quid, which gave me 250MB data and some (but not much) call credit. In Australia it's even easier, because you can buy data-enabled sim cards that work in smartphones from the airport when you arrive. I think having access to data really helps you feel at home in a different place. It means you can pull up maps, see what your friends are doing, and more. Hopefully this post helps, but feel free to post comments with extra information if you have it. @rob_farley

    Read the article

  • Windows Server 2003 - Handling hundreds of simultaneous downloads

    - by Paul Hinett
    At the moment I have a single server with four 1TB hard disks. Daily, I have over 150 MP3 music files uploaded (around 80MB each). At busy periods there are over 300 people streaming/downloading these mixes all at once, and 75% of the activity is on the most recently uploaded material, which is all on a single hard disk. My read speeds on that disk are very low due to the high activity of 200+ reads all happening at the same time on a single hard disk (I ran some tests with HD Tach). What would be a logical solution to this? A couple of ideas I had: load-balance with another server; install faster hard disks (what are the best these days, SCSI or SATA?); spread the most-accessed files over all four drives so the load is shared between the disks, instead of all the most-accessed (most recent) files sitting on the most recently installed drive. Obviously load balancing is the most expensive option, but would it dramatically help? Some help with this situation would be great!

    Read the article

  • BES 5.0 and MAPI calls to exchange system

    - by nysingh
    We have been using BES 4.1(5) for a while now, and it has been a resource hog on Exchange due to its high number of MAPI calls. I have heard that BES 5.0 is even worse. The comparison I heard is that BES 4.1 makes MAPI calls equal to 5 Outlook clients per BlackBerry user, while BES 5.0 makes MAPI calls equal to 10 Outlook clients per BlackBerry user. Can someone confirm whether this is true? Is BES 5.0 really that bad in MAPI calls and for Exchange performance? Thanks.

    Read the article

  • Bandwidth sizing for simultaneous RDP sessions

    - by Gareth Marlow
    We're doing some DR scenario planning which will require up to 150 users to RDP into their desktop machines (mainly running Windows XP) over our VPN. We have a 2Mbit uncontended internet connection at the moment, but there's scope to upgrade this and also to use a secondary SDSL line to give us more bandwidth. Typical bandwidth figures I've seen suggest planning for 64kbps per session, which works out to 9.6Mbps in total for 150 sessions. I'd like to know: does anyone have any real-world data which would support these estimates? Are there any operational gotchas we need to be aware of? Thanks!

    Read the article
