Search Results

Search found 9015 results on 361 pages for 'wireless range'.


  • C Population Count of unsigned 64-bit integer with a maximum value of 15

    - by BitTwiddler1011
    I use a population count (Hamming weight) function intensively in a Windows C application and have to optimize it as much as possible to boost performance. In more than half the cases where I use the function, I only need to know the value up to a maximum of 15. The software will run on a wide range of processors, both old and new. I already use the POPCNT instruction when Intel's SSE4.2 or AMD's SSE4a is present, but I would like to optimize the software implementation (used as a fallback when no SSE4 is present) as much as possible. Currently I have the following software implementation of the function:

    ```c
    inline int population_count64(unsigned __int64 w)
    {
        w -= (w >> 1) & 0x5555555555555555ULL;
        w = (w & 0x3333333333333333ULL) + ((w >> 2) & 0x3333333333333333ULL);
        w = (w + (w >> 4)) & 0x0f0f0f0f0f0f0f0fULL;
        return int(w * 0x0101010101010101ULL >> 56);
    }
    ```

    So to summarize: (1) I would like to know if it is possible to optimize this for the case when I only want to know the value up to a maximum of 15. (2) Is there a faster software implementation (for both Intel and AMD CPUs) than the function above?
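    For readers tracing the bit arithmetic above, here is the same SWAR (SIMD-within-a-register) reduction written out in Python — a sketch for illustration only, not a drop-in replacement for the C version:

    ```python
    M1  = 0x5555555555555555  # 01 repeated: mask for bit pairs
    M2  = 0x3333333333333333  # 0011 repeated: mask for nibbles
    M4  = 0x0F0F0F0F0F0F0F0F  # mask for bytes
    H01 = 0x0101010101010101  # multiplying by this sums the bytes

    def popcount64(w):
        w -= (w >> 1) & M1              # each 2-bit field = count of its 2 bits
        w = (w & M2) + ((w >> 2) & M2)  # each 4-bit field = count of its 4 bits
        w = (w + (w >> 4)) & M4         # each byte = count of its 8 bits
        # the multiply accumulates every byte count into the top byte
        return ((w * H01) & 0xFFFFFFFFFFFFFFFF) >> 56

    print(popcount64(0b1011))  # 3
    ```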

    Read the article

  • User input being limited to the alphabet in python

    - by Danger Cat
    I am SUPER new to programming and have my first assignment coming up in Python. I am writing a hangman-type game, where users are required to guess a word input by the other user. I have written most of the code, but the one problem I am having is making sure the word the user types in is limited to letters of the alphabet. The code I have so far is:

    ```python
    word = str.lower(raw_input("Type in your secret word! Shhhh... "))
    answer = True
    while answer == True:
        for i in range(len(word)):
            if word[i] not in ("abcdefghijklmnopqrstuvwxyz"):
                word = raw_input("Sorry, words only contain letters. Please enter a word ")
                break
        else:
            answer = False
    ```

    This works for a few tries, but eventually it either exits the loop or displays an error. Is there any easier way to do this? We've really only covered topics up to loops in class, and break and continue are also very new to me. Thank you! (Pardon if the code is sloppy, but as I said, I am very new to this....)
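    One simpler pattern, sketched here as a suggestion rather than the asker's code: Python 2 strings have an isalpha() method that checks every character at once, which removes the need for the inner loop and the break/else bookkeeping entirely:

    ```python
    word = raw_input("Type in your secret word! Shhhh... ").lower()
    while not word.isalpha():  # False for empty strings and any non-letter
        word = raw_input("Sorry, words only contain letters. Please enter a word ").lower()
    ```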

    Read the article

  • Using classes for the first time, help in debugging

    - by kaushik
    Here I post my code. This is not the entire code, but enough to explain my doubt; please disregard any code line you find irrelevant.

    ```python
    saving_tree = {}
    isLeaf = False

    class tree:
        global saving_tree
        rootNode = None
        lispTree = None

        def __init__(self, x):
            file = x
            string = file.readlines()
            #print string
            self.lispTree = S_expression(string)
            self.rootNode = BinaryDecisionNode(0, 'Root', self.lispTree)

    class BinaryDecisionNode:
        global saving_tree

        def __init__(self, ind, name, lispTree, parent=None):
            self.parent = parent
            nodes = lispTree.getNodes(ind)
            print nodes
            self.isLeaf = (nodes[0] == 1)
            nodes = nodes[1]  # Nodes are stored
            self.name = name
            self.children = []
            if self.isLeaf:
                # Leaf node: set the leaf data
                print nodes
                self.attribute = nodes
                print "LeafNode is ", nodes
            else:
                # Set the question
                self.attribute = lispTree.getString(nodes[0])
                self.attribute = self.attribute.split()
                print "Question: ", self.attribute, self.name
                tree = {}
                tree = {str(self.name): self.attribute}
                saving_tree = tree
                # Add the children (node 0 is the question)
                for i in range(1, len(nodes)):
                    self.children.append(BinaryDecisionNode(nodes[i], self.name + str(i), lispTree, self))
            print saving_tree
    ```

    I want to save some data in saving_tree, which I declared at module level, and then use saving_tree in another function outside the class. When I print saving_tree inside the class it prints, but only for that instance. I want saving_tree to accumulate the data of all instances and be accessible outside, but when I print saving_tree outside the class it prints an empty {}. Please tell me the required modification to get my required output and use saving_tree outside the class.
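    A hedged sketch of the likely fix (simplified names, not the asker's full code): a global statement in a class body has no effect on the methods, so the assignment saving_tree = tree inside __init__ creates a method-local variable. Updating the shared dict from inside the method, instead of rebinding a fresh one per instance, lets every instance contribute:

    ```python
    saving_tree = {}

    class Node(object):
        def __init__(self, name, attribute):
            global saving_tree                  # global belongs inside the assigning function
            self.name = name
            self.attribute = attribute
            saving_tree[str(name)] = attribute  # accumulate instead of replacing

    Node('Root', ['question1'])
    Node('Root1', ['question2'])
    print(saving_tree)  # now holds entries from both instances
    ```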

    Read the article

  • Python server open all ports

    - by user1670178
    I am trying to open all ports using this code; why can I not create a loop to perform this function? (Based on http://www.kellbot.com/2010/02/tutorial-writing-a-tcp-server-in-python/)

    ```python
    #!/usr/bin/python
    # This is server.py file
    ##server.py
    from socket import *  # import the socket library

    n = 1025
    while n < 1050:
        ## let's set up some constants
        HOST = ''            # we are the host
        PORT = n             # arbitrary port not currently in use
        ADDR = (HOST, PORT)  # we need a tuple for the address
        BUFSIZE = 4096       # reasonably sized buffer for data

        ## now we create a new socket object (serv)
        ## see the python docs for more information on the socket types/flags
        serv = socket(AF_INET, SOCK_STREAM)

        ## bind our socket to the address
        serv.bind((ADDR))  # the double parens are to create a tuple with one element
        serv.listen(5)     # 5 is the maximum number of queued connections we'll allow

        serv = socket(AF_INET, SOCK_STREAM)

        ## bind our socket to the address
        serv.bind((ADDR))  # the double parens are to create a tuple with one element
        serv.listen(5)     # 5 is the maximum number of queued connections we'll allow

        print 'listening...'
        n = n + 1
        conn, addr = serv.accept()  # accept the connection
        print '...connected!'
        conn.send('TEST')
        conn.close()
    ```

    How do I make this work so that I can specify an input range and have the server open all ports up to 65535?

    ```python
    #!/usr/bin/python
    # This is server.py file
    from socket import *  # import the socket library

    startingPort = input("\nPlease enter starting port: ")
    startingPort = int(startingPort)
    #print startingPort

    def connection():
        ## let's set up some constants
        HOST = ''            # we are the host
        PORT = startingPort  # arbitrary port not currently in use
        ADDR = (HOST, PORT)  # we need a tuple for the address
        BUFSIZE = 4096       # reasonably sized buffer for data

    def socketObject():
        ## now we create a new socket object (serv)
        serv = socket(AF_INET, SOCK_STREAM)

    def bind():
        ## bind our socket to the address
        serv = socket(AF_INET, SOCK_STREAM)
        serv.bind((ADDR))  # the double parens are to create a tuple with one element
        serv.listen(5)     # 5 is the maximum number of queued connections we'll allow
        serv = socket(AF_INET, SOCK_STREAM)
        print 'listening...'

    def accept():
        conn, addr = serv.accept()  # accept the connection
        print '...connected!'
        conn.send('TEST')

    def close():
        conn.close()

    ## Main
    while startingPort < 65535:
        connection()
        socketObject()
        bind()
        accept()
        startingPort = startingPort + 1
    ```
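    A hedged sketch of one way to structure the loop (not a drop-in answer: each accept() call blocks until a client connects to that particular port, so listening on thousands of ports concurrently would need select, threads, or similar):

    ```python
    #!/usr/bin/python
    from socket import socket, AF_INET, SOCK_STREAM

    startingPort = int(raw_input("\nPlease enter starting port: "))

    for port in range(startingPort, 65536):
        serv = socket(AF_INET, SOCK_STREAM)
        serv.bind(('', port))  # '' binds all local interfaces
        serv.listen(5)
        print 'listening on port %d...' % port
        conn, addr = serv.accept()  # blocks until a client connects
        print '...connected!'
        conn.send('TEST')
        conn.close()
        serv.close()  # release the port before moving to the next one
    ```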

    Read the article

  • finding the maximum in a series

    - by peril brain
    I need code that will automatically:

      - search for a specific word in Excel
      - note its row or column number (depends on the data arrangement)
      - search for numerical values in the respective row or column (suppose a[7][0] or a[0][7])
      - compare all other values of the respective row or column (i.e. a[i][0] or a[0][i])
      - set that value to the highest value, only if IT HAS GOT NO FORMULA FOR DERIVATION

    I know most of the coding, but at a few places I got myself stuck. Here is the part of my program up to which I know:

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.IO;
    using System.Threading;
    using Microsoft.Office.Interop;
    using Excel = Microsoft.Office.Interop.Excel;

    namespace a
    {
        class b
        {
            static Excel.Application oExcelApp;

            static void main()
            {
                try
                {
                    oExcelApp = (Excel.Application)System.Runtime.InteropServices.Marshal.GetActiveObject("Excel.Application");
                    if (oExcelApp.ActiveWorkbook != null)
                    {
                        Excel.Workbook xlwkbook = (Excel.Workbook)oExcelApp.ActiveWorkbook;
                        Excel.Worksheet ws = (Excel.Worksheet)xlwkbook.ActiveSheet;
                        Excel.Range rn;
                        rn = ws.Cells.Find("maximum", Type.Missing, Excel.XlFindLookIn.xlValues,
                            Excel.XlLookAt.xlPart, Excel.XlSearchOrder.xlByRows,
                            Excel.XlSearchDirection.xlNext, false, Type.Missing, Type.Missing);
                    }
                }
                catch (Exception)
                {
                    // (exception handling omitted in the original post)
                }
            }
        }
    }
    ```

    Ahead of this I only know that I have to use the cell.Value2 and cell.HasFormula members, and have no more idea. Can anyone help me with this?

    Read the article

  • Difference between these two functions that find palindromes

    - by Moin
    I wrote a function to check whether a word is a palindrome or not, but "unexpectedly" that function failed quite badly. Here it is:

    ```cpp
    bool isPalindrome(const string& s) {
        string reverse = "";
        string original = s;
        for (string_sz i = 0; i != original.size(); ++i) {
            reverse += original.back();
            original.pop_back();
        }
        if (reverse == original)
            return true;
        else
            return false;
    }
    ```

    It gives me a "string iterator offset out of range" error when you pass in a string with only one character, and returns true even if we pass in an empty string (although I know that's because of the initialisation of the reverse variable), and also when you pass in an unassigned string, for example:

    ```cpp
    string input;
    isPalindrome(input);
    ```

    Later, I found a better function which works as you would expect:

    ```cpp
    bool found(const string& s) {
        bool found = true;
        for (string::const_iterator i = s.begin(), j = s.end() - 1; i < j; ++i, --j) {
            if (*i != *j)
                found = false;
        }
        return found;
    }
    ```

    Unlike the first function, this function correctly fails when you give it an unassigned string variable or an empty string, and works for single characters and such... So, good people of Stack Overflow, please point out to me why the first function is so bad... Thank you.
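    For readers comparing the two approaches: the second function is a two-pointer scan that compares the ends and walks inward. The same idea in Python, as an illustrative sketch only:

    ```python
    def is_palindrome(s):
        i, j = 0, len(s) - 1
        while i < j:  # empty and 1-char strings never enter the loop
            if s[i] != s[j]:
                return False
            i += 1
            j -= 1
        return True

    print(is_palindrome("level"), is_palindrome("levels"))  # True False
    ```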

    Read the article

  • Data Structure / Hash Function to link Sets of Ints to Value

    - by Gaminic
    Given n integer ids, I wish to link all possible sets of up to k ids to a constant value. What I'm looking for is a way to translate sets (e.g. {1, 5}, {1, 3, 5} and {1, 2, 3, 4, 5, 6, 7}) to unique values. Guarantees:

      - n < 100 and k < 10 (again: set sizes will range in [1, k]).
      - The order of ids doesn't matter: {1, 5} == {5, 1}.
      - All combinations are possible, but some may be excluded.
      - All sets and values are constant and made only once. No deletes or inserts, no value updates.
      - Once generated, the only operations taking place will be look-ups. Look-ups will be frequent and one-directional (given set, look up value).
      - There is no need to sort (or otherwise organize) the values.

    Additionally, it would be nice (but not obligatory) if "neighboring" sets (drop one id, add one id, swap one id, etc.) are easy to reach, as well as "all sets that include at least this set". Any ideas?
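    One data structure that fits these constraints, sketched here as a suggestion: Python's frozenset is hashable and order-independent, so it can serve directly as a dictionary key, and "neighboring" sets fall out of set algebra:

    ```python
    table = {
        frozenset([1, 5]): 42,  # example values, not the asker's data
        frozenset([1, 3, 5]): 7,
    }

    print(table[frozenset([5, 1])])  # 42 -- order doesn't matter

    base = frozenset([1, 3, 5])
    drop_one = [base - frozenset([x]) for x in base]
    add_one  = [base | frozenset([x]) for x in range(100) if x not in base]
    supersets = [s for s in table if s >= base]  # all keyed sets that include base
    ```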

    Read the article

  • Saving associated domain classes in Grails

    - by Cesar
    I'm struggling to get associations right in Grails. Let's say I have two domain classes:

    ```groovy
    class Engine {
        String name
        int numberOfCylinders = 4

        static constraints = {
            name(blank:false, nullable:false)
            numberOfCylinders(range:4..8)
        }
    }

    class Car {
        int year
        String brand
        Engine engine = new Engine(name:"Default Engine")

        static constraints = {
            engine(nullable:false)
            brand(blank:false, nullable:false)
            year(nullable:false)
        }
    }
    ```

    The idea is that users can create cars without creating an engine first, and those cars get a default engine. In the CarController I have:

    ```groovy
    def save = {
        def car = new Car(params)
        if (!car.hasErrors() && car.save()) {
            flash.message = "Car saved"
            redirect(action:index)
        } else {
            render(view:'create', model:[car:car])
        }
    }
    ```

    When trying to save, I get a null value exception on the Car.engine field, so obviously the default engine is not created and saved. I tried to create the engine manually:

    ```groovy
    def save = {
        def car = new Car(params)
        car.engine = new Engine(name: "Default Engine")
        if (!car.hasErrors() && car.save()) {
            flash.message = "Car saved"
            redirect(action:index)
        } else {
            render(view:'create', model:[car:car])
        }
    }
    ```

    That didn't work either. Is Grails not able to save associated classes? How could I implement such a feature?

    Read the article

  • Error In VB.Net code

    - by Michael
    I am getting the error "FormatException was unhandled" at "Do While objectReader.Peek <> -1" and after. Any help would be wonderful.

    ```vbnet
    'Date: Class 03/20/2010
    'Program Purpose: When code is executed data will be pulled from a text file
    'that contains the named storms to find the average number of storms during the time
    'period chosen by the user and to find the most active year, between the range of
    'years 1990 and 2008

    Option Strict On

    Public Class frmHurricane
        Private _intNumberOfHuricanes As Integer = 5
        Public Shared _intSizeOfArray As Integer = 7
        Public Shared _strHuricaneList(_intSizeOfArray) As String
        Private _strID(_intSizeOfArray) As String
        Private _decYears(_intSizeOfArray) As Decimal
        Private _decFinal(_intSizeOfArray) As Decimal
        Private _intNumber(_intSizeOfArray) As Integer

        Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
            'The frmHurricane load event reads the Hurricane text file and
            'fills the ComboBox object with the data.

            'Initialize an instance of the StreamReader object and declare variables (page 675 in the book)
            Dim objectReader As IO.StreamReader
            Dim strLocationAndNameOfFile As String = "C:\huricanes.txt"
            Dim intCount As Integer = 0
            Dim intFill As Integer
            Dim strFileError As String = "The file is not available. Please restart application when available"

            'This is where we read the file if it exists.
            If IO.File.Exists(strLocationAndNameOfFile) Then
                objectReader = IO.File.OpenText(strLocationAndNameOfFile)
                'Read the file line by line until the file is complete
                Do While objectReader.Peek <> -1
                    'The FormatException is thrown in this block:
                    _strHuricaneList(intCount) = objectReader.ReadLine()
                    _strID(intCount) = objectReader.ReadLine()
                    _decYears(intCount) = Convert.ToDecimal(objectReader.ReadLine())
                    _intNumber(intCount) = Convert.ToInt32(objectReader.ReadLine())
                    intCount += 1
                Loop
                objectReader.Close()

                'With any luck the data will go to the Data Box
                For intFill = 0 To (_strID.Length - 1)
                    Me.cboByYear.Items.Add(_strID(intFill))
                Next
            Else
                MsgBox(strFileError, , "Error")
                Me.Close()
            End If
        End Sub
    ```

    Read the article

  • Silverlight 4 caching issue?

    - by DavidS
    I am currently experiencing what seems to be a weird caching problem. When I load my data initially, I return all the data within the given dates and my graph looks as expected (screenshot omitted). Then I filter the data to return a subset of the original data for the same date range (not that it matters) and I get the correct filtered view of my data. However, I intermittently get stale values when I refresh the same filtered view of the data. One can see that not all the data gets cached but only some of it, i.e. for 12 Dec 2010 and 5 Dec 2010 (not shown here). I've looked at my queries and the correct data is getting pulled out. It is only on the presentation layer, i.e. in Mainpage.xaml.cs, that this erroneous data seems to exist. I've stepped through the code and the data is correct through all the layers except the presentation layer. Has anyone experienced this before? Is there some sort of caching going on in the background that is keeping that data around, even though I've got browser caching off? I am using the LoadOperation in the callback method within the Load method of the DomainContext, if that helps...

    Read the article

  • Adding fields to Django form dynamically (and cleanly)

    - by scott
    Hey guys, I know this question has been brought up numerous times, but I'm not quite getting the full implementation. As you can see below, I've got a form that I can dynamically tell how many rows to create. How can I create an "Add Row" link that tells the view how many rows to create? I would really like to do it without augmenting the URL...

    ```python
    # views.py
    def myView(request):
        if request.method == "POST":
            form = MyForm(request.POST, num_rows=1)
            if form.is_valid():
                return render_to_response('myform_result.html',
                                          context_instance=RequestContext(request))
        else:
            form = MyForm(num_rows=1)
        return render_to_response('myform.html', {'form': form},
                                  context_instance=RequestContext(request))

    # forms.py
    class MyForm(forms.Form):
        def __init__(self, *args, **kwargs):
            num_rows = kwargs.pop('num_rows', 1)
            super(MyForm, self).__init__(*args, **kwargs)
            for row in range(0, num_rows):
                field = forms.CharField(label="Row")
                self.fields[str(row)] = field
    ```

    ```html
    <!-- myform.html (served at http://example.com/myform) -->
    <form action="." method="POST" accept-charset="utf-8">
        <ul>
        {% for field in form %}
            <li style="margin-top:.25em">
                <span class="normal">{{ field.label }}</span> {{ field }}
                <span class="formError">{{ field.errors }}</span>
            </li>
        {% endfor %}
        </ul>
        <input type="submit" value="Save">
    </form>
    <a href="ADD_ANOTHER_ROW?">+ Add Row</a>
    ```
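    One hedged sketch of a way to avoid touching the URL (an illustration, not the asker's code; the names 'add_row' and 'num_rows' are invented here): make "Add Row" a submit button, carry the current row count in a hidden field, and rebuild the form with one extra row whenever that button posted:

    ```python
    # views.py -- sketch only
    def myView(request):
        if request.method == "POST":
            num_rows = int(request.POST.get('num_rows', 1))
            if 'add_row' in request.POST:
                # redisplay with one more row, keeping what was typed so far
                form = MyForm(request.POST, num_rows=num_rows + 1)
            else:
                form = MyForm(request.POST, num_rows=num_rows)
                if form.is_valid():
                    return render_to_response('myform_result.html',
                                              context_instance=RequestContext(request))
        else:
            form = MyForm(num_rows=1)
        return render_to_response('myform.html', {'form': form},
                                  context_instance=RequestContext(request))
    ```

    The template would then carry an `<input type="hidden" name="num_rows" ...>` and a `<button name="add_row">+ Add Row</button>` instead of the bare link.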

    Read the article

  • JMS message. Model to include data or pointers to data?

    - by John
    I am trying to resolve a design difference of opinion where neither of us has experience with JMS. We want to use JMS to communicate between a J2EE application and a stand-alone application when a new event occurs. We would be using a single point-to-point queue. Both sides are Java-based. The question is whether to send the event data itself in the JMS message body or to send a pointer to the data so that the stand-alone program can retrieve it. Details below.

    I have a J2EE application that supports data entry of new and updated persons and related events. The person records and associated events are written to an Oracle database. There are also stand-alone, separate programs that contribute new person and event records to the database. When a new event occurs through any of 5-10 different application functions, I need to notify remote systems through an outbound interface using an industry-specific standard messaging protocol. The outbound interface has been designed as a stand-alone application to support scalability through asynchronous operation and by moving it to a separate server.

    The J2EE application currently has most of the data in memory at the time the event is entered. The data would consist of approximately 6 different objects: a person object and some with multiple instances, for an average size in the range of 3,000 to 20,000 bytes. Some special cases could be many times this amount.

    From a performance and reliability perspective, should I model the JMS message to pass all the data needed to create the interface message, or model the JMS message to contain record keys for the data and have the stand-alone Java application retrieve the data to create the interface message?

    Read the article

  • Is it safe (documented behaviour?) to delete the domain of an iterator during execution

    - by PoorLuzer
    I wanted to know whether it is safe (documented behaviour?) to delete the domain space of an iterator while it is being executed in Python. Consider the code:

    ```python
    import os
    import sys

    sampleSpace = [x * x for x in range(7)]
    print sampleSpace

    for dx in sampleSpace:
        print str(dx)
        if dx == 1:
            del sampleSpace[1]
            del sampleSpace[3]
        elif dx == 25:
            del sampleSpace[-1]

    print sampleSpace
    ```

    sampleSpace is what I call 'the domain space of an iterator' (if there is a more appropriate word/phrase, let me know). What I am doing is deleting values from it while the iterator dx is running through it. Here is what I expect from the code, iteration by iteration (* marks the element being pointed to):

        0: [*0, 1, 4, 9, 16, 25, 36]
        1: [0, *1, 4, 9, 16, 25, 36]  (delete 2nd and 5th elements after this iteration)
        2: [0, 4, *9, 25, 36]
        3: [0, 4, 9, *25, 36]  (delete -1th element after this iteration)
        4: [0, 4, 9, 25*]  (as the iterator points past the end of the list, the loop terminates)

    ... and here is what I get:

        [0, 1, 4, 9, 16, 25, 36]
        0
        1
        9
        25
        [0, 4, 9, 25]

    As you can see, what I expect is what I get, which is contrary to the behaviour I have had from other languages in such a scenario. Hence I wanted to ask: is there some rule like "the iterator becomes invalid if you mutate its space during iteration" in Python? Is it safe (documented behaviour?) to do stuff like this?
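    For readers puzzling over why this runs at all: the list iterator keeps a bare integer cursor, so a deletion at or before the cursor shifts later elements left and one of them lands in an already-consumed slot and gets skipped. A small hedged illustration:

    ```python
    nums = [0, 1, 4, 9]
    for x in nums:
        print(x)           # prints 0, 1, 9 -- the 4 slides into the consumed slot
        if x == 1:
            del nums[1]    # removes 1; 4 and 9 shift left under the cursor
    ```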

    Read the article

  • iOS: Best Way to do This w/o Calling Method 32 Times?

    - by user1886754
    I'm currently retrieving the Top 100 scores for one of my leaderboards the following way:

    ```objc
    - (void)retrieveTop100Scores {
        __block int totalScore = 0;
        GKLeaderboard *myLB = [[GKLeaderboard alloc] init];
        myLB.identifier = [Team currentTeam];
        myLB.timeScope = GKLeaderboardTimeScopeAllTime;
        myLB.playerScope = GKLeaderboardPlayerScopeGlobal;
        myLB.range = NSMakeRange(1, 100);
        [myLB loadScoresWithCompletionHandler:^(NSArray *scores, NSError *error) {
            if (error != nil) {
                NSLog(@"%@", [error localizedDescription]);
            }
            if (scores != nil) {
                for (GKScore *score in scores) {
                    NSLog(@"%lld", score.value);
                    totalScore += score.value;
                }
                NSLog(@"Total Score: %d", totalScore);
                [self loadingDidEnd];
            }
        }];
    }
    ```

    The problem is I want to do this for 32 leaderboards. What's the best way of achieving this? Using a third-party tool (Game Center Manager), I can get the following line to return a dictionary with leaderboard IDs as keys and the top 1 highest score as values:

    ```objc
    NSDictionary *highScores = [[GameCenterManager sharedManager] highScoreForLeaderboards:leaderboardIDs];
    ```

    So my question is, how can I combine those two segments of code to pull in the 100 values for each leaderboard, resulting in a dictionary with all of the leaderboard names as keys and the totals (of each leaderboard's 100 scores) as values?

    Read the article

  • Is there any library/software available that can give me useful measurements of image quality?

    - by Thor84no
    I realise measuring image quality in software is going to be really difficult, and I'm not looking for a quick fix. Googling this largely turns up research papers and discussions that go a bit over my head, so I was wondering if anyone in the SO community had any experience with doing rough image quality assessment? I want to use this to scan a few thousand images and whittle them down to a few dozen images that are most likely of poor quality. I could then show these to a user and leave the rest to them. Obviously there are many metrics that can play a part in whether an image is of high or low quality; I'd be happy with anything that could take an image as input and return reasonable values for basic quality metrics like sharpness, dynamic range and noise, leaving it up to my software to determine what's acceptable and what isn't. Some of the images are poor quality because they've been drastically up-scaled. If there isn't a way of getting metrics like I suggested above, is there any way to detect that an image has been up-scaled like this?
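    One rough starting point, sketched here under stated assumptions (OpenCV installed, grayscale-readable files, and a cutoff that would need tuning per collection): the variance of the Laplacian is a common cheap sharpness proxy, and heavily up-scaled images tend to score low on it:

    ```python
    import cv2

    def sharpness_score(path):
        # low variance of the Laplacian = few strong edges = likely blurry/upscaled
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        return cv2.Laplacian(img, cv2.CV_64F).var()

    # image_paths is assumed to be a list of filenames gathered elsewhere
    ranked = sorted(image_paths, key=sharpness_score)
    suspects = ranked[:36]  # surface the few dozen lowest scorers for review
    ```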

    Read the article

  • Database PK-FK design for future-effective-date entries?

    - by Scott Balmos
    Ultimately I'm going to convert this into a Hibernate/JPA design, but I wanted to start out from a purely database perspective. We have various tables containing data that is future-effective-dated. Take an employee table with the following pseudo-definition:

        employee
            id INT AUTO_INCREMENT
            ... data fields ...
            effectiveFrom DATE
            effectiveTo DATE

        employee_reviews
            id INT AUTO_INCREMENT
            employee_id INT FK employee.id

    Very simplistic. But let's say Employee A has id = 1, effectiveFrom = 1/1/2011, effectiveTo = 1/1/2099. That employee is going to be changing jobs in the future, which would in theory create a new row, id = 2 with effectiveFrom = 7/1/2011, effectiveTo = 1/1/2099, and id = 1's effectiveTo updated to 6/30/2011. But now, my program would have to go through every table that has a FK relationship to employee every night, and update those FKs to reference the newly-effective employee entry.

    I have seen various postings in both pure SQL and Hibernate forums saying that I should have a separate employee_versions table, which is where I would store all the effective-dated data, resulting in the updated pseudo-definition below:

        employee
            id INT AUTO_INCREMENT

        employee_versions
            id INT AUTO_INCREMENT
            employee_id INT FK employee.id
            ... data fields ...
            effectiveFrom DATE
            effectiveTo DATE

        employee_reviews
            id INT AUTO_INCREMENT
            employee_id INT FK employee.id

    Then to get any actual data, one would have to select from employee_versions with the proper employee_id and date range. It feels rather unnatural to have this secondary "versions" table for each versioned entity. Does anyone have any opinions or suggestions from your own prior work? Like I said, I'm taking this purely from a general SQL design standpoint first, before layering Hibernate on top. Thanks!

    Read the article

  • singledispatch in class, how to dispatch self type

    - by yanxinyou
    Using Python 3.4. Here I want to use singledispatch to dispatch on different types in the __mul__ method. The code looks like this:

    ```python
    class Vector(object):
        # (some code not pasted)

        @functools.singledispatch
        def __mul__(self, other):
            raise NotImplementedError("can't mul these types")

        @__mul__.register(int)
        @__mul__.register(object)   # because I can't use Vector, I have to use object
        def _(self, other):
            result = Vector(len(self))   # start with vector of zeros
            for j in range(len(self)):
                result[j] = self[j] * other
            return result

        @__mul__.register(Vector)   # how can I use the self type?
        @__mul__.register(object)
        def _(self, other):
            pass   # needs implementing
    ```

    As you can see from the code, I want to support Vector * Vector, but this raises a NameError:

        Traceback (most recent call last):
          File "p_algorithms\vector.py", line 6, in <module>
            class Vector(object):
          File "p_algorithms\vector.py", line 84, in Vector
            @__mul__.register(Vector)  # how can I use the self type?
        NameError: name 'Vector' is not defined

    The question may be: how can I use the class's own name as a type inside the class's methods? I know C++ has forward class declarations. How does Python solve my problem? And it is strange to see result = Vector(len(self)), where Vector can be used in the method body.
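    A hedged sketch of one workaround (illustrative, not the asker's design): the class body executes before the name Vector exists, but code that runs after the class statement can register it freely. Note too that functools.singledispatch dispatches on the first argument, which for a method is always self, so the sketch below dispatches on the other operand through a module-level helper:

    ```python
    import functools

    class Vector(object):
        def __init__(self, values):
            self.values = list(values)

        def __len__(self):
            return len(self.values)

        def __mul__(self, other):
            return _mul(other, self)  # dispatch on 'other', not on self

    @functools.singledispatch
    def _mul(other, self):
        raise NotImplementedError("can't mul these types")

    @_mul.register(int)
    def _(other, self):
        return Vector(v * other for v in self.values)

    @_mul.register(Vector)  # legal here: Vector is defined by now
    def _(other, self):
        return Vector(a * b for a, b in zip(self.values, other.values))

    print((Vector([1, 2]) * Vector([3, 4])).values)  # [3, 8]
    ```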

    Read the article

  • C/C++ Bit Array or Bit Vector

    - by MovieYoda
    Hi, I am learning C/C++ programming and have encountered the usage of 'bit arrays' or 'bit vectors'. I am not able to understand their purpose. Here are my doubts:

      - Are they used as boolean flags?
      - Can one use int arrays instead? (More memory, of course, but...)
      - What's this concept of bit-masking?
      - If bit-masking is simple bit operations to get an appropriate flag, how does one program for them? Is it not difficult to do this operation in one's head to see what the flag would be, as opposed to decimal numbers?

    I am looking for applications, so that I can understand better. For example:

        Q. You are given a file containing integers in the range 1 to 1 million. There are some duplicates, and hence some numbers are missing. Find the fastest way of finding the missing numbers.

    For the above question, I have read solutions telling me to use bit arrays. How would one store each integer in a bit?
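    A hedged sketch of the bit-array idea for exactly that exercise (Python used for brevity; a C version would replace the bytearray with a char or uint64_t buffer): one bit per possible value marks "seen", taking ~122 KB instead of ~4 MB of 32-bit ints, and the masking is just a shift plus an AND/OR:

    ```python
    N = 1000000
    bits = bytearray(N // 8 + 1)        # one bit per value in 1..N

    def set_bit(n):
        bits[n >> 3] |= 1 << (n & 7)    # n >> 3 picks the byte, n & 7 the bit

    def test_bit(n):
        return bits[n >> 3] & (1 << (n & 7))

    # numbers_seen is assumed to be the ints read from the file
    for n in numbers_seen:
        set_bit(n)

    missing = [n for n in range(1, N + 1) if not test_bit(n)]
    ```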

    Read the article

  • help to iterate through my jQuery snippet

    - by s2xi
    Code in question:

    ```javascript
    $("#alpha").click(function(event) {
        event.preventDefault();
        $("#show").slideToggle();
    });
    ```

    I have a list of files being output with PHP in alphabetical order. I use this method in PHP:

    ```php
    foreach (range('A', 'Z') as $i) {
        if (array_key_exists("$i", $alpha)) {
            echo '<div id="alpha"><a href="#" name="'.$i.'"><h2>'.$i.'</h2></a></div><div id="show">';
            foreach ($$i as $key => $value)
                echo '<p>'.$value.' '.$key.'</p>';
        }
        echo '</div>';
    }
    ```

    What I want is: when the user clicks on the #alpha heading, toggle the #show div that holds the names belonging to that letter. I can do this with the first listing, but every listing after that isn't affected. How can I tell jQuery to apply the JS code for each letter, so it can toggle each #show up/down? I don't want to write this 26 times (once for each letter of the alphabet). I tried to use a class instead of an id, but that causes all the #show divs to toggle up, heh.

    Read the article

  • Is a python dictionary the best data structure to solve this problem?

    - by mikip
    Hi. I have a number of processes running which are controlled by remote clients. A TCP server controls access to these processes, with only one client per process. The processes are given an id number in the range 0 to n-1, where 'n' is the number of processes. I use a dictionary to map this id to the client socket's file descriptor. On startup I populate the dictionary with the ids as keys and a socket fd of None for the values, i.e. no clients, and all processes are available. When a client connects, I map the id to the socket's fd. When a client disconnects, I set the value for this id to None, i.e. the process is available again. So every time a client connects, I have to check each entry in the dictionary for a process which has a socket fd entry of None; if there is one, then the client is allowed to connect. This solution does not seem very elegant; are there other data structures which would be more suitable for solving this? Thanks.
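    A hedged sketch of one common alternative (names invented for illustration): keep a set of free ids alongside the dictionary, so finding an available process is a constant-time pop rather than a scan of every entry:

    ```python
    n = 8                      # number of processes (example value)
    free_ids = set(range(n))   # every process starts unassigned
    clients = {}               # id -> socket fd, only for connected clients

    def on_connect(fd):
        if not free_ids:
            return None        # no process available; refuse the client
        pid = free_ids.pop()
        clients[pid] = fd
        return pid

    def on_disconnect(pid):
        del clients[pid]
        free_ids.add(pid)      # process becomes available again
    ```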

    Read the article

  • Which Android hardware devices should I test on? [closed]

    - by Tchami
    Possible duplicate: What hardware devices do you test your Android apps on?

    I'm trying to compile a list of Android hardware devices that it would make sense to buy and test against if you want to target as broad an audience as possible, while still not buying every single Android device out there. I know there's a lot of information regarding screen sizes and Android versions available elsewhere, but when developing for Android it's not terribly useful to know whether the screen size of a device is 480x800 or 320x240, unless you feel like doing the math to convert that into Android "units" (i.e. small, normal, large or xlarge screens, and ldpi, mdpi, hdpi or xhdpi densities). Even knowing the dimensions of a device, you cannot be sure of the actual Android units as there's some overlap; see Range of screens supported in the Android documentation.

    Taking into account the distribution of platform versions and screen sizes and densities, below is my current list, based on information from the Wikipedia article on Comparison of Android devices. I'm fairly sure the information in this list is correct, but I'd welcome any suggestions/changes.

    Phones

    | Model                   | Android Version | Screen Size | Density |
    | HTC Wildfire            | 2.1/2.2         | Normal      | mdpi    |
    | HTC Tattoo              | 1.6             | Normal      | mdpi    |
    | HTC Hero                | 2.1             | Normal      | mdpi    |
    | HTC Legend              | 2.1             | Normal      | mdpi    |
    | Sony Ericsson Xperia X8 | 1.6/2.1         | Normal      | mdpi    |
    | Motorola Droid          | 2.0-2.2         | Normal      | hdpi    |
    | Samsung Galaxy S II     | 2.3             | Normal      | hdpi    |
    | Samsung Galaxy Nexus    | 4.0             | Normal      | xhdpi   |
    | Samsung Galaxy S III    | 4.0             | Normal      | xhdpi   |

    Tablets

    | Model                   | Android Version | Screen Size | Density |
    | Samsung Galaxy Tab 7"   | 2.2             | Large       | hdpi    |
    | Samsung Galaxy Tab 10"  | 3.0             | X-Large     | mdpi    |
    | Asus Transformer Prime  | 4.0             | X-Large     | mdpi    |
    | Motorola Xoom           | 3.1/4.0         | X-Large     | mdpi    |

    N.B.: I have seen (and read) other posts on SO on this subject, e.g. "Which Android devices should I test against?" and "What hardware devices do you test your Android apps on?", but they don't seem very canonical. Maybe this should be marked community wiki?

    Read the article

  • Python - creating a list with 2 characteristics bug

    - by user2733911
    The goal is to create a list of 99 elements. All elements must be 1s or 0s. The first element must be a 1. There must be 7 1s in total.

    ```python
    import random
    import math
    import time

    # constants determined through testing
    generation_constant = 0.96

    def generate_candidate():
        coin_vector = []
        coin_vector.append(1)
        for i in range(0, 99):
            random_value = random.random()
            if (random_value > generation_constant):
                coin_vector.append(1)
            else:
                coin_vector.append(0)
        return coin_vector

    def validate_candidate(vector):
        vector_sum = sum(vector)
        sum_test = False
        if (vector_sum == 7):
            sum_test = True
        first_slot = vector[0]
        first_test = False
        if (first_slot == 1):
            first_test = True
        return (sum_test and first_test)

    vector1 = generate_candidate()
    while (validate_candidate(vector1) == False):
        vector1 = generate_candidate()
    print vector1, sum(vector1), validate_candidate(vector1)
    ```

    Most of the time the output is correct, saying something like:

        [1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0] 7 True

    but sometimes the output is:

        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] 2 False

    What exactly am I doing wrong?
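    A hedged side note with a sketch: generate_candidate() appends a leading 1 and then 99 more coin flips, so the vectors are actually 100 elements long, and rejection-sampling whole vectors until one validates can loop for a long time. A direct construction sidesteps both issues (illustrative code, not the assignment's required approach):

    ```python
    import random

    def generate_valid():
        vector = [0] * 99
        vector[0] = 1                               # required leading 1
        for pos in random.sample(range(1, 99), 6):  # 6 more 1s -> 7 total
            vector[pos] = 1
        return vector

    v = generate_valid()
    print(len(v), sum(v), v[0])  # 99 7 1, every time
    ```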

    Read the article

  • Creating transparent PNG with exact RGBA values

    - by rrowland
    I'm color-coding a transparent image to be read programmatically. However, the image seems to be getting compressed, and my code is reading color values other than the ones I mean to pass it.

    Concept: This is the output I get when exporting as PNG-24. I programmatically check each pixel for one of the six colors I use in creating the image:

        0x00000F
        0x0000F0
        0x000F00
        0x00F000
        0x0F0000
        0xF00000

    Each color represents a different texture to apply. Top right (0x00000F) will pull texture from the tile to its top right and blend it at a ratio equal to the opacity of the pixel. The end goal is to create a hex-tiled grid with differing textures that blend smoothly.

    What's happening: It seems that when converting to PNG, Photoshop will change the RGBA values to make the image smoother, or just to help the compressed size. Parts that should be 250 in the red channel range anywhere from 150 to 255.

    Question: Whether using PNG or another web-compatible format, I need to be able to save these pixel values, which are essentially instructions, losslessly while still maintaining transparency. Is this possible in any format, or will I need to re-think my approach?
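    As a diagnostic aid, here is a hedged sketch that reads the exported file back and reports any pixel whose RGB triple is not one of the six intended codes (assumes Pillow is available; the filename is hypothetical):

    ```python
    from PIL import Image

    VALID = {(0, 0, 15), (0, 0, 240), (0, 15, 0),
             (0, 240, 0), (15, 0, 0), (240, 0, 0)}

    img = Image.open("tiles.png").convert("RGBA")
    for r, g, b, a in img.getdata():
        if a != 0 and (r, g, b) not in VALID:
            print("unexpected pixel value:", (r, g, b, a))
            break
    ```

    If stray values show up here, the export itself is altering pixels (e.g. "Save for Web" color profile conversion or dithering), independent of anything in the reading code.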

    Read the article

  • Blit SDL_Surface onto another SDL_Surface and apply a colorkey

    - by NordCoder
    I want to load an SDL_Surface into an OpenGL texture with padding (so that NPOT becomes POT) and apply a color key on the surface afterwards. I either end up color-keying all pixels, regardless of their color, or not color-keying anything at all. I have tried a lot of different things, but none of them seem to work. Here's the working snippet of my code. I use a custom color class for the color key (range [0-1]):

    ```cpp
    // Create an empty surface with the same settings as the original image
    SDL_Surface* paddedImage = SDL_CreateRGBSurface(image->flags, width, height,
        image->format->BitsPerPixel,
    #if SDL_BYTEORDER == SDL_BIG_ENDIAN
        0xff000000, 0x00ff0000, 0x0000ff00, 0x000000ff
    #else
        0x000000ff, 0x0000ff00, 0x00ff0000, 0xff000000
    #endif
    );

    // Map RGBA color to pixel format value
    Uint32 colorKeyPixelFormat = SDL_MapRGBA(paddedImage->format,
        static_cast<Uint8>(colorKey.R * 255),
        static_cast<Uint8>(colorKey.G * 255),
        static_cast<Uint8>(colorKey.B * 255),
        static_cast<Uint8>(colorKey.A * 255));

    SDL_FillRect(paddedImage, NULL, colorKeyPixelFormat);

    // Blit the image onto the padded image
    SDL_BlitSurface(image, NULL, paddedImage, NULL);
    SDL_SetColorKey(paddedImage, SDL_SRCCOLORKEY, colorKeyPixelFormat);
    ```

    Afterwards, I generate an OpenGL texture from paddedImage using code similar to the SDL+OpenGL texture loading code found online (I'll post it if necessary). This code works if I just want the texture with or without padding, and is likely not the problem. I realize that I set all pixels in paddedImage to have alpha zero, which causes the first problem I mentioned, but I can't seem to figure out how to avoid this. Should I just loop over the pixels and set the appropriate colors to have alpha zero?

    Read the article

  • Quantifying the Performance of Garbage Collection vs. Explicit Memory Management

    - by EmbeddedProg
    I found this article: "Quantifying the Performance of Garbage Collection vs. Explicit Memory Management" (http://www.cs.umass.edu/~emery/pubs/gcvsmalloc.pdf). In the conclusion section, it reads:

        Comparing runtime, space consumption, and virtual memory footprints over a range of benchmarks, we show that the runtime performance of the best-performing garbage collector is competitive with explicit memory management when given enough memory. In particular, when garbage collection has five times as much memory as required, its runtime performance matches or slightly exceeds that of explicit memory management. However, garbage collection's performance degrades substantially when it must use smaller heaps. With three times as much memory, it runs 17% slower on average, and with twice as much memory, it runs 70% slower. Garbage collection also is more susceptible to paging when physical memory is scarce. In such conditions, all of the garbage collectors we examine here suffer order-of-magnitude performance penalties relative to explicit memory management.

    So, if my understanding is correct: if I have an app written in native C++ requiring 100 MB of memory, then to achieve the same performance with a "managed" (i.e. garbage-collector-based) language (e.g. Java, C#), the app should require 5 * 100 MB = 500 MB? (And with 2 * 100 MB = 200 MB, the managed app would run 70% slower than the native app?) Do you know if current (i.e. the latest Java VMs and .NET 4.0) garbage collectors suffer the same problems described in the aforementioned article? Has the performance of modern garbage collectors improved? Thanks.

    Read the article
