Search Results

Search found 51569 results on 2063 pages for 'version number'.


  • Cross platform unicode path handling

    - by Matt Joiner
    I'm using boost::filesystem for cross-platform path manipulation, but this breaks down when calls need to be made into interfaces I don't control that won't accept UTF-8. For example, when using the Windows API, I need to convert to UTF-16, call the wide-string version of whatever function I was about to call, and then convert any output back to UTF-8. While wpath and the other w* forms of many boost::filesystem functions help keep things sane, are there any suggestions for how best to handle this conversion to wide-string forms where needed, while maintaining consistency in my own code?
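
    A minimal sketch of that boundary conversion (not boost-specific; the helper names are made up): keep UTF-8 everywhere internally and convert only at the point of the Win32 call, using MultiByteToWideChar/WideCharToMultiByte.

      #include <string>
      #include <windows.h>

      // Hypothetical helpers: UTF-8 <-> UTF-16 conversion used only at the Win32 boundary.
      std::wstring utf8_to_wide(const std::string& utf8)
      {
          if (utf8.empty()) return std::wstring();
          int len = MultiByteToWideChar(CP_UTF8, 0, utf8.data(), (int)utf8.size(), NULL, 0);
          std::wstring wide(len, L'\0');
          MultiByteToWideChar(CP_UTF8, 0, utf8.data(), (int)utf8.size(), &wide[0], len);
          return wide;
      }

      std::string wide_to_utf8(const std::wstring& wide)
      {
          if (wide.empty()) return std::string();
          int len = WideCharToMultiByte(CP_UTF8, 0, wide.data(), (int)wide.size(), NULL, 0, NULL, NULL);
          std::string utf8(len, '\0');
          WideCharToMultiByte(CP_UTF8, 0, wide.data(), (int)wide.size(), &utf8[0], len, NULL, NULL);
          return utf8;
      }

    A call site then reads, for example, DeleteFileW(utf8_to_wide(p.string()).c_str()), and everything above that line stays UTF-8.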

    Read the article

  • Rails Installed But Not Working

    - by Luiz P
    Folks, yesterday I installed Rails 3.2 on Ubuntu 12.10 and created a new project in order to check it. It was working OK. Today I tried to create a new project and got the message below (Portuguese version): O programa 'rails' pode ser encontrado nos seguintes pacotes: * rails * ruby-railties-3.2 Tente: sudo apt-get install <pacote selecionado> In English: The program 'rails' can be found in the following packages: * rails * ruby-railties-3.2 Try: sudo apt-get install <selected package> I ran the command gem list and all the gems are listed, including the Rails one. I tried to search the web for a solution, but haven't found any. Thank you very much for your help. Luiz

    Read the article

  • dotNet Templated, Repeating, Databound ServerControl: Modifying underlying ServerControl data per te

    - by Campbeln
    I have a server control that wraps an underlying class which manages a number of indexes to track where it is in a dataset (ie: RenderedRecordCount, ErroredRecordCount, NewRecordCount, etc.). I've got the server control rendering great, but I'm having an issue with OnDataBinding, as it seems to happen after CreateChildControls and before Render (both of which properly manage the iteration of the underlying indexes). While I'm somewhat familiar with the ASP.NET page lifecycle, this one seems to be beyond me at the moment. So... How do I hook into the iterative process OnDataBinding uses so I can manage the underlying indexes? Will I have to iterate over the ITemplates myself, managing the indexes as I go, or is there an easier solution? [edit: Agh... writing the problem out is very cathartic... I'm thinking this is exactly what I will need to do...] Also... I implemented the iteration of the underlying indexes during CreateChildControls originally in the belief that was the proper place to hook in for events like OnDataBinding (thinking it was done as the controls were being .Add'd). Now it seems that this may actually be unnecessary. So I guess the secondary question is: What happens during CreateChildControls? Are the unadulterated (read: with various <%-tags in place) controls added to the .Controls collection without any other processing?

    Read the article

  • Copy SQL Server data from one server to another on a schedule

    - by rwmnau
    I have a pair of SQL Servers at different webhosts, and I'm looking for a way to periodically update the one server using the other. Here's what I'm looking for: As automated as possible - ideally, without any involvement on my part once it's set up. Pushes a number of databases, in their entirety (including any schema changes) from one server to the other Freely allows changes on the source server without breaking my process. For this reason, I don't want to use replication, as I'd have to break it every time there's an update on the source, and then recreate the publication and subscription One database is about 4GB in size and contains binary data. I'm not sure if there's a way to export this to a script, but it would be a mammoth file if I did. Originally, I was thinking of writing something that takes a scheduled full backup of each database, FTPs the backups from one server to the other once they're done, and then the new server picks it up and restores it. The only downside I can see to this is that there's no way to know that the backups are done before starting to transfer them - can these backups be done synchronously? Also, the server being refreshed is our test server, so if there's some downtime involved in moving the data, that's fine. Does anybody out there have a better idea, or is what I'm currently considering the best non-replication way to go? Thanks for your help, everybody. UPDATE: I ended up designing a custom solution to get this done using BAT files, 7Zip, command-line FTP, and OSQL, so it runs in a completely automatic way and aggregates the data from a dozen servers across the country. I've detailed the steps in a blog entry. Thanks for all your input!
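
    On the synchronous-backup question: a plain BACKUP DATABASE statement blocks until the backup file is complete, so a batch step run through OSQL can safely start the FTP transfer on the next line. A hedged sketch (database name and path are illustrative):

      -- Runs synchronously; OSQL does not return until the .bak file is fully written.
      BACKUP DATABASE MyDatabase
      TO DISK = 'D:\Backups\MyDatabase.bak'
      WITH INIT, NAME = 'Nightly full backup';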

    Read the article

  • Routinely sync a branch to master using git rebase

    - by m1755
    I have a Git repository with a branch that hardly ever changes (nobody else is contributing to it). It is basically the master branch with some code and files stripped out. Having this branch around makes it easy for me to package up a leaner version of my project without having to strip out the code and files manually every time. I have been using git rebase to keep this branch up to date with the master but I always get this warning when I try to push the branch after rebasing: To prevent you from losing history, non-fast-forward updates were rejected Merge the remote changes before pushing again. See the 'Note about fast-forwards' section of 'git push --help' for details. I then use git push --force and it works but I feel like this is probably bad practice. I want to keep this branch "in sync" with the master quickly and easily. Is there a better way of handling this task?

    Read the article

  • XSD file, where to get xmlns argument?

    - by Daok
    <?xml version="1.0" encoding="utf-8"?> <xs:schema id="abc" targetNamespace="http://schemas.businessNameHere.com/SoftwareNameHere" elementFormDefault="qualified" xmlns="http://schemas.businessNameHere.com/SoftwareNameHere" xmlns:mstns="http://schemas.businessNameHere.com/SoftwareNameHere" xmlns:xs="http://www.w3.org/2001/XMLSchema"> <xs:element name="..." type="..." /> <xs:complexType name="..."> I am working on a project using XSD to generate a .cs file. My question concerns the string "http://schemas.businessNameHere.com/SoftwareNameHere". If I change it, it doesn't work, yet the http:// address is not a valid one... What is the logic behind it, and where can I find information about what to put there or how to change it?
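
    One point that may clarify this: the namespace string is only an identifier and is never fetched over HTTP, so it does not need to resolve to a real page. What matters is that the XML documents being validated or deserialized declare exactly the same URI, which is also why changing it in the schema alone breaks things. A hypothetical instance document matching the schema above would start like:

      <?xml version="1.0" encoding="utf-8"?>
      <!-- "root" stands in for whichever top-level element the schema actually defines -->
      <root xmlns="http://schemas.businessNameHere.com/SoftwareNameHere">
        ...
      </root>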

    Read the article

  • Load Spikes on an Apache MySQL Server with Wordpress MU

    - by Vikram Goyal
    Hi there, I am trying to investigate the reasons for some mysterious load spikes on a Linux Apache server (2.2.14) running PHP 5.2.9 on a dedicated server with enough processing power and memory. My primary web application is a Wordpress MU (2.9.2) installation. I have investigated and ruled out DOS attack, MySQL or Apache configuration issues. The log files don't give me anything of interest, except to tell me that there is severe load. The load (which can go up to 100) just seems to come and go. It helps that I have a script that checks every 3 minutes for the load, and restarts Apache. Restarting it helps, and the server comes back, till it happens again. There seems to be no set time frame, or visitor numbers on the site that can trigger this. Even a low number of concurrent visitors (20) can trigger it. I am almost convinced that there is a rewrite loop somewhere that is causing Apache to go mad. Apache is trying to serve something that is causing it to spawn more and more processes till it keels over. My question is: Given that I am convinced that this is a rewrite issue or something similar, how can I try and figure out what the issue is? What should I monitor? Apache logs are voluminous, and not very helpful. Of course, if this is not the issue, then at least knowing what to look for will help me eliminate this as an issue and look for something else. Thanks! Vikram

    Read the article

  • ImageView doesn't rescale Image to selected size

    - by Buni
    I'm using an ImageView with a fixed size to add an icon to a menu. I use this pattern many times in my application, but on this ImageView the layout params don't seem to work. Unlike the other ImageViews, in this case I'm using a template directly, but I don't think that's the problem. <?xml version="1.0" encoding="utf-8"?> <ImageView xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="1dp" android:layout_height="1dp" android:scaleType="fitCenter" android:src="@drawable/ic_menu_moreoverflow_normal_holo_dark" android:contentDescription="ICON" /> It is used in code as follows. ImageView iview =(ImageView) View.inflate(context, R.layout.icon, null); Theoretically, it should resize the image automatically; however, the image keeps its original size even though the requested size was 1dp. Where is the problem? Thanks a lot!
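
    A hedged guess at the mechanism, with a sketch: when View.inflate() is given null as the root ViewGroup, the android:layout_width/layout_height attributes on the XML root have no parent to be resolved against and are dropped, so the ImageView falls back to the drawable's intrinsic size. Setting LayoutParams explicitly after inflating restores the fixed size (the parent type and sizes here are illustrative):

      ImageView iview = (ImageView) View.inflate(context, R.layout.icon, null);
      // Recreate the 1dp x 1dp request in code, since the XML values were dropped.
      int sizePx = (int) TypedValue.applyDimension(TypedValue.COMPLEX_UNIT_DIP, 1,
              context.getResources().getDisplayMetrics());
      iview.setLayoutParams(new LinearLayout.LayoutParams(sizePx, sizePx));

    Alternatively, inflating against the real parent ViewGroup (with attachToRoot set to false on a LayoutInflater) lets the XML layout_width/layout_height be honoured as written.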

    Read the article

  • Algorithm to produce Cartesian product of arrays in depth-first order

    - by Yuri Gadow
    I'm looking for an example of how, in Ruby, a C like language, or pseudo code, to create the Cartesian product of a variable number of arrays of integers, each of differing length, and step through the results in a particular order: So given, [1,2,3],[1,2,3],[1,2,3]: [1, 1, 1] [2, 1, 1] [1, 2, 1] [1, 1, 2] [2, 2, 1] [1, 2, 2] [2, 1, 2] [2, 2, 2] [3, 1, 1] [1, 3, 1] etc. Instead of the typical result I've seen (including the example I give below): [1, 1, 1] [2, 1, 1] [3, 1, 1] [1, 2, 1] [2, 2, 1] [3, 2, 1] [1, 3, 1] [2, 3, 1] etc. The problem with this example is that the third position isn't explored at all until all combinations of of the first two are tried. In the code that uses this, that means even though the right answer is generally (the much larger equivalent of) 1,1,2 it will examine a few million possibilities instead of just a few thousand before finding it. I'm dealing with result sets of one million to hundreds of millions, so generating them and then sorting isn't doable here and would defeat the reason for ordering them in the first example, which is to find the correct answer sooner and so break out of the cartesian product generation earlier. Just in case it helps clarify any of the above, here's how I do this now (this has correct results and right performance, but not the order I want, i.e., it creates results as in the second listing above): def cartesian(a_of_a) a_of_a_len = a_of_a.size result = Array.new(a_of_a_len) j, k, a2, a2_len = nil, nil, nil, nil i = 0 while 1 do j, k = i, 0 while k < a_of_a_len a2 = a_of_a[k] a2_len = a2.size result[k] = a2[j % a2_len] j /= a2_len k += 1 end return if j > 0 yield result i += 1 end end UPDATE: I didn't make it very clear that I'm after a solution where all the combinations of 1,2 are examined before 3 is added in, then all 3 and 1, then all 3, 2 and 1, then all 3,2. In other words, explore all earlier combinations "horizontally" before "vertically." The precise order in which those possibilities are explored, i.e., 1,1,2 or 2,1,1, doesn't matter, just that all 2 and 1 are explored before mixing in 3 and so on.
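
    A sketch of one way to get that ordering (Ruby; the names are my own): enumerate by "depth", i.e. the largest value-index used anywhere in the tuple, so every combination drawn from the first k values of each array is produced before the (k+1)-th values appear at all.

      def product_by_depth(arrays, &blk)
        max_len = arrays.map { |a| a.size }.max
        (0...max_len).each do |depth|
          walk = lambda do |pos, uses_depth, tuple|
            if pos == arrays.size
              blk.call(tuple.dup) if uses_depth   # skip tuples already emitted at a smaller depth
            else
              limit = [depth, arrays[pos].size - 1].min
              (0..limit).each do |i|
                tuple.push(arrays[pos][i])
                walk.call(pos + 1, uses_depth || i == depth, tuple)
                tuple.pop
              end
            end
          end
          walk.call(0, false, [])
        end
      end

      # product_by_depth([[1,2,3],[1,2,3],[1,2,3]]) { |t| p t }
      # => [1,1,1], then every tuple over {1,2} containing a 2, then every tuple containing a 3.

    Because a depth is only entered once every tuple over the smaller values has been emitted, the caller can break out early exactly as described in the question; the order within one depth is arbitrary, which the question allows.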

    Read the article

  • Compare values for audit trail

    - by kagaku
    I'm attempting to develop an audit trail/tracking solution for an existing database written in PLSQL/PHP - however I'm still unsure as of yet on an easy (to implement and maintain) solution for tracking changes to fields/values. For instance, the project tracking portion of the DB APP tracks over 200 fields and ideally I'd like a nice way to show a history of changes, such as: 5/10/2010 - Project 435232 updated by John Doe Changed Project Name (Old: Test Project; New: Super Test Project) Changed Submission Date (Old: 5/10/2010; New: 5/11/2010) Changed Description (Old: This is an example!; New: This is a test example) Essentially for each field (db column) it would output a new line to show the old/new values. So far my current idea is saving the current version of the data to a temporary table, updating the primary table with the new data then loading each row into an array and doing an array compare to determine the differences. This seems a bit convoluted, and if there is an easier method I'd love to know it. Any ideas or suggestions are much appreciated!
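
    For the compare step described above, a small sketch of the array comparison (the surrounding fetch/update code and column names are assumed, not shown):

      <?php
      // $old and $new are associative arrays (column => value) for the same row,
      // e.g. fetched just before and just after the UPDATE.
      function audit_changes(array $old, array $new)
      {
          $lines = array();
          foreach ($new as $column => $value) {
              $before = array_key_exists($column, $old) ? $old[$column] : null;
              if ((string) $before !== (string) $value) {
                  $lines[] = "Changed $column (Old: $before; New: $value)";
              }
          }
          return $lines;   // one "Changed ..." line per modified field, ready to store
      }
      ?>

    Another route, if touching the PHP side is undesirable, is a per-table database trigger that writes old/new values to an audit table, at the cost of maintaining triggers over 200-odd columns.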

    Read the article

  • Video playback on VideoView disappears after going back from another Activity

    - by pixel
    I have two Activities: one with VideoView and the second one. I start watching a video in the first Activity, then during playback I start second Activity. After going back to first Activity I can hear sound but see no picture. My Video Layout: <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:orientation="vertical" android:layout_width="fill_parent" android:layout_height="fill_parent"> <VideoView android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_weight="1" android:id="@+id/videoView" android:layout_gravity="center" /> <ListView android:layout_width="fill_parent" android:layout_height="125dp" android:id="@+id/ListView" /> </LinearLayout> Do you have any ideas why video doesn't appear?
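
    A hedged guess with a sketch: the VideoView's underlying surface is torn down while the first Activity is not in the foreground, so on return the player still produces audio but has no surface to draw on. A common pattern is to remember the position in onPause() and reopen/seek in onResume() (field names are illustrative; videoView is the view from findViewById):

      private int savedPosition = 0;

      @Override
      protected void onPause() {
          super.onPause();
          savedPosition = videoView.getCurrentPosition();
          videoView.pause();
      }

      @Override
      protected void onResume() {
          super.onResume();
          videoView.seekTo(savedPosition);
          videoView.start();   // re-attaches playback to the freshly created surface
      }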

    Read the article

  • What is the equivalent of WebClient in JavaScript or jQuery?

    - by kamiar3001
    I am using WebClient. This won't work because WebClient runs server-side and therefore uses a different session from the user's. What is the client-side equivalent in JavaScript or jQuery? Edit: I found a solution, but it gives me an error. var html = $.ajax({ url: mp, //complete: hideBlocker, async: false }).responseText; $("#HomeView").hide(); $("#ContentView").html(html); //in this line it gives me script error $("#ContentView").show("fast"); The error says: SCRIPT5007: 'undefined' is null or not an object The line it stops on is: var count = theForm.elements.length; The browser is Microsoft Internet Explorer 9.0 beta.
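
    A hedged note on that error: theForm.elements is referenced by ASP.NET's own postback script, which suggests the fetched page's inline scripts are being injected into #ContentView and re-run against a document that has no theForm. If only a fragment of the remote page is needed, jQuery's .load() with a selector appended to the URL strips the scripts before inserting the markup (the #mainContent selector is illustrative):

      // Fetch the page, keep only the #mainContent fragment; its scripts are not executed.
      $("#HomeView").hide();
      $("#ContentView").load(mp + " #mainContent", function () {
          $("#ContentView").show("fast");
      });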

    Read the article

  • License problem embedding Mono?

    - by mydiscogr
    I'd like to embed Mono into an .exe file, but the problem is the license, because an LGPL library can only be linked with LGPL code. However, I'd like to build a commercial app, so I am asking whether it is possible to use a stub that launches a DLL version of the Mono runtime and executes my app. Or do you know a better way to do this? I need a cross-platform framework and Mono seems good, but there are some problems packing it into one file, so do you know a "free" way to do this?

    Read the article

  • Do you have to install the REST starter kit in asp.net to access APIs?

    - by jonhobbs
    Hi, I'm currently trying to access a REST API for the first time using visual web developer 2008 express edition. Every article I have found says you have to install the WCF REST starter kit which is a .msi file, which would suggest that I have to install it on my machine and presumably our server too. My question is this. Is there a non installable version that I can use, e.g. just by dropping DLLs into the Bin directory and then using the classes contained. Or is there more to it than that and am I just getting very confused about how it works? Jon

    Read the article

  • Updating to Spring 2.5.5 causes a javax.servlet.UnavailableException: org.springframework.web.struts

    - by Averroes
    I have been told to update some application from Spring 2.0.8 to Spring 2.5.5. This application is using Struts 1.2.7. Once I change the Spring.jar I get the following exception while loading in JBoss 4.0.5: 10:14:57,579 ERROR [[/PortalRRHH]] Servlet /PortalRRHH threw load() exception javax.servlet.UnavailableException: org.springframework.web.struts.DelegatingTilesRequestProcessor This is defined in the struts-config.xml this way: <controller locale="true"> <set-property property="processorClass" value="org.springframework.web.struts.DelegatingTilesRequestProcessor"/> </controller> I have no clue of what is happening since it works with the old version of Spring and the DelegatingTilesRequestProcessor is still available in Spring 2.5.5. I have no previous experience with Struts so if you need anything else to figure what the problem is please ask and I will update the question. Thanks.

    Read the article

  • Requirements of an issue/bug tracker

    - by James Brooks
    I've been looking at various issue/bug trackers available on the net. There are some very good ones, but I'm unable to use them as my server does not support Perl/Ruby (for example), I'm not too bothered however because I am able to write code in PHP and as such would prefer something in that language. So I've taken it upon myself to write a custom issue tracker system. As of now it's in early planning stages, and before I continue, I'd like to find out what people need from such an application. My current list of things to add include: Creating/Editing/Deleting issues - both on user and admin level Related issues (similar to that of STO) Admins will be able to create builds/milestones and version control of projects Admins will be able to assign users/groups to a project Roadmap of projects Possible SVN integration with Git? What do you think? There are a couple more things I'd like to add, but I'm sure you'll think of a better way of adding such feature. What would you like to see from an issue tracker?

    Read the article

  • use proxy in python to fetch a webpage

    - by carmao
    I am trying to write a function in Python to use a public anonymous proxy and fetch a webpage, but I got a rather strange error. The code (I have Python 2.4): import urllib2 def get_source_html_proxy(url, pip, timeout): # timeout in seconds (maximum number of seconds willing for the code to wait in # case there is a proxy that is not working, then it gives up) proxy_handler = urllib2.ProxyHandler({'http': pip}) opener = urllib2.build_opener(proxy_handler) opener.addheaders = [('User-agent', 'Mozilla/5.0')] urllib2.install_opener(opener) req=urllib2.Request(url) sock=urllib2.urlopen(req) timp=0 # a counter that is going to measure the time until the result (webpage) is # returned while 1: data = sock.read(1024) timp=timp+1 if len(data) < 1024: break timpLimita=50000000 * timeout if timp==timpLimita: # 5 millions is about 1 second break if timp==timpLimita: print IPul + ": Connection is working, but the webpage is fetched in more than 50 seconds. This proxy returns the following IP: " + str(data) return str(data) else: print "This proxy " + IPul + "= good proxy. " + "It returns the following IP: " + str(data) return str(data) # Now, I call the function to test it for one single proxy (IP:port) that does not support user and password (a public high anonymity proxy) #(I put a proxy that I know is working - slow, but is working) rez=get_source_html_proxy("http://www.whatismyip.com/automation/n09230945.asp", "93.84.221.248:3128", 50) print rez The error: Traceback (most recent call last): File "./public_html/cgi-bin/teste5.py", line 43, in ? rez=get_source_html_proxy("http://www.whatismyip.com/automation/n09230945.asp", "93.84.221.248:3128", 50) File "./public_html/cgi-bin/teste5.py", line 18, in get_source_html_proxy sock=urllib2.urlopen(req) File "/usr/lib64/python2.4/urllib2.py", line 130, in urlopen return _opener.open(url, data) File "/usr/lib64/python2.4/urllib2.py", line 358, in open response = self._open(req, data) File "/usr/lib64/python2.4/urllib2.py", line 376, in _open '_open', req) File "/usr/lib64/python2.4/urllib2.py", line 337, in _call_chain result = func(*args) File "/usr/lib64/python2.4/urllib2.py", line 573, in lambda r, proxy=url, type=type, meth=self.proxy_open: \ File "/usr/lib64/python2.4/urllib2.py", line 580, in proxy_open if '@' in host: TypeError: iterable argument required I do not know why the character "@" is an issue (I have no such in my code. Should I have?) Thanks in advance for your valuable help.
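
    A hedged guess at the TypeError, with a sketch: in this urllib2, the proxy string is split with splittype()/splithost(), and without a scheme splithost() returns None for the host, so the later "'@' in host" test fails with exactly "iterable argument required". Passing the proxy with an explicit scheme avoids that path:

      import urllib2

      # Scheme included, so urllib2 can extract a host string from the proxy spec.
      proxy_handler = urllib2.ProxyHandler({'http': 'http://93.84.221.248:3128'})
      opener = urllib2.build_opener(proxy_handler)
      opener.addheaders = [('User-agent', 'Mozilla/5.0')]
      urllib2.install_opener(opener)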

    Read the article

  • PHP Post Count in Forum

    - by Chris
    I'm currently designing a forum application. I considered using a premade one but decided against it, as it's useful for me to learn some of the techniques. So I've written a fairly full featured forum... great. One of the problems I want to solve is to include user data for each post; at the moment the post table includes the poster ID (obviously) and I added the poster's username at a later date so I didn't have to query the User DB for X number of posts in a thread. However, it's become apparent I now want to do this, usernames don't need to update retrospectively, however avatars, sigs, and especially post counts need to update actively, so data in some form needs keeping up to date somewhere... What would be a good way of implementing this? I obviously don't want to include any more user data on the Posts DB table than necessary, but I'm struggling to find an easy way to do this short of querying the DB for each post in a thread, which is potentially going to create a lot of traffic. How have other people solved this? I've been examining the code on some other open source apps but I can't find what I'm looking for. Is it possible to select multiple records in one query? In which case I could build an array dynamically on each page request (eg 'SQL blah blah' then a for each loop to insert the ID's). Could I join the tables each time? Do I submit a query for each post? Hmm.
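
    On the "multiple records in one query" point, the usual pattern is a single JOIN between the posts and users tables per thread view, so per-post user data never has to be duplicated onto the posts table. A sketch with hypothetical table and column names:

      SELECT p.post_id, p.body, p.posted_at,
             u.username, u.avatar, u.signature, u.post_count
      FROM   posts AS p
      JOIN   users AS u ON u.user_id = p.poster_id
      WHERE  p.thread_id = ?
      ORDER  BY p.posted_at;

    post_count can either be a counter column on users, incremented and decremented as posts are added and removed, or be derived with a correlated COUNT(*) if write volume is low.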

    Read the article

  • populating one checkedlistbox with another (checkedlistbox)

    - by 8thWonder
    I am having difficulties populating a checkedlistbox (CLB) based on the selection(s) made in another. It should also be noted that I have a "Select All" checkbox at the top that checks/unchecks all of the items in the first CLB. Here's the code: Private Sub chkSelectAll_CheckedChanged(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles chkSelectAll.CheckedChanged For i As Integer = 0 To clb1.Items.Count - 1 clb1.SetItemChecked(i, chkSelectAll.Checked) Next End Sub Private Sub clb1_ItemCheck(ByVal sender As System.Object, ByVal e As System.Windows.Forms.ItemCheckEventArgs) Handles clb1.ItemCheck Dim i As Integer = clb1.SelectedIndex For j As Integer = 0 To al_2.Count - 1 If i = -1 Then For k As Integer = 0 To al_2.Count - 1 If Not clb2.Contains(al_2(k).sDate) Then clb2.Items.Add(al_2(k).sDate) Else : k += 1 End If Next ElseIf (e.NewValue = CheckState.Checked And al_2(j).sName = al_1(i)) Then clb2.Items.Add(al_2(j).sDate) ElseIf (e.NewValue = CheckState.Unchecked And al_2(j).sName = al_1(i)) Then clbProdBkups.Items.Remove(al_2(j).sDate) End If Next End Sub The first CLB is populated with an arraylist of values on the button click event. Based on whatever is checked in the first CLB, corresponding values from an arraylist of structures should fill the second CLB. The following code partially works until the "Select All" checkbox is clicked at which point one of two things happens: If other values have been selected before "Select All" is checked, the second CLB is filled with the correct number of corresponding values BUT only those of the most recently selected item of the first CLB instead of all of corresponding values of all of the items that were not already selected. When "Select All" is unchecked, the most recently incorrect values are removed, everything in CLB 1 is unchecked but the values in CLB 2 that were selected before "Select All" was checked remain. If "Select All" is checked before anything else is selected, I get an "unable to cast object of type 'System.String' to type 'System.Windows.Forms.Control'" error that points to the following statement from the itemcheck event: If Not clb2.Contains(al_2(k).sDate) Then Any insights will be greatly appreciated. ~8th

    Read the article

  • Error Converting PIL B&W images to Numpy Arrays

    - by Elliot
    I am getting weird errors when I try to convert a black and white PIL image to a numpy array. An example of the code I am working with is below. if image.mode != '1': image = image.convert('1') #convert to B&W data = np.array(image) #convert data to a numpy array n_lines = data.shape[0] #number of raster passes line_range = range(data.shape[1]) for l in range(n_lines): # process one horizontal line of the image line = data[l] for n in line_range: if line[n] == 1: write_line_to(xl, z+scale*n, speed) #conversion to other program code elif line[n] == 0: run_to(xl, z+scale*n) #conversion to other program code I have tried this using both array and asarray for the conversion, and gotten different errors. If I use array, then the data I get out is nothing like what I put in. It looks like several very shrunken partial images side by side, with the remainder of the image space filled in in black. If I use asarray, then the entirety of python crashes during the raster step (on a random line). If I work with a greyscale image ('L'), then neither of these errors occurs for either array or asarray. Does anyone know what I am doing wrong? Is there something odd about the way PIL encodes B&W images, or something special I need to pass numpy to make it convert properly?
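
    A hedged workaround sketch (image is the PIL image from the code above): older PIL releases are known to expose mode '1' images to numpy in a packed/mismatched layout, which matches the "shrunken partial images" symptom, so converting to greyscale and thresholding in numpy sidesteps the problem entirely:

      import numpy as np

      if image.mode != 'L':
          image = image.convert('L')     # greyscale converts to numpy reliably
      data = np.array(image) > 127       # boolean array: True where the pixel is light
      # The existing loop still works, since True == 1 and False == 0 in comparisons.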

    Read the article

  • postgres - ERROR: operator does not exist

    - by cino21122
    Again, I have a function that works fine locally, but moving it online yields a big fat error... Taking a cue from a response in which someone had pointed out the number of arguments I was passing wasn't accurate, I double-checked in this situation to be certain that I am passing 5 arguments to the function itself... Query failed: ERROR: operator does not exist: point <@> point HINT: No operator matches the given name and argument type(s). You may need to add explicit type casts. The query is this: BEGIN; SELECT zip_proximity_sum('zc', (SELECT g.lat FROM geocoded g LEFT JOIN masterfile m ON g.recordid = m.id WHERE m.zip = '10050' ORDER BY m.id LIMIT 1), (SELECT g.lon FROM geocoded g LEFT JOIN masterfile m ON g.recordid = m.id WHERE m.zip = '10050' ORDER BY m.id LIMIT 1), (SELECT m.zip FROM geocoded g LEFT JOIN masterfile m ON g.recordid = m.id WHERE m.zip = '10050' ORDER BY m.id LIMIT 1) ,10); The PG function is this: CREATE OR REPLACE FUNCTION zip_proximity_sum(refcursor, numeric, numeric, character, numeric) RETURNS refcursor AS $BODY$ BEGIN OPEN $1 FOR SELECT r.zip, point($2,$3) <@> point(g.lat, g.lon) AS distance FROM geocoded g LEFT JOIN masterfile r ON g.recordid = r.id WHERE (geo_distance( point($2,$3),point(g.lat,g.lon)) < $5) ORDER BY r.zip, distance; RETURN $1; END; $BODY$ LANGUAGE 'plpgsql' VOLATILE COST 100;
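
    A hedged pointer: the point <@> point operator (and geo_distance) are not built in; they come from PostgreSQL's contrib module earthdistance, which would explain a query that works locally but not on the hosted server. One way to confirm, plus the usual fix, sketched below:

      -- Returns a row on servers where the operator exists, nothing on the failing one.
      SELECT oprname, oprleft::regtype AS left_type, oprright::regtype AS right_type
      FROM   pg_operator
      WHERE  oprname = '<@>';

      -- PostgreSQL 9.1+:  CREATE EXTENSION cube;  CREATE EXTENSION earthdistance;
      -- Older servers:    run the contrib script earthdistance.sql against the database instead.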

    Read the article

  • Mysql InnoDB performance optimization and indexing

    - by Davide C
    Hello everybody, I have 2 databases and I need to link information between two big tables (more than 3M entries each, continuously growing). The 1st database has a table 'pages' that stores various information about web pages, and includes the URL of each one. The column 'URL' is a varchar(512) and has no index. The 2nd database has a table 'urlHops' defined as: CREATE TABLE urlHops ( dest varchar(512) NOT NULL, src varchar(512) DEFAULT NULL, timestamp timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, KEY dest_key (dest), KEY src_key (src) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 Now, I need basically to issue (efficiently) queries like this: select p.id,p.URL from db1.pages p, db2.urlHops u where u.src=p.URL and u.dest=? At first, I thought to add an index on pages(URL). But it's a very long column, and I already issue a lot of INSERTs and UPDATEs on the same table (way more than the number of SELECTs I would do using this index). Other possible solutions I thought are: -adding a column to pages, storing the md5 hash of the URL and indexing it; this way I could do queries using the md5 of the URL, with the advantage of an index on a smaller column. -adding another table that contains only page id and page URL, indexing both columns. But this is maybe a waste of space, having only the advantage of not slowing down the inserts and updates I execute on 'pages'. I don't want to slow down the inserts and updates, but at the same time I would be able to do the queries on the URL efficiently. Any advice? My primary concern is performance; if needed, wasting some disk space is not a problem. Thank you, regards Davide
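
    A sketch of the md5-column idea from the question (index and column names are illustrative). The hash column is short enough to index cheaply, and comparing the full URL as well guards against the rare md5 collision:

      ALTER TABLE pages ADD COLUMN url_md5 CHAR(32) NOT NULL DEFAULT '';
      UPDATE pages SET url_md5 = MD5(URL);
      CREATE INDEX idx_pages_url_md5 ON pages (url_md5);

      -- The lookup then goes through the short indexed column:
      SELECT p.id, p.URL
      FROM   db2.urlHops u
      JOIN   db1.pages   p ON p.url_md5 = MD5(u.src) AND p.URL = u.src
      WHERE  u.dest = ?;

    The insert/update paths on pages then also need to set url_md5 = MD5(URL), which is a much smaller cost than maintaining an index on a varchar(512).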

    Read the article

  • How do I delete a file from depot, but leave local copy intact?

    - by Gary
    I'm trying to learn Perforce and want to delete a file from the depot (easy to do with p4 delete, p4 submit), but that deletes it from the client machine dir structure as well. I want to keep my local file in my directory intact. The only way I can see to do this would be to move it out of the hierarchy that is under Perforce control before deleting. I was able to get my file back by syncing an earlier version. Maybe I set up my client workspace wrong? Or am I misunderstanding a fundamental concept of source control? The client workspace is /home/user and I did it this way so I could add any file under my home directory without getting an error about the file not being under the client's root. FYI - Linux client and server running P4D/LINUX26X86/2009.1/222893 (2009/11/12) Any advice appreciated. Thanks.

    Read the article

  • How to make UISlider output nice rounded numbers exponentially?

    - by RickiG
    Hi, I am implementing a UISlider a user can manipulate to set a distance. I have never used the CocoaTouch UISlider, but have used other frameworks' sliders; usually there is a variable for setting the "step" and other "helper" properties. The documentation for the UISlider deals only with a max and min value, and the output is always a 6 decimal float with a linear relation to the position of the "slider knob". I guess I will have to implement the desired functionality step by step. To the user, the min/max values range from 10 m to 999 Km. I am trying to implement this in an exponential way that will feel natural to the user, i.e. the user experiences a feeling of control over the values, big or small, and the "output" has reasonable values. Values like 10m 200m 2.5km 150 km etc. instead of 1.2342356 m or 108.93837756 km. I would like for the step size to increase by 10m for the first 200m, then maybe by 50m up to 500m, then when passing the 1000 m value, it starts to deal with Kilometers, so then it is step size = 1 km up until 50 km, then maybe 25 km steps etc. Any way I go about this, I end up doing a lot of rounding and a lot of calculations wrapped in a forest of if statements and NSString/Number conversions, each time the user moves the slider just a little. I was hoping someone could lend me a bit of inspiration/math help or make me aware of a more lean approach to solving this problem. My last idea is to populate an array with 100 string values, then have the slider int value correspond to a string; this is not very flexible, but doable. Thank you in advance for any help given:)
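
    One lean approach, sketched below in plain C so it drops straight into Objective-C (function names are made up): map the slider's linear 0-1 value exponentially onto the 10 m to 999 km range, then snap the result to a 1/2/5 step of the right magnitude. That replaces the forest of if statements with two small functions.

      #include <math.h>

      /* t is slider.value normalised to 0..1; returns metres between min_m and max_m. */
      static double slider_to_meters(double t, double min_m, double max_m)
      {
          return min_m * pow(max_m / min_m, t);   /* exponential interpolation */
      }

      /* Snap to the nearest "nice" value: 1, 2 or 5 times a power of ten. */
      static double round_to_nice(double meters)
      {
          double magnitude = pow(10.0, floor(log10(meters)));   /* 10, 100, 1000, ... */
          double mantissa  = meters / magnitude;                /* 1.0 .. 9.99 */
          if (mantissa < 1.5)  return 1.0 * magnitude;
          if (mantissa < 3.5)  return 2.0 * magnitude;
          if (mantissa < 7.5)  return 5.0 * magnitude;
          return 10.0 * magnitude;
      }

      /* e.g. round_to_nice(slider_to_meters(slider.value, 10.0, 999000.0)) -> 10, 20, 50, 100, ... */

    Formatting the result as "200 m" or "2.5 km" is then a single check against 1000 rather than a cascade of range tests.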

    Read the article

  • Bandwidth for Silverlight Apps

    - by JAllen
    I have an idea for building a simple online version of Microsoft Visio. The application will be built using Silverlight capabilities. People will be able to design flowcharts similar to how they do in Visio, and they will be able to collaborate and work simultaneously on the design. Now, I need to get an idea of the bandwidth such an application might consume. I am not sure how Silverlight works internally, so I need to get an idea of whether such an application can be built in a way that makes it economically feasible to sell as a software-as-a-service product.

    Read the article
