Search Results

Search found 23127 results on 926 pages for 'based'.


  • Does Windows Azure support the Application Warm-Up module or something similar?

    - by Corey O'Brien
    We have a Web Role that we are hosting in Windows Azure that uses an old ASMX-based Web Reference to contact an external system. The Web Reference proxy code is big enough that instantiating it for the first time has a significant cost. We'd like this to run when the Web Role starts instead of on the first request. I know IIS 7.5 has an Application Warm-Up module that would let us achieve this, but I'm having trouble figuring out whether something similar exists when hosting on Windows Azure. Thanks, Corey
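
    One commonly suggested workaround (a sketch against the 2010-era SDK, not an official warm-up feature) is to have the role request its own site from OnStart, so IIS spins the application up, and the proxy can be instantiated from Application_Start, before real traffic arrives. "Endpoint1" is the default endpoint name from ServiceDefinition.csdef; adjust it to yours:

        using System.Net;
        using Microsoft.WindowsAzure.ServiceRuntime;

        public class WebRole : RoleEntryPoint
        {
            public override bool OnStart()
            {
                // Best-effort self-request so the app domain starts warm.
                var endpoint = RoleEnvironment.CurrentRoleInstance
                                              .InstanceEndpoints["Endpoint1"].IPEndpoint;
                try
                {
                    using (var client = new WebClient())
                    {
                        client.DownloadString("http://" + endpoint + "/");
                    }
                }
                catch (WebException)
                {
                    // Don't fail role startup if the site isn't answering yet.
                }
                return base.OnStart();
            }
        }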

    Read the article

  • Help needed in order to avoid recursion

    - by Srivatsa
    Hello all. I have a method which gives me the required number of boxes based on the number of devices each box can hold. Currently I have implemented this logic using recursion:

        private uint PerformRecursiveDivision(uint m_oTotalDevices, uint m_oDevicesPerBox, ref uint BoxesRequired)
        {
            if (m_oTotalDevices < m_oDevicesPerBox)
            {
                BoxesRequired = 1;
            }
            else if ((m_oTotalDevices - m_oDevicesPerBox >= 0) &&
                     (m_oTotalDevices - m_oDevicesPerBox) < m_oDevicesPerBox)
            {
                // Terminating condition
                BoxesRequired++;
                return BoxesRequired;
            }
            else
            {
                // Call recursive function
                BoxesRequired++;
                return PerformRecursiveDivision(m_oTotalDevices - m_oDevicesPerBox, m_oDevicesPerBox, ref BoxesRequired);
            }
            return BoxesRequired;
        }

    Is there a better way to implement the same logic without using recursion? This method is making my application very slow when the number of devices exceeds 50000. Thanks in advance for your support. Constant learner
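
    The repeated subtraction is integer division in disguise, so the whole recursion collapses to a single O(1) division. A minimal sketch, assuming the intent is "one extra box for any partial remainder" (note the posted code actually returns floor(totalDevices / devicesPerBox) once totalDevices >= devicesPerBox, so check which behaviour you really want):

        // One box per full load of devicesPerBox, plus one for any remainder.
        private static uint GetBoxesRequired(uint totalDevices, uint devicesPerBox)
        {
            if (devicesPerBox == 0)
                throw new ArgumentOutOfRangeException("devicesPerBox");
            if (totalDevices == 0)
                return 1; // the posted code reports 1 box when there are no devices
            return (totalDevices + devicesPerBox - 1) / devicesPerBox;
        }

    Either way, the slowness comes from a recursion depth proportional to totalDevices / devicesPerBox; any closed-form division removes it.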

    Read the article

  • basic sql group by with percentage

    - by David in Dakota
    I have an issue and NO, it is not homework; it's just a programmer who has been away from SQL for a long time having to solve a problem. I have the following table:

        create table students(
            studentid int identity(1,1),
            [name] varchar(200),
            [group] varchar(10),
            grade numeric(9,2)
        )
        go

    The group is something arbitrary; assume it's "Group A", "Group B", and so on. The grade is on a scale of 0-100. If there are 5 students in each group with grades randomly assigned, what is the best approach to getting the top 3 students (the top 80%) based on their grade? To be more concrete, if I had the following:

        Ronald, Group A, 84.5
        George H, Group A, 82.3
        Bill, Group A, 92.0
        George W, Group A, 45.5
        Barack, Group A, 85.0

    I'd get back Ronald, Bill, and Barack. I'd also need to do this over other groups.
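
    One common approach on SQL Server 2005 and later is a ranking function partitioned by group; a sketch, assuming "top 3 rows per group by grade" is what's wanted:

        select name, [group], grade
        from (
            select name, [group], grade,
                   row_number() over (partition by [group] order by grade desc) as rn
            from students
        ) ranked
        where rn <= 3;

    If "top 80%" is really meant as a percentage rather than a fixed count, NTILE(5) over the same partition, keeping tiles 1 through 4, is one way to approximate it.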

    Read the article

  • Textarea for my web application

    - by Parhs
    Hello ... I am developing a web-based application that will let users extend part of the application using JavaScript via java.scripting http://java.sun.com/javase/6/docs/technotes/guides/scripting/programmer_guide/index.html The problem is that the user should write only some conditions, and I want, if possible, to colour some special words and to check while the user is typing whether the variable being typed exists, like an auto-suggest feature. I searched the web but didn't find much. For WYSIWYG editors I couldn't determine whether you can programmatically apply styling and set/get the caret position. Also, I saw that auto-suggest is nearly impossible inside the textarea itself; it would have to be located outside of the textarea.

    Read the article

  • Which are the Extreme Programming "core" practices?

    - by MiKo
    Recently, I began reading about agile methodologies and XP in particular. I am a bit confused, though, about what are considered the practices involved in Extreme Programming. More precisely: Wikipedia reports 12 practices, which I somehow believe to be the "classic" ones. Both Kent Beck and Ron Jeffries indicate 13 practices (you can find the links at the bottom of the Wikipedia page about "Extreme Programming Practices"; I cannot post them here since I am a new user of Stack Overflow), while this review of Kent Beck's "XP Explained" (2nd edition) reports more than 20 somewhat different practices. As a complete beginner in the topic (and basically as a complete beginner as a programmer), I would like to be enlightened on the matter. My impression is that I should look at Beck's book, since the second edition was written after several years of XPerience, but I can find a lot less material based on that.

    Read the article

  • Performance benefits of upgrading Richfaces to newer version

    - by peteDog
    I have a client that's running an application based on JBoss 4.0.5, Seam 1.2 and RichFaces 3.0.1. Their system is having performance problems due to the fact that a lot of data is coming back from the server to be displayed on screen, and it seems like the rendering of that data is taking forever. The data brought back is displayed in a tabbed interface, but the tabs aren't currently being loaded individually; they are all loaded at once. I'm trying to build up a case to present to the client on the benefits of upgrading to a newer version of RichFaces, which, as I understand it, has added a great number of features related to tabbed panels and being able to use Ajax to page the data and load only the chunks you actually need to display at the moment, and not the rest that's in other tabs. The move to a newer version of RichFaces will also result in newer versions of JBoss and Seam, as the current production build of RichFaces 3.2.1 requires JSF 1.2. If anyone has suggestions or experience regarding the performance of current versions of RichFaces, paging, etc., I would really appreciate some feedback.

    Read the article

  • django sphinx automodule -- basics

    - by haras.pl
    Hi, I have a project with several large apps, where the settings and app files are split. The directory structure goes something like this:

        project_name/
            __init__.py
            apps/
                __init__.py
                app1/
                app2/
            3rdparty/
                __init__.py
                lib1/
                lib2/
            settings/
                __init__.py
                installed_apps.py
                path.py
                templates.py
                locale.py
                ...
            urls.py

    Every app looks like this:

        __init__.py
        admin/
            __init__.py
            file1.py
            file2.py
        models/
            __init__.py
            model1.py
            model2.py
        tests/
            __init__.py
            test1.py
            test2.py
        views/
            __init__.py
            view1.py
            view2.py
        urls.py

    How do I use Sphinx to autogenerate documentation for this? I want something like: for each app in the settings module's INSTALLED_APPS (not starting with django.* or 3rdparty.*), produce auto-generated documentation output based on the docstrings, and run the tests before each git commit. By the way, I tried writing the .rst files by hand with

        .. automodule:: module_name
            :members:

    but that is painful for such a big project, and it does not work for settings. Is there an autogen method or something? I am not tied to Sphinx; is there a better solution for my problem?
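
    One way to avoid hand-writing the .rst files is a small script that emits an automodule stub per app; a sketch, where the import path is a guess at your layout (newer Sphinx releases also ship sphinx-apidoc, which automates much of this):

        import os
        from project_name.settings import installed_apps  # hypothetical import path

        SKIP_PREFIXES = ('django.', '3rdparty.')

        def write_stubs(out_dir='docs/apps'):
            if not os.path.isdir(out_dir):
                os.makedirs(out_dir)
            for app in installed_apps.INSTALLED_APPS:
                if app.startswith(SKIP_PREFIXES):
                    continue  # skip framework and third-party apps
                with open(os.path.join(out_dir, app + '.rst'), 'w') as f:
                    f.write(app + '\n' + '=' * len(app) + '\n\n')
                    f.write('.. automodule:: ' + app + '\n    :members:\n')

        if __name__ == '__main__':
            write_stubs()

    Running the tests before each commit is a separate concern; a git pre-commit hook that calls your test runner is the usual mechanism.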

    Read the article

  • Error: An aggregate may not appear in the WHERE clause unless it is in a subquery contained in a HAVING clause

    - by brz dot net
    I have to find the indentid from the status table based on the two conditions below:

        1. If there is more than one record with the same indentid in the status table, and that
           indentid has count > 1 in the feasibilitystatus table, then I don't want to display the record.
        2. If there is only one record of an indentid in the status table, and that indentid has
           count > 0 in the feasibilitystatus table, then I don't want to display the record.

    Query:

        select distinct s.indentid from status s
        where s.status='true' and s.indentid not in(
            select case when count(s.indentid)>1
                then (select indentid from feasibilitystatus group by indentid having count(indentid)>1)
                else (select indentid from feasibilitystatus group by indentid having count(indentid)>0)
            end as indentid
            from status)

    Error: An aggregate may not appear in the WHERE clause unless it is in a subquery contained in a HAVING clause or a select list, and the column being aggregated is an outer reference.
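
    One way around the aggregate-in-WHERE restriction is to precompute the per-indentid counts in derived tables and express both exclusion rules as joins; a sketch of the intent as I read it (untested, names taken from the question):

        select distinct s.indentid
        from status s
        join (select indentid, count(*) as scount
              from status
              group by indentid) sc on sc.indentid = s.indentid
        left join (select indentid, count(*) as fcount
                   from feasibilitystatus
                   group by indentid) fc on fc.indentid = s.indentid
        where s.status = 'true'
          and not (sc.scount > 1 and coalesce(fc.fcount, 0) > 1)  -- rule 1
          and not (sc.scount = 1 and coalesce(fc.fcount, 0) > 0); -- rule 2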

    Read the article

  • Create Task Report from Mylyn?

    - by luis.espinal
    Hello all - is there a way to create a task/activity report (say, a weekly report) off tasks managed with Mylyn? I've been using Rachota TimeTracker, which allows me to create reports (in HTML format): http://rachota.sourceforge.net/en/demo.html I've just started using Mylyn (our company uses Embarcadero JBuilder, which is based on Eclipse), but I don't see anything in the Eclipse or Embarcadero docs about reporting capabilities. Is it possible? Is it possible to query activities worked on during a prior week and report statistics out of it (management-like reports, you know)? I'm sure it is, but I haven't been able to google it out. Thanks.

    Read the article

  • Basics of Join Factorization

    - by Hong Su
    We continue our series on optimizer transformations with a post that describes the Join Factorization transformation. The Join Factorization transformation was introduced in Oracle 11g Release 2 and applies to UNION ALL queries. UNION ALL queries are commonly used in database applications, especially in data integration applications. In many scenarios the branches of a UNION ALL query share common processing, i.e., they refer to the same tables. In the current Oracle execution strategy, each branch of a UNION ALL query is evaluated independently, which leads to repetitive processing, including data access and joins. The join factorization transformation offers an opportunity to share the common computations across the UNION ALL branches. Currently, join factorization factorizes common references to base tables only, i.e., not views.

    Consider a simple example, query Q1.

        Q1: select t1.c1, t2.c2
            from t1, t2, t3
            where t1.c1 = t2.c1 and t1.c1 > 1 and t2.c2 = 2 and t2.c2 = t3.c2
            union all
            select t1.c1, t2.c2
            from t1, t2, t4
            where t1.c1 = t2.c1 and t1.c1 > 1 and t2.c3 = t4.c3;

    Table t1 appears in both branches, as do the filter predicate on t1 (t1.c1 > 1) and the join predicate involving t1 (t1.c1 = t2.c1). Nevertheless, without any transformation, the scan (and the filtering) on t1 has to be done twice, once per branch. Such a query may benefit from join factorization, which can transform Q1 into Q2 as follows:

        Q2: select t1.c1, VW_JF_1.item_2
            from t1, (select t2.c1 item_1, t2.c2 item_2
                      from t2, t3
                      where t2.c2 = t3.c2 and t2.c2 = 2
                      union all
                      select t2.c1 item_1, t2.c2 item_2
                      from t2, t4
                      where t2.c3 = t4.c3) VW_JF_1
            where t1.c1 = VW_JF_1.item_1 and t1.c1 > 1;

    In Q2, t1 is "factorized", and thus the table scan and the filtering on t1 are done only once (they are shared). If t1 is large, then avoiding one extra scan of t1 can lead to a huge performance improvement.

    Another benefit of join factorization is that it can open up more join orders. Let's look at query Q3.

        Q3: select *
            from t5, (select t1.c1, t2.c2
                      from t1, t2, t3
                      where t1.c1 = t2.c1 and t1.c1 > 1 and t2.c2 = 2 and t2.c2 = t3.c2
                      union all
                      select t1.c1, t2.c2
                      from t1, t2, t4
                      where t1.c1 = t2.c1 and t1.c1 > 1 and t2.c3 = t4.c3) V
            where t5.c1 = V.c1;

    In Q3, view V is the same as Q1. Before join factorization, t1, t2 and t3 must be joined first before they can be joined with t5. But if join factorization factorizes t1 out of view V, t1 can then be joined with t5. This opens up new join orders.

    That being said, join factorization imposes certain join orders. For example, in Q2, t2 and t3 appear in the first branch of the UNION ALL query in view VW_JF_1. T2 must be joined with t3 before it can be joined with t1, which is outside of the VW_JF_1 view. The imposed join order may not necessarily be the best join order. For this reason, join factorization is performed under the cost-based transformation framework: we cost the plans with and without join factorization and choose the cheapest plan.

    Note that if the branches of a UNION ALL query have DISTINCT clauses, join factorization is not valid. For example, Q4 is NOT semantically equivalent to Q5.

        Q4: select distinct t1.*
            from t1, t2
            where t1.c1 = t2.c1
            union all
            select distinct t1.*
            from t1, t2
            where t1.c1 = t2.c1;

        Q5: select distinct t1.*
            from t1, (select t2.c1 item_1
                      from t2
                      union all
                      select t2.c1 item_1
                      from t2) VW_JF_1
            where t1.c1 = VW_JF_1.item_1;

    Q4 might return more rows than Q5. Q5's results are guaranteed to be duplicate-free because of the DISTINCT keyword at the top level, while Q4's results might contain duplicates.

    The examples given so far involve inner joins only. Join factorization is also supported for outer joins, anti joins and semi joins, but only the right tables of outer joins, anti joins and semi joins can be factorized. It is not semantically correct to factorize the left table of an outer join, anti join or semi join. For example, Q6 is NOT semantically equivalent to Q7.

        Q6: select t1.c1, t2.c2
            from t1, t2
            where t1.c1 = t2.c1(+) and t2.c2(+) = 2
            union all
            select t1.c1, t2.c2
            from t1, t2
            where t1.c1 = t2.c1(+) and t2.c2(+) = 3;

        Q7: select t1.c1, VW_JF_1.item_2
            from t1, (select t2.c1 item_1, t2.c2 item_2
                      from t2
                      where t2.c2 = 2
                      union all
                      select t2.c1 item_1, t2.c2 item_2
                      from t2
                      where t2.c2 = 3) VW_JF_1
            where t1.c1 = VW_JF_1.item_1(+);

    However, the right side of an outer join can be factorized. For example, join factorization can transform Q8 into Q9 by factorizing t2, which is the right table of an outer join.

        Q8: select t1.c2, t2.c2
            from t1, t2
            where t1.c1 = t2.c1(+) and t1.c1 = 1
            union all
            select t1.c2, t2.c2
            from t1, t2
            where t1.c1 = t2.c1(+) and t1.c1 = 2;

        Q9: select VW_JF_1.item_2, t2.c2
            from t2, (select t1.c1 item_1, t1.c2 item_2
                      from t1
                      where t1.c1 = 1
                      union all
                      select t1.c1 item_1, t1.c2 item_2
                      from t1
                      where t1.c1 = 2) VW_JF_1
            where VW_JF_1.item_1 = t2.c1(+);

    All of the examples in this blog show a single table being factorized out of two branches, but this is just for ease of illustration. Join factorization can factorize multiple tables, and from more than two UNION ALL branches.

    Summary

    Join factorization is a cost-based transformation. It can factorize common computations from the branches of a UNION ALL query, which can lead to huge performance improvements.

    Read the article

  • Example of code generator you made from scratch?

    - by rosscj2533
    What are some examples of code generators you have used? I think it's a cool idea, but I have trouble thinking of things they can do besides making a class based on an object's attributes/database schema (as described in The Pragmatic Programmer). What language did you write them in, and what language did they output? Edit: Thanks for the responses so far. What I am really looking for is examples of code generators made from scratch for a specific purpose. I mentioned it in the title, but didn't make it very clear in my question. How did you go about making a code generator on your own, and what specifically did it achieve?

    Read the article

  • Do I need the bin\debug\appName.vshost.exe and appName.vshost.manifest in my SVN code repository?

    - by Bruce Lee
    I am building an application which is based on a sample application, written in C# on .NET 2 and built in VS2008. This application is mostly a wrapper for a COM application; however, I compile it in .NET 3.5. The sample application came with the following files in its bin\debug:

        appName.vshost.exe
        appName.vshost.exe.manifest

    I noticed that I can delete the files and VS re-builds vshost.exe, and the vshost.manifest file reappears with a modification date the same as the deleted file's, as if VS has copied it from somewhere. My question is: should I put these files in my SVN code repository?

    Read the article

  • linux locale unset

    - by naugtur
    I have an ARM-based machine with an Ubuntu distro on it, and it often feeds me this while running various commands:

        Please check that your locale settings:
            LANGUAGE = (unset),
            LC_ALL = (unset),
            LANG = "pl_PL.UTF-8"

    This is the output of the locale command:

        locale: Cannot set LC_CTYPE to default locale: No such file or directory
        locale: Cannot set LC_MESSAGES to default locale: No such file or directory
        locale: Cannot set LC_ALL to default locale: No such file or directory
        LANG=pl_PL.UTF-8
        LC_CTYPE="pl_PL.UTF-8"
        LC_NUMERIC="pl_PL.UTF-8"
        LC_TIME="pl_PL.UTF-8"
        LC_COLLATE="pl_PL.UTF-8"
        LC_MONETARY="pl_PL.UTF-8"
        LC_MESSAGES="pl_PL.UTF-8"
        LC_PAPER="pl_PL.UTF-8"
        LC_NAME="pl_PL.UTF-8"
        LC_ADDRESS="pl_PL.UTF-8"
        LC_TELEPHONE="pl_PL.UTF-8"
        LC_MEASUREMENT="pl_PL.UTF-8"
        LC_IDENTIFICATION="pl_PL.UTF-8"
        LC_ALL=

    What should I do to stop this from popping up now and then, and to configure it properly for ąęśćżńół [important characters of mine]?
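
    On Debian/Ubuntu the usual fix is to generate the missing locale data and record the default; a sketch of the standard commands (run as root; on minimal images you may need to install the locales package first):

        locale-gen pl_PL.UTF-8          # build the pl_PL.UTF-8 locale data
        update-locale LANG=pl_PL.UTF-8  # persist the default in /etc/default/locale

    After logging in again, locale should list all categories without the "No such file or directory" complaints, and the Polish diacritics should render correctly in any UTF-8 terminal.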

    Read the article

  • How "duplicated" Java code is optimized by the JVM JIT compiler?

    - by Renan Vinícius Mozone
    I'm in charge of maintaining a JSP-based application running on IBM WebSphere 6.1 (IBM J9 JVM). All JSP pages have a static include reference, and in this include file there are some static Java methods declared. They are included in all JSP pages to offer "easy access" to those utility static methods. I know that this is a very bad way to work, and I'm working to change it. But, just out of curiosity, and to support my effort in changing this, I'm wondering how these "duplicated" static methods are optimized by the JVM JIT compiler. Are they optimized separately even though they have the exact same signature? Does the JVM JIT compiler "see" that these methods are all identical and provide unified JIT'ed code?

    Read the article

  • Cannot format first column in WxPython's ListCtrl

    - by Matt Clarke
    If I set the format of the first column in a ListCtrl to align centre (or align right), nothing happens. It works for the other columns. This only happens on Windows; I have tested it on Linux and it works fine. Does anyone know if there is a workaround or other solution? Here is an example based on code found at http://zetcode.com/wxpython/

        import wx
        import sys

        packages = [('jessica alba', 'pomona', '1981'),
                    ('sigourney weaver', 'new york', '1949'),
                    ('angelina jolie', 'los angeles', '1975'),
                    ('natalie portman', 'jerusalem', '1981'),
                    ('rachel weiss', 'london', '1971'),
                    ('scarlett johansson', 'new york', '1984')]

        class Actresses(wx.Frame):
            def __init__(self, parent, id, title):
                wx.Frame.__init__(self, parent, id, title, size=(380, 230))
                hbox = wx.BoxSizer(wx.HORIZONTAL)
                panel = wx.Panel(self, -1)
                self.list = wx.ListCtrl(panel, -1, style=wx.LC_REPORT)
                self.list.InsertColumn(0, 'name', wx.LIST_FORMAT_CENTRE, width=140)
                self.list.InsertColumn(1, 'place', wx.LIST_FORMAT_CENTRE, width=130)
                self.list.InsertColumn(2, 'year', wx.LIST_FORMAT_CENTRE, 90)
                for i in packages:
                    index = self.list.InsertStringItem(sys.maxint, i[0])
                    self.list.SetStringItem(index, 1, i[1])
                    self.list.SetStringItem(index, 2, i[2])
                hbox.Add(self.list, 1, wx.EXPAND)
                panel.SetSizer(hbox)
                self.Centre()
                self.Show(True)

        app = wx.App()
        Actresses(None, -1, 'actresses')
        app.MainLoop()
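
    This is a limitation of the native Win32 list-view control, which always left-aligns the first column. The usual workaround is to insert a dummy column 0 of zero width so your real columns start at index 1; a sketch of the changed lines (untested against any specific wxPython build):

        # Dummy column 0 absorbs the native left-align restriction.
        self.list.InsertColumn(0, '', width=0)
        self.list.InsertColumn(1, 'name', wx.LIST_FORMAT_CENTRE, width=140)
        self.list.InsertColumn(2, 'place', wx.LIST_FORMAT_CENTRE, width=130)
        self.list.InsertColumn(3, 'year', wx.LIST_FORMAT_CENTRE, 90)
        for i in packages:
            index = self.list.InsertStringItem(sys.maxint, '')
            self.list.SetStringItem(index, 1, i[0])
            self.list.SetStringItem(index, 2, i[1])
            self.list.SetStringItem(index, 3, i[2])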

    Read the article

  • JTabbedPane with different-sized tabs

    - by Eric
    Hi, I'm making a search function for a database that uses a JTabbedPane with one tab for a quick search and one for an advanced search. The advanced search has quite a few more fields, so it needs to be larger, but I don't want the whole window to always be at the largest size, for aesthetic reasons. I have added a ChangeListener to the pane and tried resizing the pane based on which tab is selected; the code executes when the tab is clicked, but does nothing. I've tried using tabbedSearchPane.setSize() followed by tabbedSearchPane.validate(), as well as resizing components around it using setSize() + validate(), and nothing seems to do anything. I really just need to know how to ACTUALLY change the size of a JTabbedPane.
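
    In Swing, a direct setSize() call on a component inside a layout is overridden by the layout manager on the next layout pass; what sticks is the preferred size, followed by re-packing the window. A sketch of one approach inside the ChangeListener you already have (tab indices and dimensions are made-up placeholders):

        import java.awt.Dimension;
        import java.awt.Window;
        import javax.swing.SwingUtilities;
        import javax.swing.event.ChangeEvent;
        import javax.swing.event.ChangeListener;

        // Re-pack the window whenever the selected tab changes, so each
        // tab gets the window size it needs.
        tabbedSearchPane.addChangeListener(new ChangeListener() {
            public void stateChanged(ChangeEvent e) {
                boolean advanced = tabbedSearchPane.getSelectedIndex() == 1;
                tabbedSearchPane.setPreferredSize(advanced
                        ? new Dimension(520, 400)   // advanced search: more fields
                        : new Dimension(320, 180)); // quick search: compact
                Window w = SwingUtilities.getWindowAncestor(tabbedSearchPane);
                if (w != null) {
                    w.pack(); // lays the window out around the new preferred size
                }
            }
        });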

    Read the article

  • Combining lists but getting unique members

    - by MC
    I have a bit of a special requirement when combining lists. I will try to illustrate with an example. Let's say I'm working with two lists of GamePlayer objects. GamePlayer has a property called LastGamePlayed, and a unique GamePlayer is identified by the GamePlayer.ID property. Now I'd like to combine listA and listB into one list, and if a given player is present in both lists I'd like to keep the value from listA. I can't just combine the lists and use a comparer, because my uniqueness is based on ID, and if my comparer checks ID I will not have control over whether it picks the element of listA or listB. I need something like:

        foreach (var player in listB)
        {
            if (!listA.Contains(player))
            {
                listFinal.Add(player);
            }
        }

    However, is there a more optimal way to do this than searching listA for each element of listB?
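
    Building a lookup set of listA's IDs first turns the per-element search into an O(1) probe, making the whole merge O(n + m) instead of O(n * m). A sketch (assumes .NET 3.5 LINQ and an int ID; adjust the key type to match GamePlayer.ID):

        using System.Collections.Generic;
        using System.Linq;

        // Keep every player from listA, then add listB players whose ID
        // wasn't already present, so listA wins on conflicts.
        var seenIds = new HashSet<int>(listA.Select(p => p.ID));
        var listFinal = new List<GamePlayer>(listA);
        listFinal.AddRange(listB.Where(p => !seenIds.Contains(p.ID)));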

    Read the article

  • Date only from TextBoxFor()

    - by thekronos
    Hello, I'm having trouble displaying only the date part of a DateTime in a textbox using TextBoxFor(expression, htmlAttributes). The model is based on LINQ to SQL; the field is a DateTime in SQL and in the entity model. Failed:

        <%= Html.TextBoxFor(model => model.dtArrivalDate,
                String.Format("{0:dd/MM/yyyy}", Model.dtArrivalDate)) %>

    PS: this trick seems to be deprecated; any string value in the htmlAttributes object is ignored. Failed:

        [DisplayFormat(DataFormatString = "{0:dd/MM/yyyy}")]
        public string dtArrivalDate { get; set; }

    I would like to store and display only the date part on the details/edit views, without the "00:00:00" part. Any ideas, please? Merry Christmas from France to all, by the way ;-)
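
    In ASP.NET MVC 2 and later, the combination that usually works is to keep the property a DateTime, set ApplyFormatInEditMode on the DisplayFormat attribute, and render with EditorFor, since TextBoxFor does not consult DisplayFormat; a sketch:

        [DisplayFormat(DataFormatString = "{0:dd/MM/yyyy}", ApplyFormatInEditMode = true)]
        public DateTime dtArrivalDate { get; set; }

        <%-- in the view --%>
        <%= Html.EditorFor(model => model.dtArrivalDate) %>

    If you must stay with a plain textbox, formatting the value yourself through the non-lambda helper is the usual fallback:

        <%= Html.TextBox("dtArrivalDate", Model.dtArrivalDate.ToString("dd/MM/yyyy")) %>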

    Read the article

  • 500px.com Ranking Algorithm

    - by alex
    I was recently wondering how http://500px.com calculates their "Pulse" rating. The "Pulse" is a score from 1..100 based on the popularity of the photo. I think it might use some of the following criteria:

        - Number of likes
        - Number of "favorites"
        - Number of comments
        - Total views
        - maybe the time since the photo was uploaded
        - maybe some other non-obvious criteria, like the user's follower count, user rank, camera model, or similar

    How would I achieve some sort of algorithm like this? Any advice on how to implement an algorithm with these criteria (and maybe some code) would be appreciated too.
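
    A common starting point is a weighted linear score with time decay, in the spirit of the published Hacker News/Reddit ranking formulas; the weights below are made-up placeholders to tune against real data, and this is in no way 500px's actual formula:

        import math
        from datetime import datetime, timezone

        def pulse(likes, favorites, comments, views, uploaded_at, follower_count=0):
            """Toy 0-100 popularity score; uploaded_at must be timezone-aware."""
            raw = (4.0 * likes + 3.0 * favorites + 2.0 * comments
                   + 0.1 * views + 0.5 * math.log1p(follower_count))
            hours = (datetime.now(timezone.utc) - uploaded_at).total_seconds() / 3600.0
            decayed = raw / (hours + 2.0) ** 1.5   # gravity: older photos need more votes
            return 100.0 * (1.0 - math.exp(-decayed))  # squash into [0, 100)

    The 1 - exp(-x) squash maps any non-negative raw score smoothly into [0, 100), so the scale stays stable as the raw counts grow.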

    Read the article

  • 64-bit Archives Needed

    - by user9154181
    A little over a year ago, we received a question from someone who was trying to build software on Solaris. He was getting errors from the ar command when creating an archive. At that time, the ar command on Solaris was a 32-bit command. There was more than 2GB of data, and the ar command was hitting the file size limit for a 32-bit process that doesn't use the largefile APIs. Even in 2011, 2GB is a very large amount of code, so we had not heard this one before. Most of our toolchain was extended to handle 64-bit sized data back in the 1990's, but archives were not changed, presumably because there was no perceived need for it. Since then of course, programs have continued to get larger, and in 2010, the time had finally come to investigate the issue and find a way to provide for larger archives.

    As part of that process, I had to do a deep dive into the archive format, and also do some Unix archeology. I'm going to record what I learned here, to document what Solaris does, and in the hope that it might help someone else trying to solve the same problem for their platform.

    Archive Format Details

    Archives are hardly cutting edge technology. They are still used of course, but their basic form hasn't changed in decades. Other than to fix a bug, which is rare, we don't tend to touch that code much. The archive file format is described in /usr/include/ar.h, and I won't repeat the details here. Instead, here is a rough overview of the archive file format, implemented by System V Release 4 (SVR4) Unix systems such as Solaris:

        - Every archive starts with a "magic number". This is a sequence of 8 characters: "!<arch>\n".
        - The magic number is followed by 1 or more members. A member starts with a fixed header, defined by the ar_hdr structure in /usr/include/ar.h. Immediately following the header comes the data for the member.
        - Members must be padded at the end with newline characters so that they have even length.

    The requirement to pad members to an even length is a dead giveaway as to the age of the archive format. It tells you that this format dates from the 1970's, and more specifically from the era of 16-bit systems such as the PDP-11 that Unix was originally developed on. A 32-bit system would have required 4 bytes, and 64-bit systems such as we use today would probably have required 8 bytes. 2-byte alignment is a poor choice for ELF object archive members: 32-bit objects require 4-byte alignment, and 64-bit objects require 64-bit alignment. The link-editor uses mmap() to process archives, and if the members have the wrong alignment, we have to slide (copy) them to the correct alignment before we can access the ELF data structures inside. The archive format requires 2-byte padding, but it doesn't prohibit more. The Solaris ar command takes advantage of this, and pads ELF object members to 8-byte boundaries. Anything else is padded to 2 as required by the format.

    The archive header (ar_hdr) represents all numeric values using an ASCII text representation rather than as binary integers. This means that an archive that contains only text members can be viewed using tools such as cat, more, or a text editor. The original designers of this format clearly thought that archives would be used for many file types, and not just for objects. Things didn't turn out that way of course; nearly all archives contain relocatable objects for a single operating system and machine, and are used primarily as input to the link-editor (ld).
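
    For reference, the fixed member header those rules describe is a 60-byte block of printable ASCII fields; a sketch mirroring the declaration in /usr/include/ar.h (comments are mine):

        struct ar_hdr              /* archive member header */
        {
            char ar_name[16];      /* member name, '/' terminated */
            char ar_date[12];      /* modification time, decimal seconds */
            char ar_uid[6];        /* user id, decimal */
            char ar_gid[6];        /* group id, decimal */
            char ar_mode[8];       /* file mode, octal */
            char ar_size[10];      /* member size in bytes, decimal */
            char ar_fmag[2];       /* header trailer: "`\n" */
        };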
    Archives can have special members that are created by the ar command rather than being supplied by the user. These special members are all distinguished by having a name that starts with the slash (/) character. This is an unambiguous marker that says that the user could not have supplied it. The reason for this is that regular archive members are given the plain name of the file that was inserted to create them, and any path components are stripped off. Slash is the delimiter character used by Unix to separate path components, and as such cannot occur within a plain file name. The ar command hides the special members from you when you list the contents of an archive, so most users don't know that they exist. There are only two possible special members: a symbol table that maps ELF symbols to the object archive member that provides them, and a string table used to hold member names that exceed 15 characters. The '/' convention for tagging special members provides room for adding more such members should the need arise. As I will discuss below, we took advantage of this fact to add an alternate 64-bit symbol table special member which is used in archives that are larger than 4GB.

    When an archive contains ELF object members, the ar command builds a special archive member known as the symbol table that maps all ELF symbols in the objects to the archive member that provides them. The link-editor uses this symbol table to determine which symbols are provided by the objects in that archive. If an archive has a symbol table, it will always be the first member in the archive, immediately following the magic number. Unlike member headers, symbol tables do use binary integers to represent offsets. These integers are always stored in big-endian format, even on a little-endian host such as x86.

    The archive header (ar_hdr) provides 15 characters for representing the member name. If any member has a name that is longer than this, then the real name is written into a special archive member called the string table, and the member's name field instead contains a slash (/) character followed by a decimal representation of the offset of the real name within the string table. The string table is required to precede all normal archive members, so it will be the second member if the archive contains a symbol table, and the first member otherwise.

    The archive format is not designed to make finding a given member easy. Such operations move through the archive from front to back examining each member in turn, and run in O(n) time. This would be bad if archives were commonly used in that manner, but in general, they are not. Typically, the ar command is used to build a new archive from scratch, inserting all the objects in one operation, and then the link-editor accesses the members in the archive in constant time by using the offsets provided by the symbol table. Both of these operations are reasonably efficient. However, listing the contents of a large archive with the ar command can be rather slow.

    Factors That Limit Solaris Archive Size

    As is often the case, there was more than one limiting factor preventing Solaris archives from growing beyond the 32-bit limits of 2GB (32-bit signed) and 4GB (32-bit unsigned). These limits are listed in the order they are hit as archive size grows, so the earlier ones mask those that follow.

        1. The original Solaris archive file format can handle sizes up to 4GB without issue. However, the ar command was delivered as a 32-bit executable that did not use the largefile APIs. As such, the ar command itself could not create a file larger than 2GB. One can solve this by building ar with the largefile APIs, which would allow it to reach 4GB, but a simpler and better answer is to deliver a 64-bit ar, which has the ability to scale well past 4GB.
        2. Symbol table offsets are stored as 32-bit big-endian binary integers, which limits the maximum archive size to 4GB. To get around this limit requires a different symbol table format, or an extension mechanism to the current one, similar in nature to the way member names longer than 15 characters are handled in member headers.
        3. The size field in the archive member header (ar_hdr) is an ASCII string capable of representing a 32-bit unsigned value. This places a 4GB size limit on the size of any individual member in an archive.

    In considering format extensions to get past these limits, it is important to remember that very few archives will require the ability to scale past 4GB for many years. The old format, while no beauty, continues to be sufficient for its purpose. This argues for a backward-compatible fix that allows newer versions of Solaris to produce archives that are compatible with older versions of the system unless the size of the archive exceeds 4GB.

    Archive Format Differences Among Unix Variants

    While considering how to extend Solaris archives to scale to 64-bits, I wanted to know how similar archives from other Unix systems are to those produced by Solaris, and whether they had already solved the 64-bit issue. I've successfully moved archives between different Unix systems before with good luck, so I knew that there was some commonality. If it turned out that there was already a viable de facto standard for 64-bit archives, it would obviously be better to adopt that rather than invent something new.

    The archive file format is not formally standardized. However, the ar command and archive format were part of the original Unix from Bell Labs. Other systems started with that format, extending it in various, often incompatible, ways, but usually with the same common shared core. Most of these systems use the same magic number to identify their archives, despite the fact that their archives are not always fully compatible with each other. It is often true that archives can be copied between different Unix variants, and if the member names are short enough, the ar command from one system can often read archives produced on another.

    In practice, it is rare to find an archive containing anything other than objects for a single operating system and machine type. Such an archive is only of use on the type of system that created it, and is only used on that system. This is probably why cross-platform compatibility of archives between Unix variants has never been an issue. Otherwise, the use of the same magic number in archives with incompatible formats would be a problem.

    I was able to find information for a number of Unix variants, described below. These can be divided roughly into three tribes: SVR4 Unix, BSD Unix, and IBM AIX. Solaris is a SVR4 Unix, and its archives are completely compatible with those from the other members of that group (GNU/Linux, HP-UX, and SGI IRIX).

    AIX

    AIX is an exception to the rule that Unix archive formats are all based on the original Bell Labs Unix format. It appears that AIX supports 2 formats (small and big), both of which differ in fundamental ways from other Unix systems:

        - These formats use a different magic number than the standard one used by Solaris and other Unix variants.
        - They include support for removing archive members from a file without reallocating the file, marking dead areas as unused, and reusing them when new archive items are inserted.
        - They have a special table of contents member (File Member Header) which lets you find out everything that's in the archive without having to actually traverse the entire file. Their symbol table members are quite similar to those from other systems though.
        - Their member headers are doubly linked, containing offsets to both the previous and next members.

    Of the Unix systems described here, AIX has the only format I saw that will have reasonable insert/delete performance for really large archives. Everyone else has O(n) performance, and is going to be slow to use with large archives.

    BSD

    BSD has gone through 4 versions of archive format, which are described in their manpage. They use the same member header as SVR4, but their symbol table format is different, and their scheme for long member names puts the name directly after the member header rather than into a string table.

    GNU/Linux

    The GNU toolchain uses the SVR4 format, and is compatible with Solaris.

    HP-UX

    HP-UX seems to follow the SVR4 model, and is compatible with Solaris.

    IRIX

    IRIX has 32 and 64-bit archives. The 32-bit format is the standard SVR4 format, and is compatible with Solaris. The 64-bit format is the same, except that the symbol table uses 64-bit integers. IRIX assumes that an archive contains objects of a single ELFCLASS/MACHINE, and any archive containing ELFCLASS64 objects receives a 64-bit symbol table. Although they only use it for 64-bit objects, nothing in the archive format limits it to ELFCLASS64. It would be perfectly valid to produce a 64-bit symbol table in an archive containing 32-bit objects, text files, or anything else.

    Tru64 Unix (Digital/Compaq/HP)

    Tru64 Unix uses a format much like ours, but their symbol table is a hash table, making specific symbol lookup much faster. The Solaris link-editor uses archives by examining the entire symbol table looking for unsatisfied symbols for the link, and not by looking up individual symbols, so there would be no benefit to Solaris from such a hash table. The Tru64 ld must use a different approach in which the hash table pays off for them.

    Widening the existing SVR4 archive symbol tables rather than inventing something new is the simplest path forward. There is ample precedent for this approach in the ELF world. When ELF was extended to support 64-bit objects, the approach was largely to take the existing data structures, and define 64-bit versions of them. We called the old set ELF32, and the new set ELF64. My guess is that there was no need to widen the archive format at that time, but had there been, it seems obvious that this is how it would have been done.

    The Implementation of 64-bit Solaris Archives

    As mentioned earlier, there was no desire to improve the fundamental nature of archives. They have always had O(n) insert/delete behavior, and for the most part it hasn't mattered. AIX made efforts to improve this, but those efforts did not find widespread adoption. For the purposes of link-editing, which is essentially the only thing that archives are used for, the existing format is adequate, and issues of backward compatibility trump the desire to do something technically better. Widening the existing symbol table format to 64-bits is therefore the obvious way to proceed. For Solaris 11, I implemented that, and I also updated the ar command so that a 64-bit version is run by default.

    This eliminates the 2 most significant limits to archive size, leaving only the limit on an individual archive member. We only generate a 64-bit symbol table if the archive exceeds 4GB, or when the new -S option to the ar command is used. This maximizes backward compatibility, as an archive produced by Solaris 11 is highly likely to be less than 4GB in size, and will therefore employ the same format understood by older versions of the system. The main reason for the existence of the -S option is to allow us to test the 64-bit format without having to construct huge archives to do so. I don't believe it will find much use outside of that.

    Other than the new ability to create and use extremely large archives, this change is largely invisible to the end user. When reading an archive, the ar command will transparently accept either form of symbol table. Similarly, the ELF library (libelf) has been updated to understand either format. Users of libelf (such as the link-editor ld) do not need to be modified to use the new format, because these changes are encapsulated behind the existing functions provided by libelf.

    As mentioned above, this work did not lift the limit on the maximum size of an individual archive member. That limit remains fixed at 4GB for now. This is not because we think objects will never get that large, for the history of computing says otherwise. Rather, this is based on an estimation that single relocatable objects of that size will not appear for a decade or two. A lot can change in that time, and it is better not to overengineer things by writing code that will sit and rot for years without being used. It is not too soon however to have a plan for that eventuality. When the time comes when this limit needs to be lifted, I believe that there is a simple solution that is consistent with the existing format. The archive member header size field is an ASCII string, like the name, and as such, the overflow scheme used for long names can also be used to handle the size. The size string would be placed into the archive string table, and its offset in the string table would then be written into the archive header size field using the same format "/ddd" used for overflowed names.

    Read the article

  • Slow insert speed in Postgresql memory tablespace

    - by Prashant
    Hi, I have a requirement where I need to store records at a rate of 10,000 records/sec into a database (with indexing on a few fields). The number of columns in one record is 25. I am doing a batch insert of 100,000 records in one transaction block. To improve the insertion rate, I changed the tablespace from disk to RAM; with that I am able to achieve only 5,000 inserts per second. I have also done the following tuning in the postgres config:

        Indexes : no
        fsync : false
        logging : disabled

    Other information:

        Tablespace : RAM
        Number of columns in one row : 25 (mostly integers)
        CPU : 4 core, 2.5 GHz
        RAM : 48 GB

    I am wondering why a single insert query is taking around 0.2 msec on average when the database is not writing anything to disk (as I am using a RAM-based tablespace). Is there something I am doing wrong? Help appreciated. Prashant
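
    At these rates the per-statement overhead usually dominates, not the storage; PostgreSQL's COPY path amortizes parsing and client round trips and is the standard answer for bulk loads. A sketch (table and column names are hypothetical):

        -- a single multi-row INSERT already beats many single-row statements
        INSERT INTO readings (device_id, value, recorded_at)
        VALUES (1, 42.0, now()),
               (2, 17.5, now());

        -- COPY is faster still: stream CSV rows from the client in one command
        COPY readings (device_id, value, recorded_at) FROM STDIN WITH CSV;

    It is also worth confirming that the batch really runs over a single connection and transaction; around 0.2 ms per row is a typical round-trip cost when each INSERT is sent as its own statement.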

    Read the article

  • The right way to manage subviews in a UIControl

    - by ed94133
    (iPhone SDK 3.x:) I have a UIControl subclass that creates a different number of subviews depending on the length of an NSArray property. Please take my word for it that this needs to be a UIControl rather than a UIView. Currently I implement subview management in drawRect:, beginning by removing all subviews and then creating the appropriate number based on the property. I don't think this is very good memory management, and I'm not sure drawRect: is really the appropriate place to add subviews. Any thoughts on the best way to handle this pattern? Thank you.

    Read the article

  • Spring redirect: prefix issue

    - by Javi
    Hello, I have an application which uses Spring 3. I have a view resolver which builds my views based on a String, so in my controllers I have methods like this one:

        @RequestMapping(...)
        public String method() {
            // Some processing
            return "tiles:tileName";
        }

    I need to return a RedirectView to solve the duplicate submission caused by refreshing the page in the browser, so I thought of using Spring's redirect: prefix. The problem is that it only redirects when I use a URL after the prefix (not a name a resolver can understand). I wanted to do something like this:

        @RequestMapping(...)
        public String method() {
            // Some processing
            return "redirect:tiles:tileName";
        }

    Is there any way to use RedirectView with the String (the resolvable view name) I get from every controller method? Thanks
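
    The redirect: prefix issues a real HTTP redirect, so what follows it must be a URL the browser can request, not a logical view name. The usual POST-redirect-GET arrangement is to redirect to a URL whose GET handler returns the tiles view; a sketch (paths and method names are hypothetical):

        @RequestMapping(value = "/order", method = RequestMethod.POST)
        public String submitOrder() {
            // process the submission, then redirect so a refresh can't resubmit
            return "redirect:/order/confirmation";
        }

        @RequestMapping(value = "/order/confirmation", method = RequestMethod.GET)
        public String confirmation() {
            // the view resolver handles the logical name as usual
            return "tiles:tileName";
        }

    Note that flash-scope support for carrying model data across the redirect (RedirectAttributes) only arrived in Spring 3.1, so on 3.0 anything the confirmation page needs must travel via the session or the query string.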

    Read the article

  • iPhone App link to iStore for commission

    - by Simon
    Is it possible to link from an iPhone application to the iStore so a user can (a) play a sample of music and then (b) navigate to that track in order to buy it? In a bit more detail: the application lists a number of tracks for a particular artist (a recommendation by the app based on user criteria). The user scrolls down the list and finds a track that they are interested in. They play the 30-second sample (as you would in the iStore) and then, if they like it, they press a link that takes them to the iStore where they can purchase the track. If they buy the track, the application gets 5% of the money paid for it. I have looked through the web and found a number of suggestions, but nothing seems to fit the specification above. I would be very grateful if anyone is able to tell me whether this is possible, and for some clues as to how it would be done. Thanks, Simon...

    Read the article

  • Simple ranking algorithm in Groovy

    - by Richard Paul
    I have a short Groovy algorithm for assigning rankings to foods based on their rating. This can be run in the Groovy console. The code works perfectly, but I'm wondering if there is a more Groovy or functional way of writing it; it would be nice to get rid of the previousItem and rank local variables if possible.

        def food = [
            [name:'Chocolate Brownie', rating:5.5, rank:null],
            [name:'Pizza', rating:3.4, rank:null],
            [name:'Icecream', rating:2.1, rank:null],
            [name:'Fudge', rating:2.1, rank:null],
            [name:'Cabbage', rating:1.4, rank:null]]

        food.sort { -it.rating }

        def previousItem = food[0]
        def rank = 1
        previousItem.rank = rank
        food.each { item ->
            if (item.rating == previousItem.rating) {
                item.rank = previousItem.rank
            } else {
                item.rank = rank
            }
            previousItem = item
            rank++
        }

        assert food[0].rank == 1
        assert food[1].rank == 2
        assert food[2].rank == 3
        assert food[3].rank == 3 // Note same rating = same rank
        assert food[4].rank == 5 // Note, 4 skipped as we have two at rank 3

    Suggestions?
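
    One way to drop both locals is to derive each item's rank from its index, reusing the previous item's rank on ties; a short sketch with the same semantics (including the skipped rank), runnable in the Groovy console against the same food list:

        food.sort { -it.rating }
        food.eachWithIndex { item, i ->
            // ties share the rank already assigned to the previous item;
            // otherwise the rank is simply the 1-based position
            item.rank = (i > 0 && item.rating == food[i - 1].rating) ? food[i - 1].rank : i + 1
        }

    It still mutates in place, but the tie handling now falls out of the index arithmetic instead of hand-carried state.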

    Read the article
