Search Results

Search found 19305 results on 773 pages for 'above the gods'.


  • openmp in mex : stackoverflow error

    - by Edwin
    I have the following fragment of code that is giving me a stack overflow error:

        #pragma omp parallel shared(Mo1, Mo2, sum_normalized_p_gn, Data, Mean_Out, Covar_Out, Prior_Out, det) private(i) num_threads(number_threads)
        {
            // every thread has a new copy
            double* normalized_p_gn = (double*)malloc(NMIX * sizeof(double));

            #pragma omp critical
            {
                int id = omp_get_thread_num();
                int threads = omp_get_num_threads();
                mexEvalString("drawnow");
            }

            #pragma omp for
            // some parallel process.....
        }

    The variables declared shared are created by malloc, and they consume a large amount of memory. There are two questions regarding the above code:

    1) Why does this generate the stack overflow error (i.e. segmentation fault) before it enters the parallel for loop? It works fine when it runs in sequential mode.

    2) Am I right to dynamically allocate memory for each thread, as with "normalized_p_gn" above?

    Regards, Edwin

    Read the article

  • Can you help me think of problems for my programming language?

    - by I can't tell you my name.
    I've created an experimental toy programming language with a (now) working interpreter. It is Turing-complete and has a pretty low-level instruction set. Even if everything takes four to six times more code and time than in PHP, Python or Ruby, I still love programming all kinds of things in it. So I got the "basic" things that are written in many languages working:

        - Hello World
        - Input - Output
        - Countdowns (not as easy as you think, as there are no loops)
        - Factorials
        - Array emulation
        - 99 Bottles of Beer (simple, wrong inflection)
        - 99 Bottles of Beer (canonical)
        - Collatz conjecture
        - Quine (that was a fun one!)
        - Brainf*ck interpreter (to prove Turing-completeness; made me happy)

    I implemented all of the above examples because they all used many different aspects of the language, they are pretty interesting, and they don't take hours to write. Now my problem is: I've run out of ideas! I can't find any more examples of problems I could solve using my language. Do you have any programming problems which fit some of the criteria above for me to work out?

    Read the article

  • iOS6 Twitter integration

    - by Peter Warbo
    There seems to be a difference between the iPhone simulator and an actual device when checking if Twitter is available. I check if a Twitter account is set up by using this code:

        [SLComposeViewController isAvailableForServiceType:SLServiceTypeTwitter];

    In the simulator there is a nice UIAlertView informing the user that there are no Twitter accounts set up, with two buttons, one for Settings and one for Cancel. However, when I run my app on my device it will not show the above UIAlertView. Why is that? And how can I catch which button is tapped in the above UIAlertView (since I did not instantiate it)? This is what it looks like on the simulator:

    Read the article

  • Encoding problem with preg_replace() and scandir()

    - by itarato
    Hi, on OS X (PHP 5.2.11) I have a file siësta.doc (and a thousand others with Unicode filenames), and I want to convert the file names to a web-consumable format (a-zA-Z0-9.). If I hardcode the file name above, I can do the right conversion:

        <?php
        $file = 'siësta.doc';
        echo preg_replace("/[^a-zA-Z0-9.]/u", '_', $file);
        // Output: si_sta.doc
        ?>

    But if I read the file names with scandir, I get strange conversions:

        <?php
        $files = scandir(DIRNAME);
        foreach ($files as $file) {
            echo preg_replace("/[^a-zA-Z0-9.]/u", '_', $file);
            // Output for the file above: sie_sta.doc
        }
        ?>

    I tried to detect the encoding, set the encoding, and convert it with the iconv functions. I tried the mb_ functions also. But it only got worse. What did I do wrong? Thanks in advance.
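    A likely explanation (an assumption, not stated in the question): HFS+ stores file names in decomposed Unicode (NFD), so scandir() returns "ë" as a plain "e" plus a combining diaeresis. The regex then strips only the invisible combining mark and keeps the "e", which yields sie_sta.doc. Normalizing the name back to composed form (NFC) before filtering should fix it; in PHP the intl extension's Normalizer class provides this operation. A minimal Python sketch of the effect:

        import re
        import unicodedata

        name = u"si\u00ebsta.doc"                  # "siësta.doc", composed (NFC)
        nfd = unicodedata.normalize("NFD", name)   # how the filesystem may hand it back
        print(len(name), len(nfd))                 # 10 11 -- NFD carries an extra combining mark

        # Re-compose before filtering, so the whole character is replaced at once:
        nfc = unicodedata.normalize("NFC", nfd)
        print(re.sub(r"[^a-zA-Z0-9.]", "_", nfc))  # si_sta.doc, as intended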

    Read the article

  • In MySQL, what is the most effective query design for joining large tables with many to many relationships?

    - by lighthouse65
    In our application, we collect data on automotive engine performance -- basically source data on engine performance based on the engine type, the vehicle running it and the engine design. Currently, the basis for new row inserts is an engine on-off period; we monitor performance variables based on a change in engine state from active to inactive and vice versa. The related engineState table looks like this:

        +---------+--------+--------------+---------------------+---------------------+-----------------+
        | vehicle | engine | engine_state | state_start_time    | state_end_time      | engine_variable |
        +---------+--------+--------------+---------------------+---------------------+-----------------+
        | 080025  | E01    | active       | 2008-01-24 16:19:15 | 2008-01-24 16:24:45 |             720 |
        | 080028  | E02    | inactive     | 2008-01-24 16:19:25 | 2008-01-24 16:22:17 |             304 |
        +---------+--------+--------------+---------------------+---------------------+-----------------+

    For a specific analysis, we would like to analyze table content based on a row granularity of minutes, rather than the current basis of active/inactive engine state. For this, we are thinking of creating a simple productionMinute table with a row for each minute in the period we are analyzing, and joining the productionMinute and engineEvent tables on the date-time columns in each table. So if our period of analysis is from 2009-12-01 to 2010-02-28, we would create a new table with 129,600 rows, one for each minute of each day for that three-month period. The first few rows of the productionMinute table:

        +-------------------+
        | production_minute |
        +-------------------+
        | 2009-12-01 00:00  |
        | 2009-12-01 00:01  |
        | 2009-12-01 00:02  |
        | 2009-12-01 00:03  |
        +-------------------+

    The join between the tables would be:

        engineState AS es
        LEFT JOIN productionMinute AS pm
               ON es.state_start_time <= pm.production_minute
              AND pm.production_minute <= es.state_end_time

    This join, however, brings up multiple environmental issues:

        - The engineState table has 5 million rows and the productionMinute table has 130,000 rows.
        - When an engineState row spans more than one minute (i.e. the difference between es.state_start_time and es.state_end_time is greater than one minute), as is the case in the example above, multiple productionMinute rows join to a single engineState row.
        - When there is more than one engine in operation during any given minute, also as per the example above, multiple engineState rows join to a single productionMinute row.

    In testing our logic, using only a small extract (one day rather than three months for the productionMinute table), the query takes over an hour to generate. In researching this in order to improve performance so that it would be feasible to query three months of data, our thought was to create a temporary table from the engineEvent one, eliminating any table data that is not critical for the analysis, and to join the temporary table to the productionMinute table. We are also planning to experiment with different joins -- specifically an inner join -- to see if that would improve performance.

    What is the best query design for joining tables with the many:many relationship between the join predicates as outlined above? What is the best join type (left/right, inner)?
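    One way to sidestep the range join entirely (a sketch of an alternative, not the poster's schema) is to expand each state interval into discrete minute buckets in application code and aggregate on exact keys, which turns the non-equijoin into a cheap hash lookup. The row data below is hypothetical, mirroring the engineState example:

        from datetime import datetime, timedelta
        from collections import defaultdict

        def minutes_between(start, end):
            """Yield each minute bucket an interval touches."""
            t = start.replace(second=0, microsecond=0)
            while t <= end:
                yield t
                t += timedelta(minutes=1)

        # hypothetical rows: (vehicle, engine, state_start_time, state_end_time, engine_variable)
        rows = [
            ("080025", "E01", datetime(2008, 1, 24, 16, 19, 15),
                              datetime(2008, 1, 24, 16, 24, 45), 720),
        ]

        per_minute = defaultdict(list)
        for vehicle, engine, start, end, value in rows:
            for minute in minutes_between(start, end):
                per_minute[minute].append((vehicle, engine, value))
        # per_minute now holds every engine active in each minute bucket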

    Read the article

  • Why does calling the YUI Datatable showCellEditor not display the editor?

    - by t-boy
    Clicking on the second cell (any row) in the datatable causes the cell editor to display. But I am trying to display the cell editor from code. The code looks like the following:

        var firstEl = oDataTable.getFirstTdEl(rowIndex);
        var secondCell = oDataTable.getNextTdEl(firstEl);
        oDataTable.showCellEditor(secondCell);

    When I debug into the datatable.js code (either with a click or from the code above), it follows the same path through the showCellEditor function, but the above code will not display the editor. I am using YUI version 2.8.0r4.

    Read the article

  • Trying to speed up a SQLITE UNION QUERY

    - by user142683
    I have the SQLite code below:

        SELECT x.t,
               CASE WHEN S.Status = 'A' AND M.Nomorebets = 0
                    THEN S.PriceText
                    ELSE '-'
               END AS Show_Price
        FROM sb_Market M
        LEFT OUTER JOIN (select 2010 t union select 2020 t union select 2030 t
                         union select 2040 t union select 2050 t union select 2060 t
                         union select 2070 t) as x
        LEFT OUTER JOIN sb_Selection S
               ON S.MeetingId = M.MeetingId
              AND S.EventId = M.EventId
              AND S.MarketId = M.MarketId
              AND x.t = S.team
        WHERE M.meetingid = 8051
          AND M.eventid = 3
          AND M.Name = 'Correct Score'

    With the current interface restrictions, I have to use the above code to ensure that if one selection is missing, a '-' appears. The feed would be something like the following:

        SelectionId  Name    Team  Status  PriceText
        ============================================
        1            Barney  2010  A       10
        2            Jim     2020  A       5
        3            Matt    2030  A       6
        4            John    2040  A       8
        5            Paul    2050  A       15/2
        6            Frank   2060  S       10/11
        7            Tom     2070  A       15

    Is the above SQL code the quickest and most efficient approach? Please advise of anything that could help. Messages with updates would be preferable.

    Read the article

  • Best practices for managing limited client licenses/login

    - by MicSim
    I have a multi-user software solution (containing different applications, i.e. EXEs) that should allow only a limited number of concurrent users. It's designed to run in an intranet. I don't have a really good, satisfactory solution to the problem of counting the client licenses yet. The key requirements are:

        - Multiple instances (starts) of the same application (= process) should count as only one client licence.
        - Starting different applications of the software solution should also count as only one (the same) client licence.
        - An application crash should not lead to orphaned used licences.
        - The above should also work in Terminal Server environments (all clients have the same IP, but different install folders).

    I'm looking for established patterns, solutions, and tips for managing used client licenses. Specific hints for the above situation are also welcome.
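    One common pattern (an illustration, not a drop-in solution; all names here are hypothetical): a central service hands out per-user leases that expire unless renewed by a heartbeat, so crashed clients free their seat automatically, and keying the lease on user + machine rather than IP means multiple processes count once and Terminal Server sessions stay distinct. A minimal in-memory sketch:

        import time

        LEASE_TTL = 30  # seconds without a heartbeat before a seat is reclaimed

        class LicensePool:
            def __init__(self, max_seats):
                self.max_seats = max_seats
                self.leases = {}  # client key (e.g. user + machine) -> last heartbeat

            def _purge(self):
                """Drop leases whose owner stopped sending heartbeats (e.g. crashed)."""
                now = time.time()
                self.leases = {k: t for k, t in self.leases.items()
                               if now - t < LEASE_TTL}

            def heartbeat(self, client_key):
                """Acquire or renew a seat; returns False if the pool is full."""
                self._purge()
                if client_key in self.leases or len(self.leases) < self.max_seats:
                    self.leases[client_key] = time.time()
                    return True
                return False

        pool = LicensePool(max_seats=5)
        # every running EXE of a client sends the same key, so they share one seat:
        assert pool.heartbeat("alice@TERMSRV01")
        assert pool.heartbeat("alice@TERMSRV01")   # second app, same licence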

    Read the article

  • Is it possible to use ContainsTable to get results for more than one column?

    - by LockeCJ
    Consider the following table:

        People
            FirstName nvarchar(50)
            LastName  nvarchar(50)

    Let's assume for the moment that this table has a full-text index on it for both columns, and suppose that I wanted to find all of the people named "John Smith" in this table. The following query seems like a perfectly rational way to accomplish this:

        SELECT * from People p
        INNER JOIN CONTAINSTABLE(People, *, '"John*" AND "Smith*"')

    Unfortunately, this will return no results, assuming that there is no record in the People table that contains both "John" and "Smith" in either the FirstName or LastName columns. It will not match a record with "John" in the FirstName column and "Smith" in the LastName column, or vice versa. My question is this: how does one accomplish what I'm trying to do above? Please consider that the example above is simplified. The real table I'm working with has ten columns, and the input I'm receiving is a single string which is split up based on standard word breakers (space, dash, etc.).

    Read the article

  • How to limit / throttle bandwidth with *multiple* connections

    - by Led
    I'm writing an app in C# that downloads concurrently (in different threads) using multiple connections to multiple servers, and I'd like to be able to limit the used bandwidth. For a single connection the solution would be simple; I'd use the solution posted here, which calculates a sleep time for the single connection: http://www.codeproject.com/KB/IP/Bandwidth_throttling.aspx

    I'd like to know the best way to do this for multiple connections. Using the ThrottledStream posted above and dividing the bandwidth (say 2 MB/s) evenly among the connections isn't right: if I had 3 very slow connections and 1 very fast one, they'd all be capped to 512 kB/s, so the fast one wouldn't go above 512 kB/s and the other 3 wouldn't even reach that. The preferred solution, I think, is to cap only the fastest connection(s) so the slower connections are used optimally. Does anyone have any experience with this, example code, or any advice?
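    The usual way to cap the aggregate rather than the per-connection rate (a sketch of the idea, not tuned code) is a single token bucket shared by every download thread: each reader withdraws tokens before consuming bytes, so fast connections naturally soak up whatever budget the slow ones leave idle. Shown here in Python for brevity; the structure carries over to a C# class with a lock:

        import threading
        import time

        class TokenBucket:
            """A thread-safe bucket shared by every connection thread."""
            def __init__(self, rate_bytes_per_sec):
                self.rate = rate_bytes_per_sec
                self.tokens = 0.0
                self.last = time.monotonic()
                self.lock = threading.Lock()

            def consume(self, n):
                """Block until n bytes' worth of tokens are available.

                Assumes chunks are at most one second's worth of bandwidth,
                since the bucket never holds more than `rate` tokens.
                """
                while True:
                    with self.lock:
                        now = time.monotonic()
                        self.tokens = min(self.rate,
                                          self.tokens + (now - self.last) * self.rate)
                        self.last = now
                        if self.tokens >= n:
                            self.tokens -= n
                            return
                        wait = (n - self.tokens) / self.rate
                    time.sleep(wait)

        # every connection thread calls bucket.consume(len(chunk)) after each read
        bucket = TokenBucket(2 * 1024 * 1024)  # 2 MB/s aggregate cap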

    Read the article

  • How do you use the LINQ to SQL designer to generate accessor methods for subclasses?

    - by Pricey
    Above is the LINQ to SQL designer view for my data context. Below is the relevant code:

        public System.Data.Linq.Table<ActivityBase> ActivityBases
        {
            get { return this.GetTable<ActivityBase>(); }
        }

        ...

        [Table(Name="dbo.Activities")]
        [InheritanceMapping(Code="1", Type=typeof(ActivityBase), IsDefault=true)]
        [InheritanceMapping(Code="2", Type=typeof(Project))]
        [InheritanceMapping(Code="3", Type=typeof(ProjectActivity))]
        [InheritanceMapping(Code="5", Type=typeof(Task))]
        [InheritanceMapping(Code="4", Type=typeof(Activity))]
        public abstract partial class ActivityBase : INotifyPropertyChanging, INotifyPropertyChanged
        {
            ...

    Is there a way to generate accessor methods for the subclasses shown in the inheritance mapping above (Project, Task, etc.) without doing it manually? I added them manually, but then a change in the designer overwrites any manual changes. Am I doing this wrong? Should I not be making accessors for the subclasses? Filtering from ActivityBase seems worse to me. Thanks for any help on this.

    Read the article

  • Implicitly invoking parent class initializer

    - by Matt Joiner
    Here is the code in question:

        class A(object):
            def __init__(self, a, b, c):
                #super(A, self).__init__()
                super(self.__class__, self).__init__()

        class B(A):
            def __init__(self, b, c):
                print super(B, self)
                print super(self.__class__, self)
                #super(B, self).__init__(1, b, c)
                super(self.__class__, self).__init__(1, b, c)

        class C(B):
            def __init__(self, c):
                #super(C, self).__init__(2, c)
                super(self.__class__, self).__init__(2, c)

        C(3)

    In the above code, the commented-out __init__ calls appear to be the commonly accepted "smart" way to do superclass initialization. However, in the event that the class hierarchy is likely to change, I have until recently been using the uncommented form. It appears that in the call to the super constructor for B in the above hierarchy, B.__init__ is called again: self.__class__ is actually C, not B as I had always assumed. Is there some way in Python 2.x that I can overcome this and maintain a proper MRO when calling super constructors, without actually naming the current class?
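    For what it's worth, the failure mode can be made concrete: inside B.__init__, self.__class__ is C whenever the instance being constructed is a C, so super(self.__class__, self) always resolves to super(C, self), and the next class after C in C's MRO is B — hence B.__init__ calls itself forever. In Python 2.x the defining class has to be named explicitly (Python 3's zero-argument super() removes the need). A sketch of the conventional form, in the same Python 2 style as the question:

        class A(object):
            def __init__(self):
                print "A.__init__"

        class B(A):
            def __init__(self):
                # Naming B explicitly is required in Python 2.x. With
                # super(self.__class__, self) an instance of C would recurse
                # forever: self.__class__ is C, and C's MRO after C is B,
                # so B.__init__ would be re-entered on every call.
                super(B, self).__init__()

        class C(B):
            def __init__(self):
                super(C, self).__init__()

        C()  # prints "A.__init__" exactly once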

    Read the article

  • Outlook Marking Email as Junk Email

    - by robertabead
    I know, I sound like a spammer, but these emails are completely legitimate email confirmations for people who have signed up for an account on a website we developed. These emails all make it through to various mail providers (Gmail, Yahoo, AOL, Hotmail/Live), but they always get directed into the Outlook Junk Email folder. I have tried using Zend Framework mail, PEAR Mail and PHPMailer; all of those methods result in the same thing happening. This seemed to start happening after Microsoft released their update to the Outlook Junk Email filter in January of this year. Following is the code in question:

        include_once('Mail.php');
        include_once('Mail/mime.php');

        $hdrs = array(
            'From'       => "Membership <[email protected]>",
            'Subject'    => 'Test Email',
            'Reply-To'   => "[email protected]",
            'Message-ID' => "<" . str_pad(rand(0, 12345678), 8, '0', STR_PAD_LEFT) . "@mail.example.com>",
            'Date'       => date("D, j M Y H:i:s O", time()),
            'To'         => '[email protected]'
        );

        $params = array('host' => 'mail.example.com', 'auth' => false, 'localhost' => 'www.example.com', 'debug' => false);
        $crlf = "\n";

        $mime = new Mail_mime($crlf);
        $mime->setTXTBody("TEST");
        $mime->setHTMLBody("<html>\n<body>\nTest\n</body>\n</html>");

        $body = $mime->get();
        $hdrs = $mime->headers($hdrs);

        $mail =& Mail::factory('smtp', $params);
        $t = $mail->send('[email protected]', $hdrs, $body);

    As you can see, we are using the PEAR Mail functionality in this test. This is the most basic test we could run, and the above generated email gets dumped into the Outlook Junk Email folder. We have reverse DNS on the mail server and it matches the forward DNS, SPF and DKIM are set up, and there is nothing "spammy" in the above content. Can anybody see something in the above code that could cause Outlook to mark it as Junk? Thanks!

    Read the article

  • Is this an F# quotations bug?

    - by ControlFlow
    Consider this sample code with a recursive value:

        [<ReflectedDefinition>]
        let rec x = (fun() -> x + "abc") ()

    It produces the following F# compiler error:

        error FS0432: [<ReflectedDefinition>] terms cannot contain uses of the prefix splice operator '%'

    I can't see any splice operator usage in the code above; this looks like a bug... :) It seems the problem occurs only with quotation via ReflectedDefinitionAttribute; a normal quotation works well:

        let quotation = <@ let rec x = (fun() -> x + "abc") () in x @>

    This produces the expected result, with the hidden Lazy.create and Lazy.force usages:

        val quotation : Quotations.Expr<string> =
          LetRecursive
            ([(x, Lambda (unitVar,
                  Application (Lambda (unitVar0,
                      Call (None, String op_Addition[String,String,String](String, String),
                            [Call (None, String Force[String](Lazy`1[System.String]), [x]),
                             Value ("abc")])),
                    Value (<null>)))),
              (x, Call (None, Lazy`1[String] Create[String](FSharpFunc`2[Unit,String]), [x])),
              (x, Call (None, String Force[String](Lazy`1[String]), [x]))],
             x)

    So the question is: is this an F# compiler bug or not?

    Read the article

  • Grails Unit Tests: Why does this statement fail?

    - by leeand00
    I've developed in Java in the past, and now I'm trying to learn Grails/Groovy using this slightly dated tutorial.

        import grails.test.*

        class DateTagLibTests extends TagLibUnitTestCase {
            def dateTagLib

            protected void setUp() {
                super.setUp()
                dateTagLib = new DateTagLib()
            }

            protected void tearDown() {
                super.tearDown()
            }

            void testThisYear() {
                String expected = Calendar.getInstance().get(Calendar.YEAR)
                // NOTE: This statement fails
                assertEquals("the years dont match and I dont know why.", expected, dateTagLib.thisYear())
            }
        }

    (Note: this TagLibUnitTestCase is for Grails 1.2.1, not the version used in the tutorial.) For some reason the above test fails with:

        expected:<2010> but was:<2010>

    I've tried replacing the test above with the following alternate version, and the test passes just fine:

        void testThisYear() {
            String expected = Calendar.getInstance().get(Calendar.YEAR)
            String actual = dateTagLib.thisYear()

            // NOTE: The following two assertions work:
            assertEquals("the years don\'t match", expected, actual)
            assertTrue("the years don\'t match", expected.equals(actual))
        }

    These two versions of the test are basically the same thing, right? Unless there's something new in Grails 1.2.1 or Groovy that I'm not understanding, both values should be of the same type, since each comes from the value returned by Calendar.getInstance().get(Calendar.YEAR).

    Read the article

  • xml intellisense with Eclipse Europa?

    - by Maddy.Shik
    Please suggest some free tools, or some built-in feature of Eclipse, for XML intellisense. I am using Eclipse Europa, which is the edition for EE Java developers.

        <?xml version="1.0" encoding="UTF-8"?>
        <tests xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:noNamespaceSchemaLocation="http://jtestcase.sourceforge.net/dtd/jtestcase2.xsd">
        </tests>

    The XML code above is part of the XML file I am working on. I am expecting Eclipse to suggest tags based upon the XSD specified above, and I am using Ctrl+Space to request a suggestion for a new tag. Please tell me if I am using the wrong key for intellisense, or if I am missing some tool or plugin. Thanks.

    Read the article

  • populate a comboBox in Griffon App dynamically

    - by kulkarni
    I have 2 comboBoxes in my view of a Griffon app (or Groovy SwingBuilder):

        country = comboBox(items: country(),
                           selectedItem: bind(target: model, 'country', value: model.country),
                           actionPerformed: controller.getStates)
        state = comboBox(items: bind(source: model, sourceProperty: 'states'),
                         selectedItem: bind(target: model, 'state', value: model.state))

    The getStates() method in the controller populates "@Bindable List states = []" in the model, based on the country selected. The above code doesn't give any errors, but the states are never populated. If I change states from a List to a (dummy) range object, it gives me an error: MissingPropertyException: No such property: items for class: javax.swing.JComboBox. Am I missing something here? There are a couple of entries related to this on Nabble, but nothing is clear. The above code works if I have a label instead of a second comboBox.

    Read the article

  • Does this schema sound better suited for a document-oriented data store or relational?

    - by Blaine LaFreniere
    Disclaimer: let me know if this question is better suited for serverfault.com.

    I want to store information on music, specifically:

        - genres
        - artists
        - albums
        - songs

    This information will be used in a web application, and I want people to be able to see all of the songs associated with an album, the albums associated with an artist, and the artists associated with a genre. I'm currently using MySQL, but before I make a decision to switch I want to know:

        - How easy is scaling horizontally?
        - Is it easier to manage than an SQL-based solution?
        - Would the above data be too hard to store schema-free?

    When I think "association", I immediately think RDBMSs; can data be stored in something like CouchDB but still have some kind of association as stated above?
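    For a feel of what "schema-free with associations" can look like (an illustrative document shape, not a layout CouchDB requires), an album might embed its track list and carry references to its artist and genre by id; the "joins" then happen in application code or in precomputed views. Expressed as Python dicts for concreteness, with all ids and fields hypothetical:

        genre  = {"_id": "genre:rock", "type": "genre", "name": "Rock"}

        artist = {"_id": "artist:42", "type": "artist", "name": "Some Band",
                  "genre_ids": ["genre:rock"]}          # reference by id

        album  = {"_id": "album:7", "type": "album", "title": "First Album",
                  "artist_id": "artist:42",             # reference by id
                  "songs": [                            # embedded, read in one fetch
                      {"track": 1, "title": "Opener", "length_s": 201},
                      {"track": 2, "title": "Second", "length_s": 187},
                  ]}

    The trade-off: embedded songs come back with the album in a single read, while the id references still need a second lookup (or a view indexed by artist_id / genre_ids) to list albums per artist or artists per genre.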

    Read the article

  • python: how to convert list of lists into a single nested list

    - by Bhuski
    I have a Python list of lists as shown below:

        mylist = [
            [['orphan1', ['some value1']]],
            [['parent1', ['child1', ['child', ['some value2']]]]],
            [['parent1', ['child2', ['child', ['some value3']]]]]
        ]

    I need to convert the above list to something like this:

        result = [
            ['orphan1', ['some value1']],
            ['parent1', ['child1', ['child', ['some value2']]],
                        ['child2', ['child', ['some value3']]]]
        ]

    Kindly help me approach this problem. I have given only a simple list here; in my actual scenario the list contains even grandparents and grandchildren. However deep the input nested list is, I need to convert it to a single nested list, with common list elements (parents and grandparents) appearing only once (but the next-to-innermost list element, 'child' in the above example, should appear as many times as it occurs in the input list). I have been trying to do this for the last two days but did not end up with a working solution :(. I need to use the output in the Django template filter unordered_list, so that the resultant nested list appears as a nested unordered list in my HTML page.
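    One way to approach this (a sketch, assuming every branch ends in a single-element [value] list exactly as in the example) is to flatten each chain into a path of labels, merge the paths into a trie so shared prefixes collapse, and then emit the trie back as nested lists:

        def to_path(node):
            """Flatten one [label, [nested...]] chain into labels plus the leaf."""
            path = []
            while len(node) == 2 and isinstance(node[1], list):
                path.append(node[0])
                node = node[1]
            path.append(node)          # the innermost ['some value'] list
            return path

        LEAF = object()                # sentinel key for values under a label

        def merge(wrapped_items):
            root = {}
            for wrapper in wrapped_items:       # each input item has one extra wrapping list
                path = to_path(wrapper[0])
                node = root
                for label in path[:-1]:
                    node = node.setdefault(label, {})
                node.setdefault(LEAF, []).append(path[-1])
            return root

        def emit(label, node):
            out = [label]
            out.extend(node.get(LEAF, []))      # values attached directly to this label
            for key, child in node.items():
                if key is not LEAF:
                    out.append(emit(key, child))
            return out

        result = [emit(label, child) for label, child in merge(mylist).items()]

    On the sample input this yields exactly the desired result shown above; note that plain dicts keep insertion order only on Python 3.7+, so older interpreters would want collections.OrderedDict to preserve the input ordering.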

    Read the article

  • fuzzy implementation for capturing specific strings

    - by kasun-456
    I am going to develop a web crawler in Java to capture hotel room prices from hotel websites. In this case I want to capture the room price together with the room type and the meal type, so my algorithm should be intelligent about that. For example:

        Room type: Delux
        Meal type: HalfBoard
        Price:     $20.00

    The main problem is that room prices can be presented in many different ways on different hotel sites, so my algorithm should be independent of any particular site. I plan to use the above room types and meal types as fuzzy sets and compare the words in the web page against those fuzzy sets using a suitable membership function. Has anyone experience with this, or an idea for my problem?
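    As a starting point for the membership function (a sketch; the term lists are invented for illustration, and in Java a similar edit-distance ratio can be computed with any string-similarity routine), a word's degree of membership in a fuzzy set can be taken as its best similarity ratio against the set's terms, which tolerates misspellings such as "Delux" or "HalfBoad":

        import difflib

        ROOM_TYPES = ["deluxe", "standard", "suite", "superior"]          # hypothetical
        MEAL_TYPES = ["half board", "full board", "bed and breakfast"]    # hypothetical

        def membership(word, fuzzy_set):
            """Degree of membership in [0, 1]: best similarity ratio against the set."""
            return max(difflib.SequenceMatcher(None, word.lower(), term).ratio()
                       for term in fuzzy_set)

        print(membership("Delux", ROOM_TYPES))     # ~0.91, close to "deluxe"
        print(membership("HalfBoad", MEAL_TYPES))  # ~0.89, still matches "half board"

    A page token would then be assigned to the set where its membership exceeds some threshold (say 0.8), with the maximum deciding ties.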

    Read the article

  • Problem with stackless python, cannot write to a dict

    - by ANON
    I have a simple map-reduce type algorithm which I want to implement in Python, making use of multiple cores. I read somewhere that threads using the native thread module in 2.6 don't make use of multiple cores. Is that true? I even implemented it using Stackless Python, however I keep running into weird errors. [Update: a quick search showed that Stackless does not allow multiple cores, so are there any other alternatives?]

        def Propagate(start, end):
            print "running Thread with range: ", start, end

            def maxVote(nLabels):
                count = {}
                maxList = []
                maxCount = 0
                for nLabel in nLabels:
                    if nLabel in count:
                        count[nLabel] += 1
                    else:
                        count[nLabel] = 1
                    # Check if the count is max
                    if count[nLabel] > maxCount:
                        maxCount = count[nLabel]
                        maxList = [nLabel]
                    elif count[nLabel] == maxCount:
                        maxList.append(nLabel)
                return random.choice(maxList)

            for num in range(start, end):
                node = MapList[num]
                nLabels = [Label[k] for k in Adj[node]]
                if nLabels != []:
                    Label[node] = maxVote(nLabels)
                else:
                    Label[node] = node

    However, in the above code the values assigned to Label — that is, the changes to the dictionary — are lost. The Propagate function above is used as the callable for microthreads (i.e. tasklets).
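    On the threading question: in CPython the global interpreter lock keeps pure-Python threads from running on multiple cores, and Stackless tasklets are cooperative within one thread, so neither helps here. The standard alternative for CPU-bound work is the multiprocessing module: each worker is a separate process with its own memory, which also explains the lost updates above — a worker cannot mutate the parent's Label dict, so results must be returned and merged. A hedged sketch (propagate_range and the chunk ranges are placeholders, not the poster's data):

        from multiprocessing import Pool

        def propagate_range(args):
            start, end = args
            updates = {}
            # ... compute new labels for nodes in [start, end) here ...
            return updates          # return results instead of mutating a global dict

        if __name__ == "__main__":
            pool = Pool()                        # one worker per core by default
            chunks = [(0, 1000), (1000, 2000)]   # hypothetical ranges
            merged = {}
            for updates in pool.map(propagate_range, chunks):
                merged.update(updates)           # fold each worker's results into Label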

    Read the article

  • Does anyone really understand how HFSC scheduling in Linux/BSD works?

    - by Mecki
    I read the original SIGCOMM '97 PostScript paper about HFSC. It is very technical, but I understand the basic concept: instead of giving a linear service curve (as with pretty much every other scheduling algorithm), you can specify a convex or concave service curve, and thus it is possible to decouple bandwidth and delay. However, even though this paper mentions the two kinds of scheduling algorithms being used (real-time and link-share), it always only mentions ONE curve per scheduling class (the decoupling is done by specifying this curve; only one curve is needed for that).

    Now HFSC has been implemented for BSD (OpenBSD, FreeBSD, etc.) using the ALTQ scheduling framework, and it has been implemented for Linux using the TC scheduling framework (part of iproute2). Both implementations added two additional service curves that were NOT in the original paper: a real-time service curve and an upper-limit service curve. Again, please note that the original paper mentions two scheduling algorithms (real-time and link-share), but in that paper both work with one single service curve. There never have been two independent service curves for either one, as you currently find in BSD and Linux.

    Even worse, some versions of ALTQ seem to add an additional queue priority to HFSC (there is no such thing as priority in the original paper either). I found several BSD HowTos mentioning this priority setting (even though the man page of the latest ALTQ release knows no such parameter for HFSC, so officially it does not even exist).

    This all makes HFSC scheduling even more complex than the algorithm described in the original paper, and there are tons of tutorials on the Internet that often contradict each other, one claiming the opposite of the other. This is probably the main reason why nobody really seems to understand how HFSC scheduling really works. Before I can ask my questions, we need a sample setup of some kind. I'll use a very simple one, as seen in the image below. Here are some questions I cannot answer because the tutorials contradict each other:

        1. What do I need a real-time curve for at all? Assuming A1, A2, B1, B2 are all 128 kbit/s link-share (no real-time curve for either one), then each of those will get 128 kbit/s if the root has 512 kbit/s to distribute (and A and B are both 256 kbit/s, of course), right? Why would I additionally give A1 and B1 a real-time curve with 128 kbit/s? What would this be good for? To give those two a higher priority? According to the original paper I can give them a higher priority by using a curve; that's what HFSC is all about, after all. By giving both classes a curve of [256 kbit/s 20 ms 128 kbit/s], both automatically have twice the priority of A2 and B2 (while still only getting 128 kbit/s on average).

        2. Does the real-time bandwidth count towards the link-share bandwidth? E.g. if A1 and B1 both only have 64 kbit/s real-time and 64 kbit/s link-share bandwidth, does that mean once they are served 64 kbit/s via real-time, their link-share requirement is satisfied as well (they might get excess bandwidth, but let's ignore that for a second), or does that mean they get another 64 kbit/s via link-share? So does each class have a bandwidth "requirement" of real-time plus link-share? Or does a class only have a higher requirement than the real-time curve if the link-share curve is higher than the real-time curve (current link-share requirement equals specified link-share requirement minus real-time bandwidth already provided to this class)?

        3. Is the upper-limit curve applied to real-time as well, only to link-share, or maybe to both? Some tutorials say one way, some say the other. Some even claim upper-limit is the maximum for real-time bandwidth plus link-share bandwidth. What is the truth?

        4. Assuming A2 and B2 are both 128 kbit/s, does it make any difference whether A1 and B1 are 128 kbit/s link-share only, or 64 kbit/s real-time and 128 kbit/s link-share, and if so, what difference?

        5. If I use the separate real-time curve to increase the priorities of classes, why would I need "curves" at all? Why is real-time not a flat value, and link-share also a flat value? Why are both curves? The need for curves is clear in the original paper, because there is only one attribute of that kind per class. But now, having three attributes (real-time, link-share, and upper-limit), what do I still need curves on each one for? Why would I want the curves' shapes (not their average bandwidth, but their slopes) to be different for real-time and link-share traffic?

        6. According to the little documentation available, real-time curve values are totally ignored for inner classes (class A and B); they are only applied to leaf classes (A1, A2, B1, B2). If that is true, why does the ALTQ HFSC sample configuration (search for "3.3 Sample configuration") set real-time curves on inner classes and claim that those set the guaranteed rate of those inner classes? Isn't that completely pointless? (Note: pshare sets the link-share curve in ALTQ and grate the real-time curve; you can see this in the paragraph above the sample configuration.)

        7. Some tutorials say the sum of all real-time curves may not be higher than 80% of the line speed, others say it must not be higher than 70% of the line speed. Which one is right, or are they maybe both wrong?

        8. One tutorial said you should forget all the theory. No matter how things really work (schedulers and bandwidth distribution), imagine the three curves according to the following "simplified mind model": real-time is the guaranteed bandwidth that this class will always get; link-share is the bandwidth that this class wants to become fully satisfied, but satisfaction cannot be guaranteed (in case there is excess bandwidth, the class might even be offered more bandwidth than necessary to become satisfied, but it may never use more than upper-limit says); for all this to work, the sum of all real-time bandwidths may not be above xx% of the line speed (see the question above; the percentage varies). Question: is this more or less accurate, or a total misunderstanding of HFSC?

        9. And if the assumption above really is accurate, where is prioritization in that model? E.g. every class might have a real-time bandwidth (guaranteed), a link-share bandwidth (not guaranteed) and maybe an upper-limit, but still some classes have higher priority needs than other classes. In that case I must still prioritize somehow, even among the real-time traffic of those classes. Would I prioritize by the slope of the curves? And if so, which curve? The real-time curve? The link-share curve? The upper-limit curve? All of them? Would I give all of them the same slope, or each a different one, and how do I find out the right slope?

    I still haven't lost hope that there exists at least a handful of people in this world who really understand HFSC and are able to answer all these questions accurately. And doing so without contradicting each other in the answers would be really nice ;-)

    Read the article

  • excel import query error

    - by pmms
    mysql_connect("localhost","root",""); mysql_select_db("hitnrunf_db"); $result=mysql_query("select * from jos_users INTO OUTFILE 'users.csv' FIELDS ESCAPED BY '""' TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\n' "); header("Content-type: text/plain"); header("Content-Disposition: attachment; filename=your_desired_name.xls"); header("Content-Transfer-Encoding: binary"); header("Pragma: no-cache"); header("Expires: 0"); print "$header\n$data"; in the above code in query string i.e string in side mysql_quey we are getting following error Parse error: syntax error, unexpected T_CONSTANT_ENCAPSED_STRING in C:\wamp\www\samples\mysql_excel\exel_outfile.php on line 8 in query string '\n' charter is not identifying as string thats why above error getting

    Read the article

  • Creating multiple log files of different content with log4j

    - by Daniel
    Is there a way to configure log4j so that it outputs different levels of logging to different appenders? I'm trying to set up multiple log files. The main log file would catch all INFO and above messages for all classes (in development, it would catch all DEBUG and above messages, and TRACE for specific classes). Then I would like to have a separate log file: that log file would catch all DEBUG messages for a specific subset of classes, and ignore all messages for any other class. Is there a way to get what I'm after? Thanks, Dan
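    In log4j 1.x the shape of the answer is a per-appender threshold (or a level filter) combined with per-logger levels. Since the mechanism is easier to show runnable than a config file, here is the same wiring expressed in Python's logging module — an analogy where handlers play the role of appenders, not log4j itself; the file names and logger names are invented:

        import logging

        root = logging.getLogger()
        root.setLevel(logging.DEBUG)          # let the handlers decide what to keep

        main = logging.FileHandler("main.log")
        main.setLevel(logging.INFO)           # main log: INFO and above, all classes

        debug = logging.FileHandler("subset-debug.log")
        debug.setLevel(logging.DEBUG)
        debug.addFilter(logging.Filter("myapp.subset"))  # only this logger subtree

        root.addHandler(main)
        root.addHandler(debug)

        logging.getLogger("myapp.subset").debug("only in subset-debug.log")
        logging.getLogger("myapp.other").info("only in main.log")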

    Read the article
