Search Results

Search found 30 results on 2 pages for 'underflow'.


  • Who architected / designed C++'s IOStreams, and would it still be considered well-designed by today's standards?

    - by stakx
    First off, it may seem that I'm asking for subjective opinions, but that's not what I'm after. I'd love to hear some well-grounded arguments on this topic.

    In the hope of getting some insight into how a modern streams / serialization framework ought to be designed, I recently got myself a copy of the book Standard C++ IOStreams and Locales by Angelika Langer and Klaus Kreft. I figured that if IOStreams wasn't well-designed, it wouldn't have made it into the C++ standard library in the first place. After having read various parts of this book, I am starting to have doubts whether IOStreams can compare to e.g. the STL from an overall architectural point of view. Read e.g. this interview with Alexander Stepanov (the STL's "inventor") to learn about some design decisions that went into the STL.

    What surprises me in particular:

    It seems to be unknown who was responsible for IOStreams' overall design (I'd love to read some background information about this; does anyone know good resources?).

    Once you delve beneath the immediate surface of IOStreams, e.g. if you want to extend IOStreams with your own classes, you get to an interface with fairly cryptic and confusing member function names, e.g. getloc/imbue, uflow/underflow, snextc/sbumpc/sgetc/sgetn, pbase/pptr/epptr (and there are probably even worse examples). This makes it so much harder to understand the overall design and how the individual parts cooperate. Even the book I mentioned above doesn't help that much (IMHO).

    Thus my question: if you had to judge by today's software engineering standards (if there actually is any general agreement on these), would C++'s IOStreams still be considered well-designed? (I wouldn't want to improve my software design skills from something that's generally considered outdated.)
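    To make the "cryptic surface" concrete for readers who haven't extended IOStreams themselves: below is a minimal, simplified custom streambuf (not from the thread; the class name and the fixed-string buffer are illustrative only) showing the kind of protected virtual interface the question refers to, with underflow() as the refill hook and setg() installing the tersely named eback/gptr/egptr pointers.

        #include <iostream>
        #include <streambuf>
        #include <string>

        // Minimal read-only streambuf over a fixed character range.
        class RangeBuf : public std::streambuf {
        public:
            RangeBuf(const char* begin, const char* end) {
                // setg() installs the "get area": eback (start), gptr
                // (current position), egptr (end) -- three of the cryptic
                // names the question complains about.
                setg(const_cast<char*>(begin),
                     const_cast<char*>(begin),
                     const_cast<char*>(end));
            }
        protected:
            // underflow() is called when the get area is exhausted; a real
            // streambuf would refill its buffer here. This one has no
            // further source, so it reports end-of-stream.
            int_type underflow() override {
                return gptr() == egptr()
                           ? traits_type::eof()
                           : traits_type::to_int_type(*gptr());
            }
        };

        int main() {
            const char text[] = "streams and locales";
            RangeBuf buf(text, text + sizeof(text) - 1);
            std::istream in(&buf);   // any istream can wrap a streambuf
            std::string word;
            while (in >> word)
                std::cout << word << '\n';
        }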

    Read the article

  • How to implement a simple queue properly?

    - by Stephen Hsu
    The current Go library doesn't provide a queue container. To implement a simple queue, I use a circular array as the underlying data structure. It follows the algorithms mentioned in TAOCP:

        Insert Y into queue X: X[R] <- Y; R <- (R+1) % M; if R = F then OVERFLOW.
        Delete Y from queue X: if F = R then UNDERFLOW; Y <- X[F]; F <- (F+1) % M.

    (F: front, R: rear, M: array length.)

    Following is the code:

        package main

        import (
            "fmt"
        )

        type Queue struct {
            len        int
            head, tail int
            q          []int
        }

        func New(n int) *Queue {
            return &Queue{n, 0, 0, make([]int, n)}
        }

        func (p *Queue) Enqueue(x int) bool {
            p.q[p.tail] = x
            p.tail = (p.tail + 1) % p.len
            return p.head != p.tail
        }

        func (p *Queue) Dequeue() (int, bool) {
            if p.head == p.tail {
                return 0, false
            }
            x := p.q[p.head]
            p.head = (p.head + 1) % p.len
            return x, true
        }

        func main() {
            q := New(10)
            for i := 1; i < 13; i++ {
                fmt.Println(i, q.Enqueue(i))
            }
            fmt.Println()
            for i := 1; i < 13; i++ {
                fmt.Println(q.Dequeue())
            }
        }

    But the output is obviously wrong:

        1 true
        2 true
        3 true
        4 true
        5 true
        6 true
        7 true
        8 true
        9 true
        10 false
        11 true
        12 true

        11 true
        12 true
        0 false
        0 false
        0 false
        0 false
        0 false
        0 false
        0 false
        0 false
        0 false
        0 false

    I think I need one more field to make the code work properly. What do you suggest?
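    One plausible fix, sketched below (the count field, the overflow check, and the cosmetic rename of len to size are this sketch's additions, not from the thread): with head and tail alone, head == tail means both "empty" and "full", and Enqueue overwrites when the ring wraps. Tracking the element count explicitly removes the ambiguity. These declarations drop into the question's program in place of the originals:

        // Queue with an explicit element count.
        type Queue struct {
            size       int // capacity of the ring buffer
            count      int // number of elements currently stored
            head, tail int
            q          []int
        }

        func New(n int) *Queue {
            return &Queue{size: n, q: make([]int, n)}
        }

        func (p *Queue) Enqueue(x int) bool {
            if p.count == p.size { // OVERFLOW: refuse instead of overwriting
                return false
            }
            p.q[p.tail] = x
            p.tail = (p.tail + 1) % p.size
            p.count++
            return true
        }

        func (p *Queue) Dequeue() (int, bool) {
            if p.count == 0 { // UNDERFLOW: queue is empty
                return 0, false
            }
            x := p.q[p.head]
            p.head = (p.head + 1) % p.size
            p.count--
            return x, true
        }

    With this change the sample main() reports false for the 11th and 12th Enqueue and then dequeues exactly 1 through 10. The classic alternative is to keep only head/tail and sacrifice one slot, treating (tail+1) % size == head as full.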

    Read the article

  • An error saying unidentified function "push", "pop", and "display" occurs; what should I add to fix it?

    - by Alesha Aris
    #include<stdio.h>
    #include<iostream.h>
    #include<conio.h>
    #include<stdlib.h>
    #include<fstream.h>

    #define MAX 5

    int top = -1;
    int stack_arr[MAX];

    main()
    {
        int choice;
        while(1)
        {
            printf("1.Push\n");
            printf("2.Pop\n");
            printf("3.Display\n");
            printf("4.Quit\n");
            printf("Enter your choice : ");
            scanf("%d",&choice);
            switch(choice)
            {
                case 1 : push(); break;
                case 2: pop(); break;
                case 3: display(); break;
                case 4: exit(1);
                default: printf("Wrong choice\n");
            }/*End of switch*/
        }/*End of while*/
    }/*End of main()*/

    push()
    {
        int pushed_item;
        if(top == (MAX-1))
            printf("Stack Overflow\n");
        else
        {
            printf("Enter the item to be pushed in stack : ");
            scanf("%d",&pushed_item);
            top=top+1;
            stack_arr[top] = pushed_item;
        }
    }/*End of push()*/

    pop()
    {
        if(top == -1)
            printf("Stack Underflow\n");
        else
        {
            printf("Popped element is : %d\n",stack_arr[top]);
            top=top-1;
        }
    }/*End of pop()*/

    display()
    {
        int i;
        if(top == -1)
            printf("Stack is empty\n");
        else
        {
            printf("Stack elements :\n");
            for(i = top; i >=0; i--)
                printf("%d\n", stack_arr[i] );
        }
    }/*End of display()*/
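    The errors come from main() calling push(), pop() and display() before the compiler has seen any declaration of them; old compilers such as Turbo C report this as an undefined/unidentified function. A sketch of the usual fix (it also drops the non-standard iostream.h/conio.h/fstream.h headers, which this program never uses, and gives every function an explicit type):

        #include <stdio.h>
        #include <stdlib.h>

        #define MAX 5

        /* Prototypes before main(): this is what the original listing is
           missing, and what makes the "unidentified function" errors go away. */
        void push(void);
        void pop(void);
        void display(void);

        int top = -1;
        int stack_arr[MAX];

        int main(void)
        {
            int choice;
            while (1) {
                printf("1.Push\n2.Pop\n3.Display\n4.Quit\n");
                printf("Enter your choice : ");
                if (scanf("%d", &choice) != 1)
                    exit(1);
                switch (choice) {
                case 1: push(); break;
                case 2: pop(); break;
                case 3: display(); break;
                case 4: exit(0);
                default: printf("Wrong choice\n");
                }
            }
        }

        void push(void)
        {
            int pushed_item;
            if (top == MAX - 1) {
                printf("Stack Overflow\n");
                return;
            }
            printf("Enter the item to be pushed in stack : ");
            if (scanf("%d", &pushed_item) == 1)
                stack_arr[++top] = pushed_item;
        }

        void pop(void)
        {
            if (top == -1) {
                printf("Stack Underflow\n");
                return;
            }
            printf("Popped element is : %d\n", stack_arr[top--]);
        }

        void display(void)
        {
            int i;
            if (top == -1) {
                printf("Stack is empty\n");
                return;
            }
            printf("Stack elements :\n");
            for (i = top; i >= 0; i--)
                printf("%d\n", stack_arr[i]);
        }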

    Read the article

  • VS2008 Link Error Using SafeInt3.hpp in 64-bit mode.

    - by photo_tom
    I have the code below, which links and runs fine in 32-bit mode:

        #include "safeint3.hpp"

        typedef SafeInt<SIZE_T> SAFE_SIZE_T;

        SAFE_SIZE_T sizeOfCache;
        SAFE_SIZE_T _allocateAmt;

    Here safeint3.hpp is the current version that can be found on CodePlex (SafeInt). For those who are unaware of it, SafeInt is a template class that makes working with different integer types and sizes "safe". To quote a Channel 9 video on it: "it writes the code that you should". That is my case: I have a class that manages a large in-memory cache of objects (6 GB), and I am very concerned about overflow/underflow issues on my pointers, sizes, and other integer variables. For this use, it solves many problems.

    My problem comes when moving from 32-bit development mode to 64-bit production mode. When I build the app in this mode, I get the following linker warnings:

        1>cachecontrol.obj : warning LNK4006: "bool __cdecl IntrinsicMultiplyUint64(unsigned __int64 const &,unsigned __int64 const &,unsigned __int64 *)" (?IntrinsicMultiplyUint64@@YA_NAEB_K0PEA_K@Z) already defined in ImageInRamCache.obj; second definition ignored
        1>cachecontrol.obj : warning LNK4006: "bool __cdecl IntrinsicMultiplyInt64(__int64 const &,__int64 const &,__int64 *)" (?IntrinsicMultiplyInt64@@YA_NAEB_J0PEA_J@Z) already defined in ImageInRamCache.obj; second definition ignored

    While I understand I can ignore the warning, I would like to either (a) prevent it from occurring or (b) make it disappear so that my QA department doesn't flag it as a problem. After spending some time researching, I cannot find a way to do either.
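    Without claiming this is how safeint3.hpp is organized internally: an LNK4006 of this shape typically means a function is defined (not merely declared) in a header that two or more translation units include, so each .obj carries its own copy of the symbol. A generic sketch of the pattern and the usual cure (MultiplyChecked is an invented name for illustration, not SafeInt's actual code):

        // duplicated.h -- generic illustration of the LNK4006 pattern.
        #pragma once

        // Defined in a header WITHOUT 'inline', this function would be
        // emitted once per including .cpp, and the linker would warn:
        // "already defined ...; second definition ignored".
        //
        // Marking the definition 'inline' gives it vague linkage, so the
        // linker silently merges the identical copies.
        inline bool MultiplyChecked(unsigned long long a,
                                    unsigned long long b,
                                    unsigned long long* result)
        {
            // a * b overflows exactly when b != 0 and a > MAX / b.
            if (b != 0 && a > ~0ULL / b)
                return false;
            *result = a * b;
            return true;
        }

    If patching the header is not an option, the other common route is to arrange for the offending definitions to be compiled into a single translation unit only; whether the build offers a switch for that depends on how the header is structured, so I would verify against the actual safeint3.hpp source first.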

    Read the article

  • Encoding multiple video streams with a single avconv invocation

    - by automatthias
    I played with avconv on Ubuntu, and I'm now able to, e.g., record the desktop with sound from a sound card. One thing I wanted to do was record two video inputs at the same time, for instance the desktop and the webcam. I thought about doing something like this:

        avconv \
            -f alsa \
            -i default \
            -acodec flac \
            -f video4linux2 \
            -r 6 \
            -i /dev/video0 \
            -f x11grab \
            -i :0.0 \
            out.mkv

    My thinking was that if you define multiple video inputs, and the .mkv format can handle multiple video streams, avconv will encode two video streams and one audio stream into one file. But this isn't what happens:

        avconv version 0.8.4-6:0.8.4-0ubuntu0.12.10.1, Copyright (c) 2000-2012 the Libav developers
          built on Nov 6 2012 16:51:11 with gcc 4.7.2
        [alsa @ 0x1091bc0] capture with some ALSA plugins, especially dsnoop, may hang.
        [alsa @ 0x1091bc0] Estimating duration from bitrate, this may be inaccurate
        Input #0, alsa, from 'default':
          Duration: N/A, start: 1354364317.020350, bitrate: N/A
            Stream #0.0: Audio: pcm_s16le, 48000 Hz, 2 channels, s16, 1536 kb/s
        [video4linux2 @ 0x10923e0] Estimating duration from bitrate, this may be inaccurate
        Input #1, video4linux2, from '/dev/video0':
          Duration: N/A, start: 100607.724745, bitrate: 29491 kb/s
            Stream #1.0: Video: rawvideo, yuyv422, 640x480, 29491 kb/s, 6 tbr, 1000k tbn, 6 tbc
        [x11grab @ 0x107b2a0] device: :0.0+83,87 -> display: :0.0 x: 83 y: 87 width: 854 height: 480
        [x11grab @ 0x107b2a0] shared memory extension found
        [x11grab @ 0x107b2a0] Estimating duration from bitrate, this may be inaccurate
        Input #2, x11grab, from ':0.0+83,87':
          Duration: N/A, start: 1354364318.488382, bitrate: 196761 kb/s
            Stream #2.0: Video: rawvideo, bgra, 854x480, 196761 kb/s, 15 tbr, 1000k tbn, 15 tbc
        Incompatible pixel format 'bgra' for codec 'mpeg4', auto-selecting format 'yuv420p'
        [buffer @ 0x107fcc0] w:854 h:480 pixfmt:bgra
        [avsink @ 0x10bdf00] auto-inserting filter 'auto-inserted scaler 0' between the filter 'src' and the filter 'out'
        [scale @ 0x10dc680] w:854 h:480 fmt:bgra -> w:854 h:480 fmt:yuv420p flags:0x4
        Output #0, matroska, to '.../out.mkv':
          Metadata:
            encoder         : Lavf53.21.0
          Stream #0.0: Video: mpeg4, yuv420p, 854x480, q=2-31, 4000 kb/s, 1k tbn, 15 tbc
          Stream #0.1: Audio: libvorbis, 48000 Hz, 2 channels, s16
        Stream mapping:
          Stream #2:0 -> #0:0 (rawvideo -> mpeg4)
          Stream #0:0 -> #0:1 (pcm_s16le -> libvorbis)
        Press ctrl-c to stop encoding
        [mpeg4 @ 0x10bd800] rc buffer underflow
        ^Cframe=  160 fps= 15 q=2.0 Lsize=    3414kB time=10.66 bitrate=2623.0kbits/s
        video:3273kB audio:131kB global headers:4kB muxing overhead 0.165600%
        Received signal 2: terminating.

    As the stream mapping shows, only the x11grab video (input 2) and the ALSA audio (input 0) made it into the output; the webcam stream was dropped. I'm not sure whether this is a question of mapping (some -map options to add?) or whether avconv just can't encode more than one video stream at a time. So is this an actual avconv limitation, a limitation of the available containers, or me simply not finding the right combination of command-line options?
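    For what it's worth, avconv's default behaviour is to pick one "best" video stream per output unless told otherwise, so explicit stream maps are the first thing to try. An untested sketch (assuming this avconv build accepts -map in the file:stream form its own log uses, with one -map per input stream to keep):

        # Keep the ALSA audio (input 0), the webcam video (input 1) and the
        # desktop grab (input 2) as three separate streams in one .mkv file.
        avconv \
            -f alsa -i default \
            -f video4linux2 -r 6 -i /dev/video0 \
            -f x11grab -i :0.0 \
            -map 0:0 -map 1:0 -map 2:0 \
            -acodec flac \
            out.mkv

    If the muxer or the build still refuses two video streams with explicit maps, that would point at a container/build limitation rather than a mapping one.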

    Read the article
