Search Results

Search found 23480 results on 940 pages for '32 bit'.


  • How to populate a spinner from a MySQL database

    - by user1472085
    How do I populate this? I have two Spinners in my app, one for Province and the other for City. When I select Gauteng in the province spinner, I want the second spinner to show Johannesburg, Pretoria and Centurion, and if I select KZN I want it to show Pietermaritzburg and Durban. I'm using a MySQL database with a Province table and a City table. Here is my Province table (idprovince, provincename):

        1 Gauteng, 2 Free-State, 3 Limpopo, 4 Northen-Cape, 5 North-West,
        6 Western-Cape, 7 Eastern-Cape, 8 Kwa-Zulu-Natal, 9 Mpumalanga

    And here is my City table (idcity, cityname, province_idprovince):

        1 Johannesburg 1, 2 pretoria 1, 3 Centurion 1, 4 Bloemfontein 2,
        5 Welkom 2, 6 Polokwane 3, 7 Phalaborwa 3, 8 Potgiersrus 3,
        9 Tzaneen 3, 10 Kimberley 4, 11 Upington 4, 12 Mafikeng 5,
        13 Klerksdorp 5, 14 Mmabatho 5, 15 Potchefstroom 5, 16 Brits 5,
        17 Cape-Town 6, 18 Stellenbosch 6, 19 George 6, 20 Saldhana-Bay 6,
        21 Bisho 7, 22 Port-Elizabeth 7, 23 East-London 7,
        24 Pietermaritzburg 8, 25 Durban 8, 26 Ulundi 8, 27 Richards-Bay 8,
        28 Newcastle 8, 29 Nelspruit 9, 30 Witbank 9, 31 Middleburg 9,
        32 Ermelo 9

    I will appreciate your help.
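
    A minimal sketch of the dependent lookup this needs, written in Python against an in-memory SQLite stand-in for the MySQL tables (the table and column names come from the question; the sample rows and function name are mine): when a province is selected, run a parameterized query on the city table and hand the result to the second spinner's adapter.

        import sqlite3

        # In-memory stand-in for the MySQL schema described in the question.
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE city (idcity INTEGER PRIMARY KEY,"
                     " cityname TEXT, province_idprovince INTEGER)")
        conn.executemany("INSERT INTO city VALUES (?, ?, ?)",
                         [(1, "Johannesburg", 1), (2, "Pretoria", 1),
                          (3, "Centurion", 1),
                          (24, "Pietermaritzburg", 8), (25, "Durban", 8)])

        def cities_for(province_id):
            # The parameterized query the second spinner should be repopulated
            # from whenever the province spinner's selection changes.
            rows = conn.execute("SELECT cityname FROM city"
                                " WHERE province_idprovince = ?"
                                " ORDER BY cityname", (province_id,))
            return [name for (name,) in rows]

        print(cities_for(1))  # ['Centurion', 'Johannesburg', 'Pretoria']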


  • How to determine OS Platform with WMI?

    - by cary.wagner
    I am trying to figure out if there is a location in WMI that returns the OS architecture (i.e. 32-bit or 64-bit) and that works across "all" versions of Windows. I thought I had figured it out looking at my Win2k8 system when I found Win32_OperatingSystem / OSArchitecture. I was wrong: that field doesn't appear to exist on Win2k3 systems. Argh! So, is anyone aware of another field in WMI that "is" the same across server versions? If not, what about a registry key that is the same? I am using a tool that only allows me to configure simple field queries, so I cannot use a complex script to perform the check. Any help would be greatly appreciated! Cheers... Cary
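
    One candidate worth checking (an assumption on my part, though to my knowledge it is present on both Win2k3 and Win2k8) is Win32_Processor.AddressWidth, which reports 32 on a 32-bit OS and 64 on a 64-bit OS. A quick way to inspect it, sketched in Python by shelling out to wmic (assumes wmic is on the PATH):

        import subprocess

        # Win32_Processor.AddressWidth: 32 on a 32-bit OS, 64 on a 64-bit OS.
        # Assumption: wmic is available (it ships with XP/2003 and later).
        out = subprocess.check_output(["wmic", "cpu", "get", "AddressWidth"],
                                      text=True)
        width = next(line.strip() for line in out.splitlines()[1:]
                     if line.strip())
        print("This Windows install is %s-bit" % width)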


  • 'whatever' has no declared type

    - by mihirpmehta
    I am developing a parser using Bison, and in my grammar I am getting this error. Here is the code:

        extern NodePtr CreateNode(NodeType, ...);
        extern NodePtr ReplaceNode(NodeType, NodePtr);
        extern NodePtr MergeSubTrees(NodeType, ...);
        ...................
        NodePtr rootNodePtr = NULL; /* pointer to the root of the parse tree */
        NodePtr nodePtr = NULL;     /* pointer to an error node */
        ...........................
        NodePtr mainMethodDecNodePtr = NULL;
        ................
        /* YYSTYPE */
        %union {
            NodePtr nodePtr;
        }

    I am getting this error whenever I use something like $$.nodePtr or $1.nodePtr:

        Parser.y:1302.32-33: $1 of `Expression' has no declared type


  • Ret Failure with SDL using FASM on Win32

    - by Jon Purdy
    I'm using SDL with FASM, and have code that's minimally like the following:

        format ELF
        extrn _SDL_Init
        extrn _SDL_SetVideoMode
        extrn _SDL_Quit
        extrn _exit

        SDL_INIT_VIDEO equ 0x00000020

        section '.text'
        public _SDL_main
        _SDL_main:
            ccall _SDL_Init, SDL_INIT_VIDEO
            ccall _SDL_SetVideoMode, 640, 480, 32, 0
            ccall _SDL_Quit
            ccall _exit, 0 ; Success, or
            ret            ; failure.

    With the following quick-and-dirty makefile:

        SOURCES = main.asm
        OBJECTS = main.o
        TARGET = SDLASM.exe
        FASM = C:\fasm\fasm.exe

        release : $(OBJECTS)
            ld $(OBJECTS) -LC:/SDL/lib/ -lSDLmain -lSDL -LC:/MinGW/lib/ -lmingw32 -lcrtdll -o $(TARGET) --subsystem windows

        cleanrelease :
            del $(OBJECTS)

        %.o : %.asm
            $(FASM) $< $@

    Using exit() (or Windows' ExitProcess()) seems to be the only way to get this program to exit cleanly, even though I feel like I should be able to use retn/retf. When I just ret without calling exit(), the application does not terminate and needs to be killed. Could anyone shed some light on this? It only happens when I make the call to SDL_SetVideoMode().


  • Multiple levels of 'collection.defaultdict' in Python

    - by Morlock
    Thanks to some great folks on SO, I discovered the possibilities offered by collections.defaultdict, notably in readability and speed, and I have put them to use with success. Now I would like to implement three levels of dictionaries, the top two being defaultdict and the lowest being int. I can't find the appropriate way to do this. Here is my attempt:

        from collections import defaultdict

        d = defaultdict(defaultdict)
        a = [("key1", {"a1":22, "a2":33}),
             ("key2", {"a1":32, "a2":55}),
             ("key3", {"a1":43, "a2":44})]
        for i in a:
            d[i[0]] = i[1]

    Now this works, but the following, which is the desired behavior, doesn't:

        d["key4"]["a1"] + 1

    I suspect that I should have declared somewhere that the second-level defaultdict is of type int, but I didn't find where or how to do so. The reason I am using defaultdict in the first place is to avoid having to initialize the dictionary for each new key. Any more elegant suggestion? Thanks pythoneers!
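
    A minimal sketch of the usual way to get exactly this behavior: give the outer defaultdict a factory that builds inner defaultdict(int) values (the lambda is the standard idiom; a named top-level function works too, and is required if the structure needs to be pickled):

        from collections import defaultdict

        # Two defaultdict levels on top, int at the bottom.
        d = defaultdict(lambda: defaultdict(int))

        d["key1"]["a1"] = 22
        d["key2"]["a1"] = 32
        # Missing keys now default to int() == 0 at the innermost level:
        print(d["key4"]["a1"] + 1)  # prints 1

        # For a genuine third nesting level, wrap once more:
        d3 = defaultdict(lambda: defaultdict(lambda: defaultdict(int)))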



  • How do you measure latency in low-latency environments?

    - by Ajaxx
    Here's the setup: your system is receiving a stream of data that contains discrete messages (usually between 32-128 bytes per message). As part of your processing pipeline, each message passes through two physically separate applications which exchange the data using a low-latency approach (such as messaging over UDP, or RDMA), and finally to a client via the same mechanism. Assuming you can inject yourself at any level, including wire protocol analysis, what tools and/or techniques would you use to measure the latency of your system? As part of this, I'm assuming that every message delivered to the system results in a corresponding (though not equivalent) message being pushed through the system and delivered to the client. The only tool I've seen on the market like this is TS-Associates TipOff. I'm sure that with the right access you could probably measure the same information using a wire analysis tool (a la Wireshark) and the right dissectors, but is this the right approach, or are there any commodity solutions that I can use?
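
    For the inject-yourself-at-a-level option, a minimal sketch of in-process instrumentation in Python (illustrative only; a real low-latency shop would rely on hardware timestamping or a capture appliance rather than userspace clocks): tag each message with a monotonic timestamp at ingress and take deltas at each later measurement point.

        import time

        def ingress(payload):
            # Stamp the message as it enters the system.
            return {"payload": payload, "t_ingress_ns": time.monotonic_ns()}

        def measure(msg, stage):
            # Latency accumulated so far is the monotonic delta. Note that
            # monotonic clocks are only comparable within a single host; across
            # boxes you need PTP-synchronized clocks or capture at a single tap.
            delta_ns = time.monotonic_ns() - msg["t_ingress_ns"]
            print("%s: %.1f us" % (stage, delta_ns / 1000.0))

        m = ingress(b"\x00" * 64)  # a 64-byte message, mid-range for 32-128 bytes
        measure(m, "after hop 1")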


  • In C# should I use uint or int for values that are never supposed to be negative?

    - by Hamish Grubijan
    Suppose that the MaxValue of (roughly :) ) 2^31 vs 2^32 does not matter. On one hand, using uint seems nice because it is self-explanatory, it indicates (and promises?) that some value may never be negative. However, int is more common, and a cast is often inconvenient. One can just use int and always supplement it with code contracts (everyone has moved to .Net 4.0 by now, right?) Standard libraries do use int for Length and Size properties, even though those should never be negative. So, is it obvious to you that int is better than uint most of the time, or is it more complicated? Please ask questions if you find that this question is not clearly stated. Thanks.


  • How to overlay one div over another div

    - by tonsils
    Hi, hoping someone can assist: I need to overlay one individual div over another individual div. My code looks like this:

        <div class="navi"></div>
        <div id="infoi"><img src="info_icon2.png" height="20" width="32"/></div>

    Unfortunately I cannot nest the infoi div or img inside the first div (class navi); it has to be two separate divs as shown, but I need to place the img div over the first div (navi), at the right-most side and centered on top of the navi div. Would appreciate any help in achieving this (hopefully it's possible). Thanks.


  • How can I get bitfields to arrange my bits in the right order?

    - by Jim Hunziker
    To begin with, the application in question is always going to be on the same processor, and the compiler is always gcc, so I'm not concerned about bitfields not being portable. gcc lays out bitfields such that the first listed field corresponds to the least significant bit of a byte. So with the following structure, with a=0, b=1, c=1, d=1, you get a byte of value e0:

        struct Bits {
            unsigned int a:5;
            unsigned int b:1;
            unsigned int c:1;
            unsigned int d:1;
        } __attribute__((__packed__));

    (Actually, this is C++, so I'm talking about g++.) Now let's say I'd like a to be a six bit integer. Now, I can see why this won't work, but I coded the following structure:

        struct Bits2 {
            unsigned int a:6;
            unsigned int b:1;
            unsigned int c:1;
            unsigned int d:1;
        } __attribute__((__packed__));

    Setting b, c, and d to 1, and a to 0, results in the following two bytes:

        c0 01

    This isn't what I wanted. I was hoping to see this:

        e0 00

    Is there any way to specify a structure that has three bits in the most significant bits of the first byte and six bits spanning the five least significant bits of the first byte and the most significant bit of the second? Please be aware that I have no control over where these bits are supposed to be laid out: it's a layout of bits that are defined by someone else's interface.
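
    Since the bit order of bitfields is implementation-defined, the usual portable answer is to pack the bytes with explicit shifts and masks. A sketch of the layout being asked for, in Python for brevity (the same expressions work on unsigned chars in C/C++):

        def pack(a, b, c, d):
            # d, c, b occupy bits 7..5 of byte 0; the 6-bit field a spans
            # bits 4..0 of byte 0 (its high five bits) and bit 7 of byte 1.
            byte0 = (d << 7) | (c << 6) | (b << 5) | (a >> 1)
            byte1 = (a & 1) << 7
            return bytes([byte0, byte1])

        print(pack(0, 1, 1, 1).hex())  # 'e000', the layout the question wants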


  • Why will this code compile with the ifort compiler but not with gfortran?

    - by CuriousCompiler
    I'm rewriting some code to make a program compile with the gfortran compiler as opposed to the ifort compiler I usually use. The code follows:

        Subroutine SlideBits (WORD, BITS, ADDR)
        Implicit None
        Integer(4) WORD
        Integer(4) BITS
        Integer(4) ADDR
        Integer(4) ADDR1

        ADDR1 = 32 - ADDR
        WORD = (WORD .And. (.Not.ISHFT(1,ADDR1))) .Or. ISHFT(BITS,ADDR1)
        End

    When I compile the above code using the gfortran compiler, I receive this error:

        WORD = (WORD .And. (.Not.ISHFT(1,ADDR1))) .Or. ISHFT(BITS,ADDR1)
        Error: Operand of .NOT. operator at (1) is INTEGER(4)

    All three of the variables coming into the subroutine are integers. I've looked around a bit, and the gfortran wiki states that the gfortran compiler should be able to handle logical statements being applied to integer values. Several other sites I've visited either quote the gnu wiki or agree with it. This is the first time I've seen this error, as the Intel Fortran compiler (ifort) I normally use compiles cleanly.
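
    For what it's worth, standard Fortran spells bitwise operations on integers with the IAND, IOR, NOT and ISHFT intrinsics rather than the logical .And./.Or./.Not. operators, which is presumably why gfortran balks. The computation the subroutine performs, written out in Python purely for illustration:

        # Clear bit ADDR1 of WORD, then OR in BITS shifted to that position:
        # the same update the Fortran line does via vendor-extension logical
        # operators on integers.
        def slide_bits(word, bits, addr):
            addr1 = 32 - addr
            return (word & ~(1 << addr1)) | (bits << addr1)

        print(hex(slide_bits(0xFFFFFFFF, 0, 31)))  # bit 1 cleared: 0xfffffffd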


  • OpenSL ES decode of 24-bit FLAC

    - by yano
    I am trying to decode a FLAC file with 24-bit sample format using OpenSL ES on Android. Originally, I had my SLDataFormat_PCM for the SLDataSink set up like this:

        _pcm.formatType = SL_DATAFORMAT_PCM;
        _pcm.numChannels = 2;
        _pcm.samplesPerSec = SL_SAMPLINGRATE_44_1;
        _pcm.bitsPerSample = SL_PCMSAMPLEFORMAT_FIXED_16;
        _pcm.containerSize = SL_PCMSAMPLEFORMAT_FIXED_16;
        _pcm.channelMask = SL_SPEAKER_FRONT_LEFT | SL_SPEAKER_FRONT_RIGHT;
        _pcm.endianness = SL_BYTEORDER_LITTLEENDIAN;

    This is working well for basically any data format. Luckily the samplesPerSec is not respected (I don't want resampling). Now I want to support the full bit depth of a FLAC file with 24-bit samples. When using this format, it apparently performs a bit-depth conversion, because once I load the file and then check the ANDROID_KEY_PCMFORMAT_BITSPERSAMPLE info, it is 16. When I put bitsPerSample = SL_PCMSAMPLEFORMAT_FIXED_24 or SL_PCMSAMPLEFORMAT_FIXED_32, OpenSL ES rejects it:

        E/libOpenSLES(22706): pAudioSnk: bitsPerSample=32
        W/libOpenSLES(22706): Leaving Engine::CreateAudioPlayer (SL_RESULT_CONTENT_UNSUPPORTED)

    Any idea how this is meant to work? Is Android currently restricted to 16-bit int only? I would also accept 32-bit float, but I don't suppose that will work either.


  • Script for running a script

    - by user280926
    Hello everybody. I have a VBScript script:

        Dim WSHShell, WinDir, Value, wshProcEnv, fso, Spath
        Set WSHShell = CreateObject("WScript.Shell")
        Dim objFSO, objFileCopy
        Dim strFilePath, strDestination
        Const OverwriteExisting = True
        Set objFSO = CreateObject("Scripting.FileSystemObject")
        Set windir = objFSO.getspecialfolder(0)
        objFSO.CopyFile "\\dv.rt.ru\SYSVOL\DV.RT.RU\scripts\shutdown.vbs", windir & "\", OverwriteExisting

        strComputer = "."
        Set objWMIService = GetObject("winmgmts:" _
            & "{impersonationLevel=impersonate}!\\" _
            & strComputer & "\root\cimv2")
        JobID = "1"
        Set colScheduledJobs = objWMIService.ExecQuery _
            ("Select * from Win32_ScheduledJob")
        For Each objJob in colScheduledJobs
            objJob.Delete
        Next
        Set objNewJob = objWMIService.Get("Win32_ScheduledJob")
        errJobCreate = objNewJob.Create _
            (windir & "\shutdown.vbs", "********093000.000000+660", _
            True, 1 OR 2 OR 4 OR 8 OR 16 OR 32 OR 64, , True, JobId)

    How can I make shutdown.vbs run not just once at 9:30, but repeatedly from 9:30 to 12:00?


  • What's wrong in this IBAN validation code?

    - by Jackoder
    Hello coders, I'm working on a PHP IBAN validator, but I have a problem. I wrote it like this:

        function IbanValidator($value) {
            $iban = false;
            $value = strtoupper(trim($value));
            # Change US text into your country code
            if (preg_match('/^US\d{7}0[A-Z0-9]{16}$/', $value)) {
                $number = substr($value, 4, 22) . '2927' . substr($value, 2, 2);
                $number = str_replace(
                    array('A','B','C','D','E','F','G','H','I','J','K','L','M',
                          'N','O','P','Q','R','S','T','U','V','W','X','Y','Z'),
                    array(10,11,12,13,14,15,16,17,18,19,20,21,22,
                          23,24,25,26,27,28,29,30,31,32,33,34,35),
                    $number
                );
                // The original line read `$iban = (1 == bcmod($number,97)) ? true;`,
                // a parse error (the ternary has no ':' branch); assigning the
                // comparison directly is what was intended. Note also that for a
                // US country code the mapped-letter constant should be '3028'
                // (U=30, S=28); '2927' corresponds to 'TR'.
                $iban = (1 == bcmod($number, 97));
            }
            return $iban;
        }

    Thanks.


  • Adobe LiveCycle performance

    - by Fabio
    I have installed Adobe LiveCycle in order to convert MS Word files to PDF from a web app. Specifically, I use the DocConverter tool. Previously I used the OpenOffice UNO SDK, but I ran into problems with particular documents. Now the conversion is fine, but the conversion time is huge. These are the times to convert documents of different sizes via OpenOffice and via LiveCycle. Could you suggest anything?

        Size (bytes)   OpenOffice (sec)   Adobe LiveCycle (sec)
        24064          1                  8
        50688          0                  3
        100864         0                  3
        253952         0                  5
        509440         1                  5
        1017856        5                  18
        2098688        8                  10
        4042240        19                 45
        6281216        0                  9
        8212480        32                 125


  • How to generate GIR files from the Vala compiler?

    - by celil
    I am trying to create Python bindings to a Vala library using PyGI with GObject introspection. However, I am having trouble generating the GIR files (which I plan to compile to typelib files subsequently). According to the documentation, valac should support generating GIR files. Compiling the following helloworld.vala

        public struct Point {
            public double x;
            public double y;
        }

        public class Person {
            public int age = 32;
            public Person(int age) {
                this.age = age;
            }
        }

        public int main() {
            var p = Point() { x=0.0, y=0.1 };
            stdout.printf("%f %f\n", p.x, p.y);
            var per = new Person(22);
            stdout.printf("%d\n", per.age);
            return 0;
        }

    with the command

        valac helloworld.vala --gir=Hello-1.0.gir

    doesn't create the Hello-1.0.gir file as one would expect. How can I generate the GIR file?


  • How to schedule daily backup in SQL Server 2008 Web Edition

    - by Xenon
    In SQL Server Management Studio I created a maintenance plan, but it won't work. The error is:

        Message
        Executed as user: LITESPELL-19C34\Administrator. Microsoft (R) SQL Server
        Execute Package Utility Version 10.0.1600.22 for 32-bit Copyright (C)
        Microsoft Corp 1984-2005. All rights reserved. The SQL Server Execute
        Package Utility requires Integration Services to be installed by one of
        these editions of SQL Server 2008: Standard, Enterprise, Developer, or
        Evaluation. To install Integration Services, run SQL Server Setup and
        select Integration Services. The package execution failed. The step failed.

    But on the Microsoft page http://www.microsoft.com/sqlserver/2008/en/us/web.aspx, in the "Automate tasks and policies" section, it is written that backups can be scheduled in this edition. How?


  • How to register System.DirectoryServices for use in SQL CLR User Functions?

    - by Saul Dolgin
    I am porting an old 32-bit COM component that was written in VB6 for the purpose of reading and writing to an Active Directory server. The new solution will be in C# and will use SQL CLR user functions. The assembly that I am trying to deploy to SQL Server contains a reference to System.DirectoryServices. The project does compile without any errors, but I am unable to deploy the assembly to the SQL Server because of the following error:

        Error: Assembly 'system.directoryservices, version=2.0.0.0, culture=neutral,
        publickeytoken=b03f5f7f11d50a3a.' was not found in the SQL catalog.

    What are the correct steps for registering System.DirectoryServices on SQL Server?


  • Google Maps integration with JSON - CircularReferenceError

    - by JZ
    I'm working on Rails 3.0.0.beta2, following Advanced Rails Recipes (Recipe #32, "Mark locations on a Google Map"), and I hit a roadblock. The following code, from the /layouts/maps.html.erb file, raises an ActiveSupport::JSON::Encoding::CircularReferenceError ("object references itself") at line 3:

        <% if @maps -%>
        <script type="text/javascript">
          var maps = <%= @maps.to_json %>;
        </script>
        <% end -%>

    This is my first attempt at rendering JSON, and I don't know how to debug this problem. Do you have experience with this? What could cause this problem? Thank you in advance!


  • Java Math.cos() Method Does Not Return 0 When Expected

    - by dimo414
    Using Java on a Windows 7 PC (not sure if that matters), calling Math.cos() on values that should return 0 (like pi/2) instead returns small values; small values that, unless I'm misunderstanding, are much greater than 1 ulp off from zero:

        Math.cos(Math.PI/2)           = 6.123233995736766E-17
        Math.ulp(Math.cos(Math.PI/2)) = 1.232595164407831E-32

    Is this in fact within 1 ulp and I'm simply confused? And would this be an acceptable wrapper method to resolve this minor inaccuracy?

        public static double cos(double a){
            double temp = Math.abs(a % Math.PI);
            if(temp == Math.PI/2) return 0;
            return Math.cos(a);
        }
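
    A sketch of what is going on, shown in Python since double arithmetic behaves the same there: Math.PI/2 is not exactly pi/2 (the nearest double is off by about 6.1e-17), and because the slope of cos at pi/2 is -1, the correctly rounded cosine of that rounded argument is itself about 6.1e-17. So the result is within 1 ulp of the true cosine of the double actually passed in, just not within 1 ulp of 0.

        import math

        half_pi = math.pi / 2   # the nearest double to pi/2, not pi/2 itself
        print(math.cos(half_pi))            # 6.123233995736766e-17
        # The argument error explains the result: the double half_pi differs
        # from the true pi/2 by about 6.1e-17, and d/dx cos(x) = -1 there.
        print(math.ulp(math.cos(half_pi)))  # ~1.23e-32 (math.ulp needs Python 3.9+)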


  • Custom Build Step Paths Between x86 and x64 in Visual Studio

    - by Bob Somers
    For reference, I'm using Visual Studio 2010. I have a custom build step defined as follows:

        if exist "$(TargetDir)"server.dll copy "$(TargetDir)"server.dll "c:\program files (x86)\myapp\server.dll"

    This works great on my desktop, which is running 64-bit Windows. However, when I build on my laptop, c:\Program Files (x86)\ doesn't exist because it's running 32-bit Windows. I'd like to put in something that will work between both editions of Windows, since the project files are under version control and it's a real pain to change the paths every time I work on my laptop. If this were a *nix environment I'd just create a symlink and be done with it. Any ideas?


  • XML parsing and transforming (XSLT or otherwise)

    - by observ
    I have several XML files that are formatted this way:

        <ROOT>
          <OBJECT>
            <identity>
              <id>123</id>
            </identity>
            <child2 attr="aa">32</child2>
            <child3>
              <childOfChild3 att1="aaa" att2="bbb" att3="CCC">LN</childOfChild3>
            </child3>
            <child4>
              <child5>
                <child6>3ddf</child6>
                <child7>
                  <childOfChild7 att31="RR">1231</childOfChild7>
                </child7>
              </child5>
            </child4>
          </OBJECT>
          <OBJECT>
            <identity>
              <id>124</id>
            </identity>
            <child2 attr="bb">212</child2>
            <child3>
              <childOfChild3 att1="ee" att2="ccc" att3="EREA">OP</childOfChild3>
            </child3>
            <child4>
              <child5>
                <child6>213r</child6>
                <child7>
                  <childOfChild7 att31="EE">1233</childOfChild7>
                </child7>
              </child5>
            </child4>
          </OBJECT>
        </ROOT>

    How can I transform it this way?

        <ROOT>
          <OBJECT>
            <id>123</id>
            <child2>32</child2>
            <attr>aa</attr>
            <child3></child3>
            <childOfChild3>LN</childOfChild3>
            <att1>aaa</att1>
            <att2>bbb</att2>
            <att3>CCC</att3>
            <child4></child4>
            <child5></child5>
            <child6>3ddf</child6>
            <child7></child7>
            <childOfChild7>1231</childOfChild7>
            <att31>RR</att31>
          </OBJECT>
          <OBJECT>
            <id>124</id>
            <child2>212</child2>
            <attr>bb</attr>
            <child3></child3>
            <childOfChild3>OP</childOfChild3>
            <att1>ee</att1>
            <att2>ccc</att2>
            <att3>EREA</att3>
            <child4></child4>
            <child5></child5>
            <child6>213r</child6>
            <child7></child7>
            <childOfChild7>1233</childOfChild7>
            <att31>EE</att31>
          </OBJECT>
        </ROOT>

    I know some C#, so maybe a parser there? Or some generic XSLT? The XML files are data received from a client, so I can't control the way they are sent to me.

    L.E. Basically, when I try to test this data in Excel (for example, I want to make sure that the attribute of childOfChild7 corresponds to the correct identity id), I get a lot of blank spaces. If I import it into Access to pull out only the data I want, I have to do thousands of subqueries to get them all into a nice table. Basically I just want to see all of an object's data on one row (one object, one row) and then delete/hide the columns I don't need.
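
    Since the flattening rule is uniform (each element's text becomes <tag>text</tag>, and each attribute becomes its own element), a generic sketch in Python with xml.etree rather than XSLT (the file names are placeholders; note this also emits an empty <identity></identity> placeholder, which the desired output drops, so remove whichever wrappers you don't want):

        import xml.etree.ElementTree as ET

        tree = ET.parse("input.xml")        # placeholder file name
        root_out = ET.Element("ROOT")

        for obj in tree.getroot().findall("OBJECT"):
            obj_out = ET.SubElement(root_out, "OBJECT")
            for elem in obj.iter():
                if elem is obj:
                    continue                # skip the OBJECT wrapper itself
                # Element content becomes <tag>text</tag> ...
                ET.SubElement(obj_out, elem.tag).text = (elem.text or "").strip()
                # ... and every attribute becomes its own element.
                for name, value in elem.attrib.items():
                    ET.SubElement(obj_out, name).text = value

        ET.ElementTree(root_out).write("flat.xml")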


  • Finding text's bounding rect in Core Text

    - by Mo
    I'm trying to find the boundaries of a line of text in Core Text. For simplicity, assume it has a single character. At the moment I'm using the following method:

        line = CTLineCreateWithAttributedString(attrString);
        rect = CTLineGetImageBounds(line, context);

    It works most of the time, but for some characters, like math italic d (Unicode: 0x1D451) or math italic q (Unicode: 0x1D45E), the width is a bit short. I tried using CTLineGetTypographicBounds() or CTFramesetterSuggestFrameSizeWithConstraints(), but they didn't help either (I think they use the glyph's advance to find the width, not its graphical width). As the font itself isn't italic, I also can't use the slant angle to correct this. I tried accessing the glyphs directly and using CTFontCreatePathForGlyph(), but failed, as CGGlyph and UniChar are both 16 bits and I need 32-bit characters. Does anyone know if I'm doing anything wrong? If so, what's the right way?


  • Deterministic and non-uniform long string generation from seed

    - by Limonup
    I had this weird idea for an encryption scheme that I wanted to try out. It may be bad, and it may have been done before, but I'm just doing it for fun. The short version of the question is: is it possible to generate a long, deterministic and non-uniformly distributed string/sequence of numbers from a small seed? Long(er) version: I was thinking of encrypting a text by changing its encoding. The new encoding would be generated via the Huffman algorithm. To work well, the Huffman algorithm needs a fairly long text with a non-uniform distribution. Then characters can have different bit lengths, which would be the primary strength of this encryption. The problem is that it's impractical to enter/remember a long text each time you want to decrypt the text. So I was wondering if it is possible to generate a text from a password seed? It doesn't matter what the text is, as long as it has a non-uniform distribution of characters and the exact same sequence can be recreated each time you give it the same seed. Preferably, are there any functions/extensions in Python that can do this?

    EDIT: To expand on the "strength" of varying bit length: if I have the string "test", the ASCII values are 116, 101, 115, 116, which gives the bit values

        1110100 1100101 1110011 1110100

    Then, say my Huffman algorithm generates the encoding

        t = 101
        e = 1100111
        s = 10001

    The final string is 101 1100111 10001 101; if we encode this back to ASCII, we get

        1011100 1111000 1101000

    which is 3 entirely different characters. Obviously it's impossible to perform any kind of frequency analysis or something like that on this.
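
    A minimal sketch of the seed-to-corpus part in Python: random.Random is deterministic for a given seed, and weighted sampling gives the non-uniform distribution Huffman needs (the alphabet and weights below are arbitrary choices of mine, and reproducibility should only be counted on for a fixed Python version):

        import random

        def corpus_from_seed(seed, length=10000):
            rng = random.Random(seed)   # same seed -> same sequence
            letters = "abcdefghijklmnopqrstuvwxyz "
            # Skewed weights make the character frequencies non-uniform,
            # which is what gives Huffman codes their varying bit lengths.
            weights = [2 ** (i % 7) for i in range(len(letters))]
            return "".join(rng.choices(letters, weights=weights, k=length))

        text = corpus_from_seed("my password")
        assert text == corpus_from_seed("my password")  # deterministic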


  • Why do System.IO.Log SequenceNumbers have variable length?

    - by Doug McClean
    I'm trying to use the System.IO.Log features to build a recoverable transaction system. I understand it to be implemented on top of the Common Log File System. The usual ARIES approach to write-ahead logging involves persisting log record sequence numbers in places other than the log (for example, in the header of the database page modified by the logged action). Interestingly, the documentation for CLFS says that such sequence numbers are always 64-bit integers. Confusingly, however, the .Net wrapper around those SequenceNumbers can be constructed from a byte[] but not from a UInt64. Its value can also be read as a byte[], but not as a UInt64. Inspecting the implementation of SequenceNumber.GetBytes() reveals that it can in fact return arrays of either 8 or 16 bytes. This raises a few questions:

        1. Why do the .Net sequence numbers differ in size from the CLFS
           sequence numbers?
        2. Why are the .Net sequence numbers variable in length?
        3. Why would you need 128 bits to represent such a sequence number?
           It seems like you would truncate the log well before using up a
           64-bit address space (16 exbibytes, or around 10^19 bytes; more
           if you address longer words).
        4. If log sequence numbers are going to be represented as 128-bit
           integers, why not provide a way to serialize/deserialize them as
           pairs of UInt64s instead of rather pointlessly incurring heap
           allocations for short-lived new byte[]s every time you need to
           write/read one? Alternatively, why bother making SequenceNumber
           a value type at all?

    It seems an odd tradeoff to double the storage overhead of log sequence numbers just so you can have an untruncated log longer than a million terabytes, so I feel like I'm missing something here, or maybe several things. I'd much appreciate it if someone in the know could set me straight.

