Search Results

Search found 58653 results on 2347 pages for 'transactional data'.


  • splitting strings in php

    - by JPro
    I have some test cases/strings in this format:

        o201_01_01a_Testing_to_see_If_this_testcases_passes:without_data
        o201_01_01b_Testing_to_see_If_this_testcases_passes:data
        rx01_01_03d_Testing_the_reconfiguration/Retest:

    Each test case name consists of the actual name (the ID) followed by the description. I want to split them like this:

        o201_01_01a Testing_to_see_If_this_testcases_passes:without_data
        o201_01_01b Testing_to_see_If_this_testcases_passes:data
        rx01_01_03d Testing_the_reconfiguration/Retest:

    I can't figure out the right way to do this with explode in PHP. Can anyone help, please? Thanks.
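
    The splitting logic itself is simple: the ID is the first three underscore-separated fields and the description is everything after them. A rough sketch of that idea (shown in Python for brevity; PHP's explode with a limit argument follows the same approach, and the three-field assumption about the ID format is mine):

        # Split a test-case string into its ID and description parts.
        # Assumes every ID consists of exactly three underscore-separated fields
        # (e.g. "o201_01_01a"); adjust the field count if that assumption is wrong.
        def split_testcase(s):
            parts = s.split("_", 3)           # at most 4 pieces: 3 ID fields + the rest
            test_id = "_".join(parts[:3])     # "o201_01_01a"
            description = parts[3] if len(parts) > 3 else ""
            return test_id, description

        print(split_testcase("o201_01_01a_Testing_to_see_If_this_testcases_passes:without_data"))
        # -> ('o201_01_01a', 'Testing_to_see_If_this_testcases_passes:without_data')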

    Read the article

  • Gantt Chart Using SharePoint

    - by Gayan J
    Hi. I need some help creating a simple SharePoint site that includes a Gantt chart. The main requirement is that the Gantt chart has to be drawn from, and updated via, a database. Does anyone have any ideas on how to do that?

    Read the article

  • checksum in raw sockets and pcap [closed]

    - by hero
    I am using the pcap library to sniff packets, change their TCP data, and then inject the modified packets back onto the network. My questions are: if I change the TCP data, should I recalculate the length field in the TCP header? Should I also recalculate the checksum? I read, on a page about creating raw sockets, that if you set the TCP checksum to 0 the kernel will automatically calculate and fill it in; is this also true on Windows machines?
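
    For reference, the TCP checksum is the standard 16-bit one's-complement Internet checksum, computed over a pseudo-header (source IP, destination IP, a zero byte, the protocol number, and the TCP length) followed by the TCP header (with its checksum field set to zero) and the payload. A minimal sketch of that calculation in Python, assuming IPv4 addresses passed as packed 4-byte values:

        import struct

        def ones_complement_sum(data: bytes) -> int:
            # Pad to an even length, then sum 16-bit words with end-around carry.
            if len(data) % 2:
                data += b"\x00"
            total = 0
            for (word,) in struct.iter_unpack("!H", data):
                total += word
                total = (total & 0xFFFF) + (total >> 16)
            return total

        def tcp_checksum(src_ip: bytes, dst_ip: bytes, tcp_segment: bytes) -> int:
            # tcp_segment is the TCP header (checksum field zeroed) plus payload.
            pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 6, len(tcp_segment))
            return (~ones_complement_sum(pseudo + tcp_segment)) & 0xFFFF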

    Read the article

  • Join multiple filesystems (on multiple computers) into one big volume

    - by jm666
    Scenario: I have 10 computers, each with 12 x 2TB HDDs currently in a raidz2 (10+2) configuration, so each computer has one volume of roughly 20TB. I now need to join those 10 separate computers (separate RAID groups) into one big volume. What is the recommended solution? I'm thinking about FCoE over 10GbE, i.e. buying an FCoE-capable 10GbE card for each computer. What else would I need on the hardware side (probably another computer and an FCoE switch, something like a Cisco Nexus)? The main question is what to install and configure on each computer. They currently run FreeBSD with raidz2, but changing to Linux or Solaris is possible if needed. Any helpful resource on how to build a big volume from smaller RAID groups (on the software side) is very welcome: which OS, which filesystem, which software, and so on. In short, I want roughly 200TB of storage in one filesystem, built from the already existing computers/storage. I don't need fast writes, but I do need good read performance (it will act as a big file server), and it should work transparently, so that when storing data I don't have to care which computer the data ends up on (i.e. not 10 mount points, but one big logical filesystem). Thanks.

    Read the article

  • Access Insert Query

    - by tecno
    Hi, I am using C# to write to and read from an Access 2007 database. The table is:

        ID      - AutoNumber [primary key]
        Fname   - Text
        Lname   - Text
        Address - Text

    The query string I use is:

        "Insert into TblMain (Fname,Lname,Address) Values ('" + fname + "','" + lname + "','" + adrs + "')"

    No errors are returned and the query executes, but the data is not added to the database. Inserting into a table that does not have an AutoNumber column works perfectly. What am I missing?
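
    For comparison, here is the same insert written with placeholders instead of string concatenation (which also sidesteps quoting problems and SQL injection), sketched in Python with pyodbc; the driver name, file path, and sample values are assumptions to adapt, not part of the original question:

        import pyodbc

        # Hypothetical connection string; point DBQ at your .accdb file.
        conn = pyodbc.connect(
            r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=C:\data\mydb.accdb"
        )
        cur = conn.cursor()
        cur.execute(
            "INSERT INTO TblMain (Fname, Lname, [Address]) VALUES (?, ?, ?)",
            ("John", "Doe", "1 Main St"),
        )
        conn.commit()   # without an explicit commit the new row may never reach the file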

    Read the article

  • Oracle: Getting ORA-01195 and ORA-01110 when attempting resetlogs

    - by MacAnthony
    I am trying to get our database to start up. When I log in to sqlplus and do a startup, I get:

        Total System Global Area  534462464 bytes
        Fixed Size                  2215064 bytes
        Variable Size             331350888 bytes
        Database Buffers          192937984 bytes
        Redo Buffers                7958528 bytes
        Database mounted.
        ORA-01589: must use RESETLOGS or NORESETLOGS option for database open

    So I do a shutdown and a startup mount (which works fine) and then run:

        SQL> alter database recover using backup controlfile until cancel;
        alter database recover using backup controlfile until cancel
        *
        ERROR at line 1:
        ORA-00283: recovery session canceled due to errors
        ORA-19909: datafile 1 belongs to an orphan incarnation
        ORA-01110: data file 1: '/<path>/system01.dbf'

        SQL> alter database open resetlogs;
        alter database open resetlogs
        *
        ERROR at line 1:
        ORA-01195: online backup of file 1 needs more recovery to be consistent
        ORA-01110: data file 1: '/<path>/system01.dbf'

    I know I've used instructions to get past this error before, but I'm having trouble tracking them down. A bit of history: we wanted to refresh the data in this instance from another database, so we attempted an expdp/impdp into it. The impdp did not complete correctly; it hit an end-of-file error message and hung (I still have the message in a log if it's important). Since the instance would still start at that point, we decided to use the hot-backup process we have to restore the database. The hot backups are from another server/instance. We went through the same process two weeks ago. Recreating the control file is where we hit the issue above.

    Read the article

  • Performance of looping over an Unboxed array in Haskell

    - by Joey Adams
    First of all, it's great. However, I came across a situation where my benchmarks turned up weird results. I am new to Haskell, and this is the first time I've gotten my hands dirty with mutable arrays and monads. The code below is based on this example. I wrote a generic monadic "for" function that takes numbers and a step function rather than a range (as forM_ does). I compared using my generic for function (Loop A) against embedding an equivalent recursive function (Loop B). Having Loop A is noticeably faster than having Loop B. Weirder still, having both Loop A and Loop B together is faster than having Loop B by itself (but slightly slower than Loop A by itself).

    Some possible explanations I can think of for the discrepancies (note that these are just guesses):

        1. Something I haven't learned yet about how Haskell extracts results from monadic functions.
        2. Loop B faults the array in a less cache-efficient manner than Loop A. Why?
        3. I made a dumb mistake; Loop A and Loop B are actually different.

    Note that in all three cases (Loop A alone, Loop B alone, or both together) the program produces the same output. Here is the code; I tested it with ghc -O2 for.hs using GHC version 6.10.4.

        import Control.Monad
        import Control.Monad.ST
        import Data.Array.IArray
        import Data.Array.MArray
        import Data.Array.ST
        import Data.Array.Unboxed

        for :: (Num a, Ord a, Monad m) => a -> a -> (a -> a) -> (a -> m b) -> m ()
        for start end step f = loop start
            where
                loop i
                    | i <= end = do
                        f i
                        loop (step i)
                    | otherwise = return ()

        primesToNA :: Int -> UArray Int Bool
        primesToNA n = runSTUArray $ do
            a <- newArray (2,n) True :: ST s (STUArray s Int Bool)
            let sr = floor . (sqrt::Double->Double) . fromIntegral $ n+1

            -- Loop A
            for 4 n (+ 2) $ \j -> writeArray a j False

            -- Loop B
            let f i | i <= n = do
                        writeArray a i False
                        f (i+2)
                    | otherwise = return ()
             in f 4

            forM_ [3,5..sr] $ \i -> do
                si <- readArray a i
                when si $
                    forM_ [i*i,i*i+i+i..n] $ \j ->
                        writeArray a j False

            return a

        primesTo :: Int -> [Int]
        primesTo n = [i | (i,p) <- assocs . primesToNA $ n, p]

        main = print $ primesTo 30000000

    Read the article

  • Reading MP3 files

    - by Midas
    I want to read MP3 files in C++, and I prefer to write my own code for this, basically to learn how the file type works. I want to read all the raw data of an MP3 file and have my speakers play it. :) I have no idea where to start, since I don't yet know how the data is actually stored in an MP3 file. Thanks for your help.
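
    For orientation, an MP3 file usually starts with an optional ID3v2 metadata tag followed by a sequence of MPEG audio frames, each with its own 4-byte header. A small sketch of locating where the audio data begins (shown in Python rather than C++ for brevity; the size field uses ID3v2's "synchsafe" encoding, and footer/extended-header handling is deliberately ignored here):

        def id3v2_tag_size(path):
            # Returns the number of bytes to skip before the first MPEG frame,
            # or 0 if the file has no ID3v2 tag at the start.
            with open(path, "rb") as f:
                header = f.read(10)
            if len(header) < 10 or header[:3] != b"ID3":
                return 0
            # Tag size is stored in 4 "synchsafe" bytes (7 significant bits each).
            size = ((header[6] & 0x7F) << 21) | ((header[7] & 0x7F) << 14) | \
                   ((header[8] & 0x7F) << 7) | (header[9] & 0x7F)
            return 10 + size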

    Read the article

  • KVM Hosting: How to efficiently replicate guests

    - by javano
    I have three KVM servers, each with one guest VM running directly on its local storage (so each guest essentially gets a dedicated box worth of computing power). In the event of a host failure I would like the guests replicated to at least one of the other hosts, so I can spin a guest up there until the failing host is fixed. I am curious about KVM cloning. I can clone a VM live or when it is suspended/shut down. Obviously suspended VMs will naturally be quicker to clone, but these three VMs comprise three parts of a single solution, so I don't ever want any one of them shut down. How can I efficiently clone these VMs between servers? I have had a couple of ideas, but are these insane, or is there a better method I have missed for my scenario?

        1. Set up a DRBD partition between box 1 and box 2 that VM 1 runs from, so it is replicated between box 1 and box 2; repeat between boxes 2 and 3, and boxes 3 and 1. (This could be insane; I have never used DRBD, only read about it.)
        2. Just use the standard KVM CLI clone options to perform live clones. (I'm dubious about this because I don't know how long it will take or what the performance impact will be while it runs.)
        3. Run a copy of each VM on at least one other host, and have the guest on one host export its data to the matching guest on another host, which imports it (scripting this on the guests).
        4. Some other way? Ideas welcome!

    Side note: these servers have 4 x 15k SAS drives in RAID 10, so they aren't rocketing fast, and as I mentioned, each VM runs from the host's local storage, with no NAS or SAN, etc. That is why I am asking about guest replication. Also, this isn't about disaster recovery; the guests will be exporting their data to a NAS over a VPN, so I am looking at how I can have them quickly spun up in a host-failure situation.

    Read the article

  • Strange 400 error with IIS 7.5 and a webservice?

    - by Juw
    OK, this is a long shot. I have been pondering this for hours and have no clue how to solve it, but maybe someone here recognizes the problem and can point me in the right direction. I have an IIS 7.5 server and an MSSQL database on a different server. On the IIS server there is a web service that communicates with the MSSQL server. The problem is that when there is data the MSSQL server needs to send back to the web service, and the web service delivers it back to the web browser (as JSON), I get a 400 error. Looking through the IIS logs, there is just a 400 and nothing more. When I put a call to the service into my browser's URL field I get this: "The server encountered an error processing the request. Please see the service help page for constructing valid requests to the service." There is nothing wrong with how I call the web service; it has worked before on a different (dev) server. Does anyone have a clue what this could be about? A 400 means a malformed request, but this isn't one. And why is it that when there is no data to return to the user everything works, but when there is data fetched from the MSSQL DB the 400 error shows up? I hope someone has some tips on how to solve it. Thanks in advance.

    Read the article

  • DocumentFragment not appending in IE

    - by bmwbzz
    I have a select list which, when changed, pulls data via AJAX and dynamically creates select lists. Then, based on the data used to create the select lists (a type), I pull more data via AJAX if I don't have it already, create the options for the select list, and store them in a fragment. Then I append that fragment to the select list. This is zippy in FF3 and Chrome, but in IE7 it either doesn't append the options at all or takes a long time (minutes) to append them. Note: I am also using jQuery.

    Code from the success callback which creates the select lists:

        blockDiv.empty();
        var contentItemTypes = new Array();
        selectLists = new Array();
        for (var post in msg) {
            if (post != undefined) {
                var div = fragment.cloneNode(true); // deep copy
                var nameDiv = $(div.firstChild);
                nameDiv.text(msg[post].Name);
                blockDiv[0].appendChild(div);
                var allSelectLists = blockDiv.find('.editor-field select');
                var selectList = $(allSelectLists[allSelectLists.length - 1]);
                var blockId = msg[post].ID;
                var elId = 'PageContentItem.' + blockId;
                selectList.attr('id', elId);
                selectList.attr('name', elId);
                var contentItemTypeId = msg[post].ContentItemTypeId;
                selectList.attr('cit', contentItemTypeId);
                if (contentItems[contentItemTypeId] != null || contentItems[contentItemTypeId] != undefined) {
                    contentItems[contentItemTypeId] = null;
                }
                selectLists[post] = selectList;
            }
        }
        var firstContentTypeId = selectLists[0].attr('cit');
        getContentItems(firstContentTypeId, setContentItemsForList, 0);

    Code to get the items for the options in the select lists:

        function getContentItems(contentTypeId, callback, callbackParam) {
            if (contentItems[contentTypeId] != null || contentItems[contentTypeId] != undefined) {
                callback(contentTypeId, callbackParam);
                return;
            }
            contentItems[contentTypeId] = document.createDocumentFragment();
            Q.ajax({
                type: "POST",
                url: "/CMS/ContentItem/ListByContentType/" + contentTypeId,
                data: "{}",
                contentType: "application/json; charset=utf-8",
                dataType: "json",
                error: function(xhr, msg, e) {
                    var err = eval("(" + xhr.responseText + ")");
                    alert(err.ExceptionType + " ***** " + err.Message + " ***** " + err.StackTrace);
                },
                success: function(msg) {
                    var li;
                    for (var post in msg) {
                        if (post != undefined) {
                            li = $('<option value="' + msg[post].ID + '">' + msg[post].Description + '</option>');
                            contentItems[contentTypeId].appendChild(li[0]);
                        }
                    }
                    callback(contentTypeId, callbackParam);
                }
            });
        }

        function setContentItemsForList(contentTypeId, selectIndex) {
            if (selectIndex < selectLists.length) {
                var items = contentItems[contentTypeId].cloneNode(true);
                selectLists[selectIndex].append($(items.childNodes));
                selectIndex++;
                if (selectIndex < selectLists.length) {
                    var nextContentTypeId = selectLists[selectIndex].attr('cit');
                    getContentItems(nextContentTypeId, setContentItemsForList, selectIndex);
                }
            }
        }

    Read the article

  • How can I track an ASP.NET page's size without tracing?

    - by Middletone
    I want to be able to track the amount of data that is transferred from my web site to each user who accesses the site. I can do this for file downloads and such, but what about the pure HTML content itself? How can I track the output size of a page (or of the data transferred via an AJAX call) to the client and log it against a particular user's session? Also, how would this differ when GZip is used in IIS 6.0?

    Read the article

  • Win2008: Boot from mirrored dynamic disk fails!

    - by Daniel Marschall
    Hello. I am using Windows Server 2008 R2 Datacenter. I have two 1.5TB SATA2 hard disks installed and I want to set up a software RAID (I do know the disadvantages of software RAID vs. hardware RAID). I have the following partitions on Disk 0:

        (1) Microsoft Reserved, 100 MB (dynamic), created during setup
        (2) System partition, 100 GB (dynamic)
        (3) Data partition, 1.2 TB (dynamic)

    I have already mirrored these contents to Disk 1. Its contents are:

        (1) System partition mirror, 100 GB (dynamic)
        (2) Data partition mirror, 1.2 TB (dynamic)
        (3) Unused, 100 MB (dynamic); corresponds to the MSR of Disk 0, created during setup

    Since the data and system partitions are mirrored, I expect my system to work if Disk 0 fails. But it doesn't. If I force booting from Disk 0, it works (I get the bootloader screen with the two entries). If I force booting from Disk 1 (F8 for BBS), nothing happens: I get a blank black screen with a blinking caret. I have already made partition 1 on Disk 1 active with diskpart, but it still does not boot from that drive. Please help. Both disks use the MBR partition style, and they look identical except for the missing MSR partition at the beginning of Disk 1 (which does not seem to be relevant to booting).

    Regards, Daniel Marschall

    Read the article

  • How to solve Memory leaks in Lib Xml Parser in objective-C where the list is returned?

    - by Madan Mohan
    Hi guys, I am getting leaks from the libxml parser while retrieving data from the net. In the code below, I have allocated the list:

        - (void)getCustomersList {
            // make an operation so we can push it into the queue
            SEL method = @selector(parseForData);
            NSInvocationOperation *op = [[NSInvocationOperation alloc] initWithTarget:self selector:method object:nil];
            customersTempList = [[NSMutableArray alloc] initWithCapacity:20]; // allocated list
            [self.retrieverQueue addOperation:op];
            [op release];
        }

    In the parser .m class, this is one of the conditions in endElement where the leak shows up (it returns each record):

        else if (0 == strncmp((const char *)localname, kCustomerElement, kCustomerElementLength)) {
            [customersTempList addObject:customer];
            printf("\n no of objects in temp list:%d", [customersTempList count]);
            if ([customersTempList count] == 20) {
                NSMutableArray *argsList = [customersTempList copy]; // here it is showing a leak
                printf("\n Calling reload data with %d new objects", [argsList count]);
                SEL selector = @selector(parser:addCustomerObject:);
                NSMethodSignature *sig = [(id)self.delegate methodSignatureForSelector:selector];
                if (nil != sig && [self.delegate respondsToSelector:selector]) {
                    NSInvocation *invocation = [NSInvocation invocationWithMethodSignature:sig];
                    [invocation retainArguments];
                    [invocation setTarget:self.delegate];
                    [invocation setSelector:selector];
                    [invocation setArgument:&self atIndex:2];
                    [invocation setArgument:&argsList atIndex:3];
                    [invocation performSelectorOnMainThread:@selector(invoke) withObject:NULL waitUntilDone:NO];
                }
                [customersTempList removeAllObjects];
            }
        }
        // returns the list after all the records are stored in it
        else if (0 == strncmp((const char *)localname, kCustomersElement, kCustomersElementLength)) {
            printf("\n Calling reload data with %d new objects", [customersTempList count]);
            NSMutableArray *argsList = [customersTempList copy];
            printf("\n Calling reload data with %d new objects", [argsList count]);
            SEL selector = @selector(parser:addCustomerObject:);
            NSMethodSignature *sig = [(id)self.delegate methodSignatureForSelector:selector];
            if (nil != sig && [self.delegate respondsToSelector:selector]) {
                NSInvocation *invocation = [NSInvocation invocationWithMethodSignature:sig];
                [invocation retainArguments];
                [invocation setTarget:self.delegate];
                [invocation setSelector:selector];
                [invocation setArgument:&self atIndex:2];
                [invocation setArgument:&argsList atIndex:3];
                [invocation performSelectorOnMainThread:@selector(invoke) withObject:NULL waitUntilDone:NO];
            }
            [customersTempList removeAllObjects];
        }

    Please help me out with this. Thanks, Madan.

    Read the article

  • Version of SQLite used in Android?

    - by Eno
    Reason: I'm wondering how to handle schema migrations. Newer SQLite versions support an "ALTER TABLE" SQL command, which would save me from having to copy the data, drop the table, recreate the table and re-insert the data.
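
    For reference, the two migration strategies being weighed look roughly like this, sketched with Python's sqlite3 module (the table and column names are made up; the exact SQLite build bundled with a given Android release is what determines which ALTER TABLE forms are available):

        import sqlite3

        conn = sqlite3.connect("app.db")

        # Strategy 1: in-place ALTER TABLE (ADD COLUMN is the widely supported form).
        conn.execute("ALTER TABLE notes ADD COLUMN created_at TEXT")

        # Strategy 2: copy / drop / recreate, for changes ALTER TABLE cannot express.
        conn.executescript("""
            CREATE TABLE notes_new (id INTEGER PRIMARY KEY, body TEXT, created_at TEXT);
            INSERT INTO notes_new (id, body) SELECT id, body FROM notes;
            DROP TABLE notes;
            ALTER TABLE notes_new RENAME TO notes;
        """)
        conn.commit()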

    Read the article

  • Reading log files from web application

    - by Egorinsk
    I want to write a small PHP application for monitoring logs on a Debian server, including syslog logs and Apache/PHP messages. The problem is that the Apache user (www-data) has no access to the /var/log directory. What would be the best way to grant the PHP application access to the logs? Let's assume that the log files can be really large, like hundreds of megabytes. I have some ideas:

        1. Write a shell script that is run via sudo and tails the last 512 KB of the log into a separate file the application can read. That's inefficient, because it forks a new process and the data has to be read twice.
        2. Add www-data to the adm group (which can read logs). That's insecure.
        3. Start a PHP process via cron every minute to read the logs. That's not very good, because it doesn't allow real-time monitoring. Also, this script would run even when I'm not reading logs and consume CPU time (the server is in the cloud, and I'll have to pay for it).
        4. Create hard links to all log files with lowered permissions. I guess that won't work, because logrotate could recreate the log files and they would change inode numbers.
        5. Start a separate nginx/Apache server under a privileged user that may read the logs.

    Maybe someone has a better solution?

    Read the article

  • GUI wxHaskell database table in grid

    - by snorlaks
    Hello, I'm writing a GUI app in Haskell. I use SQLite to gather data and want to use a grid in wxHaskell to present the data in the app. Everything works, but I want to be able to add rows from the Haskell app. I achieved that, but when I refresh the grid it expands and covers other controls. To make a long story short: how can I set the maximum size of a control, and, if there are more rows in the grid, make a scrollbar appear? Thanks for any help. Bye.

    Read the article

  • Yes, another thread question...

    - by Michael
    I can't understand why I am losing control of my GUI even though I am using a thread to play the .wav file. Can someone pinpoint what is incorrect?

        #!/usr/bin/env python
        import wx, pyaudio, wave, easygui, thread, time, os, sys, traceback, threading
        import wx.lib.delayedresult as inbg

        isPaused = False
        isStopped = False

        class Frame(wx.Frame):
            def __init__(self):
                print 'Frame'
                wx.Frame.__init__(self, parent=None, id=-1, title="Jasmine", size=(720, 300))
                # initialize panel
                panel = wx.Panel(self, -1)
                # initialize grid bag
                sizer = wx.GridBagSizer(hgap=20, vgap=20)
                # initialize buttons
                exitButton = wx.Button(panel, wx.ID_ANY, "Exit")
                pauseButton = wx.Button(panel, wx.ID_ANY, 'Pause')
                prevButton = wx.Button(panel, wx.ID_ANY, 'Prev')
                nextButton = wx.Button(panel, wx.ID_ANY, 'Next')
                stopButton = wx.Button(panel, wx.ID_ANY, 'Stop')
                # add widgets to sizer
                sizer.Add(pauseButton, pos=(1,10))
                sizer.Add(prevButton, pos=(1,11))
                sizer.Add(nextButton, pos=(1,12))
                sizer.Add(stopButton, pos=(1,13))
                sizer.Add(exitButton, pos=(5,13))
                # initialize song time gauge
                #timeGauge = wx.Gauge(panel, 20)
                #sizer.Add(timeGauge, pos=(3,10), span=(0, 0))
                # initialize menuFile widget
                menuFile = wx.Menu()
                menuFile.Append(0, "L&oad")
                menuFile.Append(1, "E&xit")
                menuBar = wx.MenuBar()
                menuBar.Append(menuFile, "&File")
                menuAbout = wx.Menu()
                menuAbout.Append(2, "A&bout...")
                menuAbout.AppendSeparator()
                menuBar.Append(menuAbout, "Help")
                self.SetMenuBar(menuBar)
                self.CreateStatusBar()
                self.SetStatusText("Welcome to Jasime!")
                # place sizer on panel
                panel.SetSizer(sizer)
                # initialize icon
                self.cd_image = wx.Image('cd_icon.png', wx.BITMAP_TYPE_PNG)
                self.temp = self.cd_image.ConvertToBitmap()
                self.size = self.temp.GetWidth(), self.temp.GetHeight()
                wx.StaticBitmap(parent=panel, bitmap=self.temp)
                # set bindings
                self.Bind(wx.EVT_BUTTON, self.OnQuit, id=exitButton.GetId())
                self.Bind(wx.EVT_BUTTON, self.pause, id=pauseButton.GetId())
                self.Bind(wx.EVT_BUTTON, self.stop, id=stopButton.GetId())
                self.Bind(wx.EVT_MENU, self.loadFile, id=0)
                self.Bind(wx.EVT_MENU, self.OnQuit, id=1)
                self.Bind(wx.EVT_MENU, self.OnAbout, id=2)

            # Load file using FileDialog, and create a thread for user control while playing the file
            def loadFile(self, event):
                foo = wx.FileDialog(self, message="Open a .wav file...", defaultDir=os.getcwd(),
                                    defaultFile="", style=wx.FD_MULTIPLE)
                foo.ShowModal()
                self.queue = foo.GetPaths()
                self.threadID = 1
                while len(self.queue) != 0:
                    self.song = myThread(self.threadID, self.queue[0])
                    self.song.start()
                    while self.song.isAlive():
                        time.sleep(2)
                    self.queue.pop(0)
                    self.threadID += 1

            def OnQuit(self, event):
                self.Close()

            def OnAbout(self, event):
                wx.MessageBox("This is a great cup of tea.", "About Jasmine",
                              wx.OK | wx.ICON_INFORMATION, self)

            def pause(self, event):
                global isPaused
                isPaused = not isPaused

            def stop(self, event):
                global isStopped
                isStopped = not isStopped

        class myThread(threading.Thread):
            def __init__(self, threadID, wf):
                self.threadID = threadID
                self.wf = wf
                threading.Thread.__init__(self)

            def run(self):
                global isPaused
                global isStopped
                self.waveFile = wave.open(self.wf, 'rb')
                # initialize stream
                self.p = pyaudio.PyAudio()
                self.stream = self.p.open(format = self.p.get_format_from_width(self.waveFile.getsampwidth()),
                                          channels = self.waveFile.getnchannels(),
                                          rate = self.waveFile.getframerate(),
                                          output = True)
                self.data = self.waveFile.readframes(1024)
                isPaused = False
                isStopped = False
                # main play loop, with pause event checking
                while self.data != '':
                    # while isPaused != True:
                    #     if isStopped == False:
                    self.stream.write(self.data)
                    self.data = self.waveFile.readframes(1024)
                    #     elif isStopped == True:
                    #         self.stream.close()
                    #         self.p.terminate()
                self.stream.close()
                self.p.terminate()

        class App(wx.App):
            def OnInit(self):
                self.frame = Frame()
                self.frame.Show()
                self.SetTopWindow(self.frame)
                return True

        def main():
            app = App()
            app.MainLoop()

        if __name__ == '__main__':
            main()

    Read the article

  • SQL Server combining 2 rows into 1 from the same table

    - by Maton
    Hi, I have a table with JobID (PK), EmployeeID (FK), StartDate and EndDate, containing data such as:

        1, 10, '01-Jan-2010 08:00:00', '01-Jan-2010 08:30:00'
        2, 10, '01-Jan-2010 08:50:00', '01-Jan-2010 09:05:00'
        3, 10, '02-Feb-2010 10:00:00', '02-Feb-2010 10:30:00'

    I want to return a record for each job's EndDate together with the StartDate of the same employee's next immediate job (by date and time). So from the data above the result would be:

        Result 1: 10, 01-Jan-2010 08:30:00, 01-Jan-2010 08:50:00
        Result 2: 10, 01-Jan-2010 09:05:00, 02-Feb-2010 10:00:00

    Any help is greatly appreciated!
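
    The transformation being asked for is essentially: group the jobs by employee, sort each group by start time, and pair every job's EndDate with the next job's StartDate. A rough sketch of that pairing logic over in-memory rows (shown in Python; in SQL Server the same idea is typically written as a self-join on the "next" job, or with a window function such as LEAD in newer versions):

        from itertools import groupby

        # (JobID, EmployeeID, StartDate, EndDate) rows, e.g. loaded from the table
        jobs = [
            (1, 10, "2010-01-01 08:00:00", "2010-01-01 08:30:00"),
            (2, 10, "2010-01-01 08:50:00", "2010-01-01 09:05:00"),
            (3, 10, "2010-02-02 10:00:00", "2010-02-02 10:30:00"),
        ]

        jobs.sort(key=lambda r: (r[1], r[2]))             # by employee, then start time
        for emp, rows in groupby(jobs, key=lambda r: r[1]):
            rows = list(rows)
            for current, nxt in zip(rows, rows[1:]):      # pair each job with the next one
                print(emp, current[3], nxt[2])            # employee, EndDate, next StartDate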

    Read the article

  • Recursion Question : Revision

    - by stan
    My slides say that:

        1. A recursive call should always be on a smaller data structure than the current one.
        2. There must be a non-recursive option if the data structure is too small.
        3. You need a wrapper method to make the recursive method accessible.

    Just reading this from the slides makes no sense, especially as it was a topic from before Christmas! Could anyone try to clear up what it means, please? Thank you.
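
    As an illustration (not from the slides themselves), the three points map onto a common pattern; here is a hypothetical sketch in Python showing the public wrapper, the base case for a structure that is "too small", and the recursive call on a smaller remaining structure:

        def sum_list(xs):
            """Wrapper method: the public entry point (point 3)."""
            return _sum_from(xs, 0)

        def _sum_from(xs, i):
            if i >= len(xs):          # non-recursive option when nothing is left (point 2)
                return 0
            # recursive call on a smaller remaining structure (point 1)
            return xs[i] + _sum_from(xs, i + 1)

        print(sum_list([1, 2, 3]))    # 6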

    Read the article

  • How can this PHP/FQL code be modified to increase the performance and usability?

    - by Kaoukkos
    I am trying to get some insights from the Facebook pages I administer. What my code does: it gets the IDs of the pages I want to work with from MySQL (I did not include that part). After this, I get the page_id, name and fan_count of each of those Facebook IDs and save them in fancounts[]. Using the IDs (pages[]) I get at most two messages from each page; there may be no messages, or one or two (possibly I will increase this later). messages[] holds the messages of each page. I have two problems with it: it performs very slowly, and I can't find a way to echo the data like this:

        ID - Name of the page - Fan Count
        Here goes the first message
        Here goes the second one
        // here is a break
        ID - Name of the page 2 - Fan Count
        Here goes the first message of page 2
        Here goes the second one of page 2

    My questions are: how can the code be modified to increase performance and show the data as above? I read about fql.multiquery; can it be used here? Please provide me with code examples. Thank you.

        $pages = array(); // I get the IDs I want to work with
        $pagesIds = implode(',', $pages);

        // fancounts[] holds the page_id, name and fan_count of the IDs I work with
        $fancounts = array();
        $pagesFanCounts = $facebook->api("/fql", array(
            "q" => "SELECT page_id, name, fan_count FROM page WHERE page_id IN ({$pagesIds})"
        ));
        foreach ($pagesFanCounts['data'] as $page) {
            $fancounts[] = $page['page_id']."-".$page['name']."-".$page['fan_count'];
        }

        // messages[] holds from 0 to 2 messages from each of the above pages
        $messages = array();
        foreach ($pages as $id) {
            $getMessages = $facebook->api("/fql", array(
                "q" => "SELECT message FROM stream WHERE source_id = '$id' LIMIT 2"
            ));
            $messages[] = $getMessages['data'];
        }

        // this is how I print them now, but it does not give me the best output
        // (thanks goes to Mark for providing me this code)
        $count = min(count($fancounts), count($messages));
        for ($i=0; $i<$count; ++$i) {
            echo $fancounts[$i],'<br>';
            foreach ($messages[$i] as $msg) {
                echo $msg['message'],'<br>';
            }
        }

    Read the article

  • Compose synthetic English phrase that would contain 160 bits of recoverable information

    - by Alexander Gladysh
    I have 160 bits of random data. Just for fun, I want to generate a pseudo-English phrase to "store" this information in, and I want to be able to recover the information from the phrase. Note: this is not a security question; I don't care whether someone else can recover the information or even detect that it is there.

    Criteria for better phrases, from most important to least important:

        1. Short
        2. Unique
        3. Natural-looking

    The current approach, suggested here: take three lists of 1024 nouns, verbs and adjectives each (picking the most popular ones), and generate a phrase by the following pattern, reading 10 bits for each word:

        Noun verb adjective verb,
        Noun verb adjective verb,
        Noun verb adjective verb,
        Noun verb adjective verb.

    Now, this seems to be a good approach, but the phrase is a bit too long and a bit too dull.

    I have found a corpus of words here (Part of Speech Database). After some ad-hoc filtering, I calculated that this corpus contains approximately:

        50690 usable adjectives
        123585 nouns
        15301 verbs

    This allows me to use up to:

        16 bits per noun (actually 16.9, but I can't figure out how to use fractional bits)
        15 bits per adjective
        13 bits per verb

    For the noun-verb-adjective-verb pattern this gives 57 bits per "sentence" in the phrase. This means that, if I use all the words I can get from this corpus, I can generate three sentences instead of four (160 / 57 ≈ 2.8):

        Noun verb adjective verb,
        Noun verb adjective verb,
        Noun verb adjective verb.

    Still a bit too long and dull. Any hints on how I can improve it? What I see that I can try:

        1. Try to compress my data somehow before encoding. But since the data is completely random, only some phrases would be shorter (and, I guess, not by much).
        2. Improve the phrase pattern so it looks better.
        3. Use several patterns, using the first word in the phrase to somehow indicate, for future decoding, which pattern was used (for example, by its last letter or even the length of the word), and pick the pattern according to the first bytes of the data. ...I'm not good enough with English to come up with better phrase patterns. Any suggestions?
        4. Use more linguistics in the pattern: different tenses, etc. ...I guess I would need a much better word corpus than I have now for that. Any hints on where I can get a suitable one?
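
    The encode/decode mechanics of the basic approach are just fixed-width chunking and list indexing. A minimal sketch, assuming three 1024-entry word lists (so 10 bits per word) and the noun-verb-adjective-verb pattern; the placeholder word lists are made up and would be loaded from a real corpus:

        # Hypothetical 1024-entry word lists; in practice load real words from a corpus.
        nouns      = ["noun%d" % i for i in range(1024)]
        verbs      = ["verb%d" % i for i in range(1024)]
        adjectives = ["adj%d" % i for i in range(1024)]
        pattern = [nouns, verbs, adjectives, verbs]      # noun verb adjective verb

        def encode(data):
            nbits = len(data) * 8
            bits = int.from_bytes(data, "big")
            words = []
            for i in range(nbits // 10):                 # 160 bits -> 16 words -> 4 "sentences"
                chunk = (bits >> (nbits - 10 * (i + 1))) & 0x3FF
                words.append(pattern[i % len(pattern)][chunk])
            return " ".join(words)

        def decode(phrase, nbits=160):
            bits = 0
            for i, word in enumerate(phrase.split()):
                bits = (bits << 10) | pattern[i % len(pattern)].index(word)
            return bits.to_bytes(nbits // 8, "big")

        import os
        secret = os.urandom(20)                          # 160 random bits
        assert decode(encode(secret)) == secret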

    Read the article
