Search Results

Search found 314 results on 13 pages for 'mad hatter'.

Page 12/13

  • How to fix these compiler errors?

    - by Sandra Schlichting
    I have this source code from 2001 that I would like to compile. It gives this:

        $ make
        g++ -O99 -Wall -DLINUX -pedantic -c -o audio.o audio.cpp
        In file included from audio.cpp:7:
        audio.h:14: error: use of enum ‘mad_flow’ without previous declaration
        audio.h:15: error: use of enum ‘mad_flow’ without previous declaration
        audio.h:17: error: use of enum ‘mad_flow’ without previous declaration
        audio.cpp: In function ‘mad_flow audio::input(void*, mad_stream*)’:
        audio.cpp:19: error: new declaration ‘mad_flow audio::input(void*, mad_stream*)’
        audio.h:14: error: ambiguates old declaration ‘int audio::input(void*, mad_stream*)’
        audio.h:11: error: ‘size_t audio::stream::BufferPos’ is private
        audio.cpp:23: error: within this context
        audio.h:11: error: ‘size_t audio::stream::BufferSize’ is private
        audio.cpp:23: error: within this context
        audio.h:10: error: ‘char* audio::stream::Buffer’ is private
        audio.cpp:26: error: within this context
        audio.h:11: error: ‘size_t audio::stream::BufferSize’ is private
        audio.cpp:26: error: within this context
        audio.h:11: error: ‘size_t audio::stream::BufferPos’ is private
        audio.cpp:27: error: within this context
        audio.h:11: error: ‘size_t audio::stream::BufferSize’ is private
        audio.cpp:27: error: within this context
        audio.cpp: In function ‘mad_flow audio::output(void*, const mad_header*, mad_pcm*)’:
        audio.cpp:49: error: new declaration ‘mad_flow audio::output(void*, const mad_header*, mad_pcm*)’
        audio.h:15: error: ambiguates old declaration ‘int audio::output(void*, const mad_header*, mad_pcm*)’
        audio.cpp: In function ‘mad_flow audio::error(void*, mad_stream*, mad_frame*)’:
        audio.cpp:83: error: new declaration ‘mad_flow audio::error(void*, mad_stream*, mad_frame*)’
        audio.h:17: error: ambiguates old declaration ‘int audio::error(void*, mad_stream*, mad_frame*)’
        audio.cpp: In constructor ‘audio::stream::stream(const char*)’:
        audio.cpp:119: error: ‘input’ was not declared in this scope
        audio.cpp:122: error: ‘output’ was not declared in this scope
        audio.cpp:123: error: ‘error’ was not declared in this scope
        make: *** [audio.o] Error 1

    audio.h contains:

        #include <stdlib.h>
        #include "mad.h"

        namespace audio
        {
            class stream
            {
            private:
                char* Buffer;
                size_t BufferSize, BufferPos;
                struct mad_decoder Decoder;

                friend enum mad_flow input(void* Data, struct mad_stream* MadStream);
                friend enum mad_flow output(void* Data, const struct mad_header* Header, struct mad_pcm* PCM);
                friend enum mad_flow error(void* Data, struct mad_stream* MadStream, struct mad_frame* Frame);

            public:
                stream(const char* FileName);
                ~stream();
                void play();
            };
        }

    I have tried to just insert enum mad_flow {}; but that just gave a new problem. Can anyone see how to fix this?
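
    One plausible direction (a hedged sketch, not a verified fix — it assumes the mad.h being picked up is libmad's, which declares enum mad_flow, and that the callbacks live in namespace audio): declare the three callbacks at namespace scope before the class, so the friend declarations refer to those exact functions instead of implicitly declaring int-returning ones.

        // Sketch of a reworked audio.h, under the assumptions stated above.
        #include <stdlib.h>
        #include "mad.h"

        namespace audio
        {
            // Declare the callbacks first, with their real return type...
            enum mad_flow input(void* Data, struct mad_stream* MadStream);
            enum mad_flow output(void* Data, const struct mad_header* Header, struct mad_pcm* PCM);
            enum mad_flow error(void* Data, struct mad_stream* MadStream, struct mad_frame* Frame);

            class stream
            {
            private:
                char* Buffer;
                size_t BufferSize, BufferPos;
                struct mad_decoder Decoder;

                // ...so these friend declarations match them instead of
                // introducing new, conflicting declarations.
                friend enum mad_flow input(void* Data, struct mad_stream* MadStream);
                friend enum mad_flow output(void* Data, const struct mad_header* Header, struct mad_pcm* PCM);
                friend enum mad_flow error(void* Data, struct mad_stream* MadStream, struct mad_frame* Frame);

            public:
                stream(const char* FileName);
                ~stream();
                void play();
            };
        }

    That would also account for the "'input' was not declared in this scope" errors in the constructor: a friend declaration alone does not make the name visible to ordinary lookup until a matching namespace-scope declaration exists.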

  • AutoIt script runs without error but I can't see the archive?

    - by Scott
        #include <File.au3>
        #include <Zip.au3>

        ; bad file extensions
        Local $extData = "ade|adp|app|asa|ashx|asp|bas|bat|cdx|cer|chm|class|cmd|com|cpl|crt|csh|der|exe|fxp|gadget|hlp|hta|htr|htw|ida|idc|idq|ins|isp|its|jse|ksh|lnk|mad|maf|mag|mam|maq|mar|mas|mat|mau|mav|maw|mda|mdb|mde|mdt|mdw|mdz|msc|msh|msh1|msh1xml|msh2|msh2xml|mshxml|msi|msp|mst|ops|pcd|pif|prf|prg|printer|pst|reg|rem|scf|scr|sct|shb|shs|shtm|shtml|soap|stm|url|vb|vbe|vbs|ws|wsc|wsf|wsh"
        Local $extensions = StringSplit($extData, "|")

        ; What is the root directory?
        $rootDirectory = InputBox("Root Directory", "Please enter the root directory...")
        archiveDir($rootDirectory)

        Func archiveDir($dir)
            $goDirs = True
            $goFiles = True

            ; Get all the files under the current dir
            $allOfDir = _FileListToArray($dir)
            Local $countDirs = 0
            Local $countFiles = 0
            $imax = UBound($allOfDir)
            For $i = 0 to $imax - 1
                If StringInStr(FileGetAttrib($dir & "\" & $allOfDir[$i]), "D") Then
                    $countDirs = $countDirs + 1
                ElseIf StringInStr(($allOfDir[$i]), ".") Then
                    $countFiles = $countFiles + 1
                EndIf
            Next

            MsgBox(0, "Value of $countDirs in " & $dir, $countDirs)
            MsgBox(0, "Value of $countFiles in " & $dir, $countFiles)

            If ($countDirs > 0) Then
                Local $allDirs[$countDirs]
                $goDirs = True
            Else
                $goDirs = False
            EndIf

            If ($countFiles > 0) Then
                Local $allFiles[$countFiles]
                $goFiles = True
            Else
                $goFiles = False
            EndIf

            $dirCount = 0
            $fileCount = 0
            For $i = 0 to $imax - 1
                If (StringInStr(FileGetAttrib($dir & "\" & $allOfDir[$i]), "D")) And ($goDirs == True) Then
                    $allDirs[$dirCount] = $allOfDir[$i]
                    $dirCount = $dirCount + 1
                ElseIf (StringInStr(($allOfDir[$i]), ".")) And ($goFiles == True) Then
                    $allFiles[$fileCount] = $allOfDir[$i]
                    $fileCount = $fileCount + 1
                EndIf
            Next

            ; Zip them if need be in current spot using 'ext_zip.zip' as file name, loop through each file ext.
            If ($goFiles == True) Then
                $emax = UBound($extensions)
                $fmax = UBound($allFiles)
                For $e = 0 to $emax - 1
                    For $f = 0 to $fmax - 1
                        $currentExt = getExt($allFiles[$f])
                        If ($currentExt == $extensions[$e]) Then
                            $zip = _Zip_Create($dir & "\" & $currentExt & "_zip.zip")
                            _Zip_AddFile($zip, $allFiles[$f])
                        EndIf
                    Next
                Next
            EndIf

            ; Get all dirs under current dir
            ; For each dir, recursive call from step 2
            If ($goDirs == True) Then
                $dmax = UBound($allDirs)
                $rootDirectory = $rootDirectory & "\"
                For $d = 0 to $dmax - 1
                    archiveDir($rootDirectory & $allDirs[$d])
                Next
            EndIf
        EndFunc

        Func getExt($filename)
            $pos = StringInStr($filename, ".")
            $retval = StringTrimLeft($filename, $pos + 1)
            Return $retval
        EndFunc

    This should output the .zip archives in the directories where it finds the files that need zipping, but it doesn't. Is there something I have to do after I create and add files to the archive within the code, to put the created archive in the directory?

  • AutoIt script runs without error but I can't see the archive? - UPDATE

    - by Scott
        #include <File.au3>
        #include <Zip.au3>
        #include <Array.au3>

        ; bad file extensions
        Local $extData = "ade|adp|app|asa|ashx|asp|bas|bat|cdx|cer|chm|class|cmd|com|cpl|crt|csh|der|exe|fxp|gadget|hlp|hta|htr|htw|ida|idc|idq|ins|isp|its|jse|ksh|lnk|mad|maf|mag|mam|maq|mar|mas|mat|mau|mav|maw|mda|mdb|mde|mdt|mdw|mdz|msc|msh|msh1|msh1xml|msh2|msh2xml|mshxml|msi|msp|mst|ops|pcd|pif|prf|prg|printer|pst|reg|rem|scf|scr|sct|shb|shs|shtm|shtml|soap|stm|url|vb|vbe|vbs|ws|wsc|wsf|wsh"
        Local $extensions = StringSplit($extData, "|")

        ; What is the root directory?
        $rootDirectory = InputBox("Root Directory", "Please enter the root directory...")
        archiveDir($rootDirectory)

        Func archiveDir($dir)
            $goDirs = True
            $goFiles = True

            ; Get all the files under the current dir
            $allOfDir = _FileListToArray($dir)
            $tmax = UBound($allOfDir)
            For $t = 0 to $tmax - 1
            Next

            Local $countDirs = 0
            Local $countFiles = 0
            $imax = UBound($allOfDir)
            For $i = 0 to $imax - 1
                If StringInStr(FileGetAttrib($dir & "\" & $allOfDir[$i]), "D") Then
                    $countDirs = $countDirs + 1
                ElseIf StringInStr(($allOfDir[$i]), ".") Then
                    $countFiles = $countFiles + 1
                EndIf
            Next

            If ($countDirs > 0) Then
                Local $allDirs[$countDirs]
                $goDirs = True
            Else
                $goDirs = False
            EndIf

            If ($countFiles > 0) Then
                Local $allFiles[$countFiles]
                $goFiles = True
            Else
                $goFiles = False
            EndIf

            $dirCount = 0
            $fileCount = 0
            For $i = 0 to $imax - 1
                If (StringInStr(FileGetAttrib($dir & "\" & $allOfDir[$i]), "D")) And ($goDirs == True) Then
                    $allDirs[$dirCount] = $allOfDir[$i]
                    $dirCount = $dirCount + 1
                ElseIf (StringInStr(($allOfDir[$i]), ".")) And ($goFiles == True) Then
                    $allFiles[$fileCount] = $allOfDir[$i]
                    $fileCount = $fileCount + 1
                EndIf
            Next

            ; Zip them if need be in current spot using 'ext_zip.zip' as file name, loop through each file ext.
            If ($goFiles == True) Then
                $fmax = UBound($allFiles)
                For $f = 0 to $fmax - 1
                    $currentExt = getExt($allFiles[$f])
                    $position = _ArraySearch($extensions, $currentExt)
                    If @error Then
                        MsgBox(0, "Not Found", "Not Found")
                    Else
                        $zip = _Zip_Create($dir & "\" & $currentExt & "_zip.zip")
                        _Zip_AddFile($zip, $dir & "\" & $allFiles[$f])
                    EndIf
                Next
            EndIf

            ; Get all dirs under current dir
            ; For each dir, recursive call from step 2
            If ($goDirs == True) Then
                $dmax = UBound($allDirs)
                $rootDirectory = $rootDirectory & "\"
                For $d = 0 to $dmax - 1
                    archiveDir($rootDirectory & $allDirs[$d])
                Next
            EndIf
        EndFunc

        Func getExt($filename)
            $pos = StringInStr($filename, ".")
            $retval = StringTrimLeft($filename, $pos - 1)
            Return $retval
        EndFunc

    Updated; fixed a lot of bugs. Still not working. Like I said, I have a list of 'bad' file extensions. This script should go through a directory of files (and subdirectories), and zip up (in separate zip files for each bad extension) all files WITH those bad extensions, in the directories where it finds them. What is wrong???

  • GCC: Simple inheritance test fails

    - by knight666
    I'm building an open source 2D game engine called YoghurtGum. Right now I'm working on the Android port, using the NDK provided by Google. I was going mad because of the errors I was getting in my application, so I made a simple test program:

        class Base
        {
        public:
            Base() { }
            virtual ~Base() { }
        }; // class Base

        class Vehicle : virtual public Base
        {
        public:
            Vehicle() : Base() { }
            ~Vehicle() { }
        }; // class Vehicle

        class Car : public Vehicle
        {
        public:
            Car() : Base(), Vehicle() { }
            ~Car() { }
        }; // class Car

        int main(int a_Data, char** argv)
        {
            Car* stupid = new Car();
            return 0;
        }

    Seems easy enough, right? Here's how I compile it, which is the same way I compile the rest of my code:

        /home/oem/android-ndk-r3/build/prebuilt/linux-x86/arm-eabi-4.4.0/bin/arm-eabi-g++
            -g -std=c99 -Wall -Werror -O2 -w -shared -fshort-enums
            -I ../../YoghurtGum/src/GLES
            -I ../../YoghurtGum/src
            -I /home/oem/android-ndk-r3/build/platforms/android-5/arch-arm/usr/include
            -c src/Inheritance.cpp -o intermediate/Inheritance.o

    (Line breaks are added for clarity.) This compiles fine. But then we get to the linker:

        /home/oem/android-ndk-r3/build/prebuilt/linux-x86/arm-eabi-4.4.0/bin/arm-eabi-gcc
            -lstdc++
            -Wl, --entry=main,
            -rpath-link=/system/lib,
            -rpath-link=/home/oem/android-ndk-r3/build/platforms/android-5/arch-arm/usr/lib,
            -dynamic-linker=/system/bin/linker,
            -L/home/oem/android-ndk-r3/build/prebuilt/linux-x86/arm-eabi-4.4.0/lib/gcc/arm-eabi/4.4.0,
            -L/home/oem/android-ndk-r3/build/platforms/android-5/arch-arm/usr/lib,
            -rpath=../../YoghurtGum/lib/GLES
            -nostdlib
            -lm -lc -lGLESv1_CM
            -z /home/oem/android-ndk-r3/build/platforms/android-5/arch-arm/usr/lib/crtbegin_dynamic.o
            /home/oem/android-ndk-r3/build/platforms/android-5/arch-arm/usr/lib/crtend_android.o
            intermediate/Inheritance.o
            ../../YoghurtGum/bin/YoghurtGum.a
            -o bin/Galaxians.android

    As you can probably tell, there's a lot of cruft in there that isn't really needed. That's because it doesn't work. It fails with the following errors:

        intermediate/Inheritance.o:(.rodata._ZTI3Car[typeinfo for Car]+0x0): undefined reference to `vtable for __cxxabiv1::__si_class_type_info'
        intermediate/Inheritance.o:(.rodata._ZTI7Vehicle[typeinfo for Vehicle]+0x0): undefined reference to `vtable for __cxxabiv1::__vmi_class_type_info'
        intermediate/Inheritance.o:(.rodata._ZTI4Base[typeinfo for Base]+0x0): undefined reference to `vtable for __cxxabiv1::__class_type_info'
        collect2: ld returned 1 exit status
        make: *** [bin/Galaxians.android] Fout 1

    These are the same errors I get from my actual application. If someone could explain to me where I went wrong in my test, or what option I forgot in my linker, I would be very, extremely grateful. Thanks in advance.

    UPDATE: When I make my destructors non-inlined, I get new and more exciting link errors:

        intermediate/Inheritance.o:(.rodata+0x78): undefined reference to `vtable for __cxxabiv1::__si_class_type_info'
        intermediate/Inheritance.o:(.rodata+0x90): undefined reference to `vtable for __cxxabiv1::__vmi_class_type_info'
        intermediate/Inheritance.o:(.rodata+0xb0): undefined reference to `vtable for __cxxabiv1::__class_type_info'
        collect2: ld returned 1 exit status
        make: *** [bin/Galaxians.android] Fout 1

  • a4j:support within a rich:modalPanel

    - by Andy Deighton
    Hi all, I've hit a wall. I know the a4j and rich tags pretty well (I use Seam 2.2.0 and RichFaces 3.3.1). However, I'm trying to do something quite simple, but in a rich:modalPanel, and it seems that rich:modalPanels do not allow Ajax events to be fired. Here's a simple breakdown: I have an h:selectOneMenu with some items in it, whose value is attached to a backing bean. Attached to that h:selectOneMenu is an a4j:support tag, so that whenever the change event is fired, the backing bean should get updated. Truly simple stuff, eh?

    However, when this h:selectOneMenu is in a rich:modalPanel, the onchange event doesn't update the backing bean until the rich:modalPanel closes. I can confirm this because I'm running it in Eclipse debug mode and I have a breakpoint on the setter of the property that's hooked up to the h:selectOneMenu. This is driving me mad! This is vanilla stuff for Ajax, but rich:modalPanels don't seem to allow it.

    So, the question is: can I do Ajax stuff within a rich:modalPanel? I'm basically trying to use the rich:modalPanel as a form (I've tried a4j:form and h:form to no avail) that reacts to changes to the drop down (e.g. when the user changes the drop down, a certain part of the form should get reRendered). Am I trying to do something that's not possible? Here's a simplified version of the modalPanel:

        <rich:modalPanel id="quickAddPanel">
            <div>
                <a4j:form id="quickAddPaymentForm" ajaxSubmit="true">
                    <s:decorate id="paymentTypeDecorator">
                        <a4j:region>
                            <h:selectOneMenu id="paymentType" required="true" value="#{backingBean.paymentType}" tabindex="1">
                                <s:selectItems label="#{type.description}" noSelectionLabel="Please select..." value="#{incomingPaymentTypes}" var="type"/>
                                <s:convertEnum/>
                                <a4j:support ajaxSingle="true" event="onchange" eventsQueue="paymentQueue" immediate="true" limitToList="true" reRender="paymentTypeDecorator, paymentDetailsOutputPanel, quickAddPaymentForm"/>
                            </h:selectOneMenu>
                        </a4j:region>
                    </s:decorate>
                    </fieldset>
                    <fieldset class="standard-form">
                        <div class="form-title">Payment details</div>
                        <a4j:outputPanel id="paymentDetailsOutputPanel">
                            <h:outputText value="This should change whenever dropdown changes: #{backingBean.paymentType}"/>
                        </a4j:outputPanel>
                    </fieldset>
                </a4j:form>
            </div>
        </rich:modalPanel>

    Regards, Andy

  • If Then Statement Condition Being Ignored With Optimisations On

    - by Matma
    I think I'm going mad, but can someone show me what I'm missing? It must be something stupidly simple; I just can't see the wood for the trees. BOTH sides of this If/Then/Else statement are being executed. I've tried commenting out the true side and moving the condition to a separate variable, with the same result. However, if I explicitly set the condition to 1=0 or 1=1, the If/Then statement operates as I would expect, i.e. only executing one side of the equation...

    The only time I've seen this sort of thing is when the compiler has crashed and is no longer compiling (without visible indication that it's not), but I've restarted Studio with the same results; I've cleaned the solution, built and rebuilt, with no change. Please show me the stupid mistake I'm making. Using VS2005, if it matters.

        Dim dset As DataSet = New DataSet
        If (CboCustomers.SelectedValue IsNot Nothing) AndAlso (CboCustomers.SelectedValue <> "") Then
            Dim Sql As String = "Select sal.SalesOrderNo As SalesOrder,cus.CustomerName,has.SerialNo, convert(varchar,sal.Dateofpurchase,103) as Date from [dbo].[Customer_Table] as cus " & _
                " inner join [dbo].[Hasp_table] as has on has.CustomerID=cus.CustomerTag " & _
                " inner join [dbo].[salesorder_table] as sal On sal.Hasp_ID =has.Hasp_ID Where cus.CustomerTag = '" & CboCustomers.SelectedValue.ToString & "'"
            Dim dap As SqlDataAdapter = New SqlDataAdapter(Sql, FormConnection)
            dap.Fill(dset, "dbo.Customer_Table")
            DGCustomer.DataSource = dset.Tables("dbo.Customer_Table")
        Else
            Dim erm As String = "wtf"
        End If

    EDIT: I have found that this is something to do with the release config settings I'm using; I'm guessing it's the optimisations bit. Does anyone know of any utils/addons for VS that show whether a line has been optimised out? Delphi, my former language, showed blue dots in the left margin for each compiled line; no dot meant the line wasn't compiled in. Is there anything like that for VS? Alternatively, can someone explain how optimisations could affect this simple If statement, causing it to run both sides?

    EDIT 2: Using this thread as possible causes/solutions: http://stackoverflow.com/questions/2135509/bug-only-occurring-when-compile-optimization-enabled. It does the same with release = optimisations on, x86, x64 and AnyCPU. It goes away with optimisations off. I'm using VS2005 on an x64 Win7 machine, if that matters. Thanks

  • UIScrollView Infinite Scrolling

    - by Ben Robinson
    I'm attempting to set up a scrollview with infinite (horizontal) scrolling.

    Scrolling forward is easy - I have implemented scrollViewDidScroll, and when the contentOffset gets near the end I make the scrollview contentSize bigger and add more data into the space (I'll have to deal with the crippling effect this will have later!).

    My problem is scrolling back. The plan is to detect when I get near the beginning of the scroll view; when I do, I make the contentSize bigger, move the existing content along, add the new data to the beginning, and then - importantly - adjust the contentOffset so the data under the viewport stays the same. This works perfectly if I scroll slowly (or enable paging), but if I go fast (not even very fast!) it goes mad! Here's the code:

        - (void)scrollViewDidScroll:(UIScrollView *)scrollView
        {
            float pageNumber = scrollView.contentOffset.x / 320;
            float pageCount = scrollView.contentSize.width / 320;

            if (pageNumber > pageCount - 4) {
                // Add 10 new pages to end
                mainScrollView.contentSize = CGSizeMake(mainScrollView.contentSize.width + 3200, mainScrollView.contentSize.height);
                // add new data here at (320 * pageCount, 0);
            }

            // *** the problem is here - I use updatingScrollingContent to make sure it's only called once (for accurate testing!)
            if (pageNumber < 4 && !updatingScrollingContent) {
                updatingScrollingContent = YES;
                mainScrollView.contentSize = CGSizeMake(mainScrollView.contentSize.width + 3200, mainScrollView.contentSize.height);
                mainScrollView.contentOffset = CGPointMake(mainScrollView.contentOffset.x + 3200, 0);
                for (UIView *view in [mainContainerView subviews]) {
                    view.frame = CGRectMake(view.frame.origin.x + 3200, view.frame.origin.y, view.frame.size.width, view.frame.size.height);
                }
                // add new data here at (0, 0);
            }

            // ** MY CHECK!
            NSLog(@"%f", mainScrollView.contentOffset.x);
        }

    As the scrolling happens, the log reads:

        1286.500000
        1285.500000
        1284.500000
        1283.500000
        1282.500000
        1281.500000
        1280.500000

    Then, when pageNumber < 4 (we're getting near the beginning):

        4479.500000
        4479.500000

    Great! But the numbers should continue to go down in the 4,000s; instead, the next log entries read:

        1278.000000
        1277.000000
        1276.500000
        1275.500000
        etc....

    Continuing from where it left off! Just for the record, if scrolled slowly the log reads:

        1294.500000
        1290.000000
        1284.500000
        1280.500000
        4476.000000
        4476.000000
        4473.000000
        4470.000000
        4467.500000
        4464.000000
        4460.500000
        4457.500000
        etc....

    Any ideas???? Thanks Ben.

  • Setting default values for inherited property without using accessor in Objective-C?

    - by Ben Stock
    I always see people debating whether or not to use a property's setter in the -init method. I don't know enough about the Objective-C language yet to have an opinion one way or the other. With that said, lately I've been sticking to ivars exclusively. It seems cleaner in a way. I don't know. I digress. Anyway, here's my problem…

    Say we have a class called Dude with an interface that looks like this:

        @interface Dude : NSObject
        {
        @private
            NSUInteger _numberOfGirlfriends;
        }

        @property (nonatomic, assign) NSUInteger numberOfGirlfriends;

        @end

    And an implementation that looks like this:

        @implementation Dude

        - (instancetype)init
        {
            self = [super init];
            if (self) {
                _numberOfGirlfriends = 0;
            }
            return self;
        }

        @end

    Now let's say I want to extend Dude. My subclass will be called Playa. And since a playa should have mad girlfriends, when Playa gets initialized, I don't want him to start with 0; I want him to have 10. Here's Playa.m:

        @implementation Playa

        - (instancetype)init
        {
            self = [super init];
            if (self) {
                // Attempting to set the ivar directly will result in the compiler saying,
                // "Instance variable `_numberOfGirlfriends` is private."
                // _numberOfGirlfriends = 10; <- Can't do this.

                // Thus, the only way to set it is with the mutator:
                self.numberOfGirlfriends = 10;
                // Or: [self setNumberOfGirlfriends:10];
            }
            return self;
        }

        @end

    So what's an Objective-C newcomer to do? Well, I mean, there's only one thing I can do, and that's set the property. Unless there's something I'm missing. Any ideas, suggestions, tips, or tricks?

    Sidenote: The other thing that bugs me about setting the ivar directly - and what a lot of ivar-proponents say is a "plus" - is that there are no KVC notifications. A lot of times, I want the KVC magic to happen. 50% of my setters end in [self setNeedsDisplay:YES], so without the notification, my UI doesn't update unless I remember to manually add -setNeedsDisplay. That's a bad example, but the point stands. I utilize KVC all over the place, so without notifications, things can act wonky. Anyway, any info is much appreciated. Thanks!

  • Device drivers and Windows

    - by b-gen-jack-o-neill
    Hi, I am trying to complete the picture of how the PC and the OS interact, and I am at a point where I am a little lost when it comes to device drivers. Please don't write things like "it's too complicated" or "you don't need to know this when using a high-level programming language and WinAPI functions". I want to know; it's for study purposes.

    So, the very basic structure of how the OS and the PC (by PC I mean, of course, the hardware) interact, as I see it, is that everything other than direct CPU commands, which the CPU can execute on its own (arithmetic operations, access to its registers, and memory access), must pass through the OS. Mainly because from ring level 3 you cannot use the in and out instructions, which are used for accessing other hardware. I know there is MMIO, but it must be set up by port communication first.

    It was not always like this. Even though I am a bit young to remember MS-DOS, I know you could access hardware directly, because there was no limitation, no ring mode. So to write a string to the display, you could either use a DOS function or access the video card memory directly and write it yourself. But as OSes developed, this possibility went away. And that is fine, since the OS now handles all the hardware communication; frankly, it is more convenient and much safer (I would say the only option) in a multitasking environment. So nowadays, instead of using int instructions to call BIOS-mapped functions or DOS functions, you call a DLL which internally handles everything you don't need to know about. I understand this.

    I also understand that a device driver is the piece of code that runs at ring level 0, so it can do all the hardware interactions. But what I don't understand is the connection between the OS and the device driver. Let's take an example: I want to make a sound card play a sound. So I call the Windows API to access the sound card, but what happens then? Does Windows call the device driver to do so? But if it does call the device driver, does that mean all device drivers which can be called by a WinAPI function must have routines named in some specific way? I mean, when I get a new sound card, must its driver have functions named the same as the old one's, so that Windows can actually call the same function from its perspective? But if Windows had a predefined set of function names required of device drivers, it could not use new drivers that didn't exist before the last version of the OS came out. Please help me understand this mess. I am really getting mad. Thanks.
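
    As a purely illustrative aside (a minimal C++ sketch, with every type and name made up - this is not the real Windows driver API): the usual answer to the naming question is that an OS defines a fixed table of entry points, and each driver fills that table with pointers to its own functions when it is loaded. The OS then calls through the table, so the driver's internal function names never matter.

        #include <cstddef>

        // Hypothetical dispatch table defined by the OS. Real systems
        // (e.g. the Windows Driver Model) use the same idea with far
        // more elaborate structures.
        struct DriverDispatch {
            int  (*open)(void* device);
            int  (*write)(void* device, const void* buffer, std::size_t length);
            void (*close)(void* device);
        };

        // A made-up sound-card driver provides its own implementations...
        static int  MyCardOpen(void* /*device*/) { /* program the hardware */ return 0; }
        static int  MyCardWrite(void* /*device*/, const void* /*buffer*/, std::size_t length) { return static_cast<int>(length); }
        static void MyCardClose(void* /*device*/) { /* reset the hardware */ }

        // ...and registers them in the table the OS hands it at load time
        // (the entry point's name here is invented for the sketch).
        void DriverEntry(DriverDispatch& table)
        {
            table.open  = &MyCardOpen;
            table.write = &MyCardWrite;
            table.close = &MyCardClose;
        }

    With this shape, a new sound card ships a new driver that fills in the same table differently, and the OS-side call site never changes.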

  • C++ stream as a parameter when overloading operator<<

    - by TheOm3ga
    I'm trying to write my own logging class and use it as a stream:

        logger L;
        L << "whatever" << std::endl;

    This is the code I started with:

        #include <iostream>
        using namespace std;

        class logger{
        public:
            template <typename T>
            friend logger& operator <<(logger& log, const T& value);
        };

        template <typename T>
        logger& operator <<(logger& log, T const & value)
        {
            // Here I'd output the values to a file and stdout, etc.
            cout << value;
            return log;
        }

        int main(int argc, char *argv[])
        {
            logger L;
            L << "hello" << '\n';        // This works
            L << "bye" << "alo" << endl; // This doesn't work
            return 0;
        }

    But I was getting an error when trying to compile, saying that there was no definition for operator<<:

        pruebaLog.cpp:31: error: no match for ‘operator<<’ in ‘operator<< [with T = char [4]](((logger&)((logger*)operator<< [with T = char [4]](((logger&)(& L)), ((const char (&)[4])"bye")))), ((const char (&)[4])"alo")) << std::endl’

    So, I've been trying to overload operator<< to accept this kind of stream, but it's driving me mad. I don't know how to do it. I've been looking at, for instance, the definition of std::endl in the ostream header file, and written a function with this header:

        logger& operator <<(logger& log, const basic_ostream<char,char_traits<char> >& (*s)(basic_ostream<char,char_traits<char> >&))

    But no luck. I've tried the same using templates instead of directly using char, and also tried simply using "const ostream& os", and nothing. Another thing that bugs me is that, in the error output, the first argument for operator<< changes; sometimes it's a reference to a pointer, sometimes it looks like a double reference...
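
    For what it's worth, here is a minimal sketch of the overload that usually resolves this, assuming manipulators should simply be forwarded to std::cout like the other values (and that the logger class from the question is in scope). std::endl is a function template, so it cannot be deduced against the generic const T& parameter; it needs an overload whose parameter type pins down the exact specialization - and note there is no const on the return type, unlike the attempt above:

        #include <iostream>

        // Forward ostream manipulators (std::endl, std::flush, ...) to std::cout.
        logger& operator <<(logger& log, std::ostream& (*manip)(std::ostream&))
        {
            manip(std::cout); // e.g. newline + flush for std::endl
            return log;
        }

    With that in place, L << "bye" << "alo" << endl; picks this overload for the endl token and the template overload for everything else.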

  • Why does Cacti keep waiting for dead poller processes?

    - by Oliver Salzburg
    Sorry for the length. I am currently setting up a new Debian (6.0.5) server. I put Cacti (0.8.7g) on it yesterday and have been battling with it ever since.

    Initial issue

    The initial issue I was observing was that my graphs weren't updating. So I checked my cacti.log and found this concerning message:

        POLLER: Poller[0] Maximum runtime of 298 seconds exceeded. Exiting.

    That can't be good, right? So I went checking and started poller.php myself (via sudo -u www-data php poller.php --force). It will pump out a lot of messages (which all look like what I would expect) and then hang for a minute. After that 1 minute, it will loop the following message:

        Waiting on 1 of 1 pollers.

    This goes on for 4 more minutes until the process is forcefully ended for running longer than 300s.

    So far so good

    I went on for a good hour trying to determine what poller might still be running, until I came to the conclusion that there simply is no running poller.

    Debugging

    I checked poller.php to see how that warning is issued and why. On line 368, Cacti will retrieve the number of finished processes from the database and use that value to calculate how many processes are still running. So, let's see that value! I added the following debug code into poller.php:

        print "Finished: " . $finished_processes . " - Started: " . $started_processes . "\n";

    Result

    This will print the following within seconds of starting poller.php:

        Finished: 0 - Started: 1
        Waiting on 1 of 1 pollers.
        Finished: 1 - Started: 1

    So the values are being read and are valid. Until we get to the part where it keeps looping:

        Finished: - Started: 1
        Waiting on 1 of 1 pollers.

    Suddenly, the value is gone. Why? Putting var_dump() in there confirms the issue:

        NULL
        Finished: - Started: 1
        Waiting on 1 of 1 pollers.

    The return value is NULL. How can that be when querying SELECT COUNT()...? (SELECT COUNT() should always return one result row, shouldn't it?)

    More debugging

    So I went into lib/database.php and had a look at that db_fetch_cell(). A bit of testing confirmed that the result set is actually empty. So I added my own database query code in there to see what that would do:

        $finished_processes = db_fetch_cell("SELECT count(*) FROM poller_time WHERE poller_id=0 AND end_time>'0000-00-00 00:00:00'");
        print "Finished: " . $finished_processes . " - Started: " . $started_processes . "\n";

        $mysqli = new mysqli("localhost","cacti","cacti","cacti");
        $result = $mysqli->query("SELECT COUNT(*) FROM poller_time WHERE poller_id=0 AND end_time>'0000-00-00 00:00:00';");
        $row = $result->fetch_assoc();
        var_dump( $row );

    This will output:

        Finished: - Started: 1
        array(1) {
          ["COUNT(*)"]=>
          string(1) "2"
        }
        Waiting on 1 of 1 pollers.

    So, the data is there and can be accessed without any problems, just not with the method Cacti is using?

    Double-check that!

    I enabled MySQL logging to make sure I'm not imagining things. Sure enough, when the error message is looped, the cacti.log reads as if it was querying like mad:

        06/29/2012 08:44:00 PM - CMDPHP: Poller[0] DEVEL: SQL Cell: "SELECT count(*) FROM cacti.poller_time WHERE poller_id=0 AND end_time>'0000-00-00 00:00:00'"
        06/29/2012 08:44:01 PM - CMDPHP: Poller[0] DEVEL: SQL Cell: "SELECT count(*) FROM cacti.poller_time WHERE poller_id=0 AND end_time>'0000-00-00 00:00:00'"
        06/29/2012 08:44:02 PM - CMDPHP: Poller[0] DEVEL: SQL Cell: "SELECT count(*) FROM cacti.poller_time WHERE poller_id=0 AND end_time>'0000-00-00 00:00:00'"

    But none of these queries are logged by MySQL. Yet, when I add my own database query code, it shows up just fine. What the heck is going on here?

  • Creating a multi-column rollover image gallery with HTML 5

    - by nikolaosk
    I know it has been a while since I blogged about HTML 5. I have two posts in this blog about HTML 5. You can find them here and here. I am creating a small content website (only text, images and a contact form) for a friend of mine. He wanted to create a rollover gallery. The whole concept is that we have some small thumbnails on a page; the user hovers over them and they appear enlarged in a designated container/placeholder on the page. I try to avoid JavaScript when adding effects to a web page, and that is what I will be doing in this post.

    Well, some people will say that HTML 5 is not supported in all browsers. That is true, but most modern browsers support most of its recommendations. For people who still use IE6 some hacks must be devised. Well, to be totally honest, I cannot understand why anyone at this day and time is using IE 6.0. That really is beyond me. The point of having a web browser is to be able to ENJOY the great experience that the WEB offers today.

    Two very nice sites that show you what features and specifications are implemented by various browsers and their versions are http://caniuse.com/ and http://html5test.com/. At this time Chrome seems to support most of the HTML 5 specifications. Another excellent way to find out if a browser supports HTML 5 and CSS 3 features is to use the lightweight JavaScript library Modernizr.

    In this hands-on example I will be using Expression Web 4.0. This application is not a free application, but you can use any HTML editor you like. You can use Visual Studio 2012 Express edition; you can download it here.

    In order to be absolutely clear, this is not (and could not be) a detailed tutorial on HTML 5. There are other great resources for that. Navigate to the excellent interactive tutorials of W3Schools. Another excellent resource is HTML 5 Doctor. For the people who are not convinced yet that they should invest time and resources in becoming experts on HTML 5, I should point out that HTML 5 websites will be ranked higher than others. Search engines will be able to better locate the content of our site and its relevance/importance, since it is using semantic tags.

    Let's move now to the actual hands-on example. In this case (since I am a mad Liverpool supporter) I will create a rollover image gallery of Liverpool F.C. legends.

    I create a folder on my desktop and name it Liverpool Gallery. Then I create two subfolders in it: large-images (I place the large images in there) and thumbs (I place the small images in there). Then I create an empty .html file called LiverpoolLegends.html and an empty .css file called style.css. Please have a look at the HTML markup that I typed in my editor below:

        <!doctype html>
        <html lang="en">
        <head>
        <title>Liverpool Legends Gallery</title>
        <meta charset="utf-8">
        <link rel="stylesheet" type="text/css" href="style.css">
        </head>
        <body>
        <header>
        <h1>A page dedicated to Liverpool Legends</h1>
        <h2>Do hover over the images with the mouse to see the full picture</h2>
        </header>
        <ul id="column1">
        <li><a href="#"><img src="thumbs/john-barnes.jpg" alt=""><img class="large" src="large-images/john-barnes-large.jpg" alt=""></a></li>
        <li><a href="#"><img src="thumbs/ian-rush.jpg" alt=""><img class="large" src="large-images/ian-rush-large.jpg" alt=""></a></li>
        <li><a href="#"><img src="thumbs/graeme-souness.jpg" alt=""><img class="large" src="large-images/graeme-souness-large.jpg" alt=""></a></li>
        </ul>
        <ul id="column2">
        <li><a href="#"><img src="thumbs/steven-gerrard.jpg" alt=""><img class="large" src="large-images/steven-gerrard-large.jpg" alt=""></a></li>
        <li><a href="#"><img src="thumbs/kenny-dalglish.jpg" alt=""><img class="large" src="large-images/kenny-dalglish-large.jpg" alt=""></a></li>
        <li><a href="#"><img src="thumbs/robbie-fowler.jpg" alt=""><img class="large" src="large-images/robbie-fowler-large.jpg" alt=""></a></li>
        </ul>
        <ul id="column3">
        <li><a href="#"><img src="thumbs/alan-hansen.jpg" alt=""><img class="large" src="large-images/alan-hansen-large.jpg" alt=""></a></li>
        <li><a href="#"><img src="thumbs/michael-owen.jpg" alt=""><img class="large" src="large-images/michael-owen-large.jpg" alt=""></a></li>
        </ul>
        </body>
        </html>

    It is very easy to follow the markup. Please have a look at the new doctype and the new semantic tag <header>. I have 3 columns and I place my images in there. There is a class called "large". I will use this class in my CSS code to hide the large image when the mouse is not hovering over an image.

    Make sure you validate your HTML 5 page in the validator found here.

    Have a look at the CSS code below that makes it all happen:

        img { border:none; }

        #column1 { position: absolute; top: 30; left: 100; }
        li { margin: 15px; list-style-type:none; }
        #column1 a img.large { position: absolute; top: 0; left:700px; visibility: hidden; }
        #column1 a:hover { background: white; }
        #column1 a:hover img.large { visibility:visible; }

        #column2 { position: absolute; top: 30; left: 195px; }
        li { margin: 5px; list-style-type:none; }
        #column2 a img.large { position: absolute; top: 0; left:510px; margin-left:0; visibility: hidden; }
        #column2 a:hover { background: white; }
        #column2 a:hover img.large { visibility:visible; }

        #column3 { position: absolute; top: 30; left: 400px; width:108px; }
        li { margin: 5px; list-style-type:none; }
        #column3 a img.large { width: 260px; height:260px; position: absolute; top: 0; left:315px; margin-left:0; visibility: hidden; }
        #column3 a:hover { background: white; }
        #column3 a:hover img.large { visibility:visible; }

    In the first line of the CSS code I set the images to have no border. Then I place the first column on the page and remove the bullets from the list elements. Then I use the large CSS class to create a position for the large image and hide it. Finally, when the hover event takes place, I make the image visible. I repeat the process for the next two columns.

    I have tested the page with IE 10 and the latest versions of Opera, Chrome and Firefox. Feel free to style your HTML 5 gallery any way you want through the magic of CSS. I did not bother adding background colors and borders because that was beyond the scope of this post.

    Hope it helps!!!!

  • TechEd 2010 Day One – How I Travel

    - by BuckWoody
    Normally when I blog on the first day of a conference, well, there hasn't been a first day yet. So I talk about the value of a conference or some other facet. And normally in my (non-conference) blogs, I show you how I have learned to be a data professional - things I've learned how to do over the years. But in all that time, I don't think I've ever talked about a big part of my job - traveling. I've traveled a lot throughout the years, when I've taught, gone to conferences, consulted, and in my current role assisting Microsoft customers with large-scale database system designs. So I'll share a few thoughts about what I do. Keep in mind that I travel for short durations, just a day or so, and sometimes I travel internationally. For those I prepare differently - what I'm talking about here is what I do for a multi-day, same-country trip. Hopefully you find it useful. I'll tag a few other travelers I know to add their thoughts.

    Preparing for Travel

    When I'm notified of a trip, I begin researching the location. I find the flights, hotel and (if I have to) a car to use while I'm away. We have an in-house system we use to book the travel, but when I travel not-for-Microsoft I use Expedia and Kayak to find what I need.

    Traveling on Sunday and Friday is the worst. I have to do it sometimes (like this week) and it's always a bad idea. But you can blunt the impact by booking as early as you can stand it. That means I have to be up super-early, but the flights are normally on time. I stay flexible, and always have a backup plan in case the flights are delayed or canceled.

    For the hotel, I tend to go on the cheaper side, and I look for older hotels that have been renovated, or quirky ones. For instance, in Boise, ID recently I stayed at a 60's-themed (think Mad Men) hotel that was very cool. I always go on the less expensive side - I find the "luxury" hotels nail me for Internet, food, everything. The cheaper places include all kinds of things, and even have breakfasts, shuttles and all kinds of extras that start to add up. I even call ahead to make sure there's an iron and ironing board available, since I'll need those when I get there.

    I find any way I can not to get a car. I use mass transit wherever possible, and try to make friends and pay their gas to take me places. In a pinch, I'll use a taxi. It ends up being cheaper, faster, and less stressful all around.

    Packing

    Over the years I've learned never to check luggage whenever I can. To do that, I lay out everything I want to take with me on the bed, and then try to make sure I'm really going to use it. I wear a dark wool set of pants, which I can clean and wear in hot and cold climates. I bring undies and socks of course, and for most places I have to wear "dress up" shirts. I bring at least two print T-shirts in case I want to dress down for something while I'm gone, but I only bring one set of shoes. All the clothes are rolled as tightly as possible, as I learned in the military. Then I use those to cushion the electronics I take.

    For toiletries I bring a shaver, toothpaste and toothbrush, D/O and a small brush. Everything else the hotel will provide.

    For entertainment, I take a small Zune, a full PC headset (so I can make IP calls on the road) and my laptop. I don't take books or anything else - everything is electronic. I use e-books (downloaded from our library), audio-books (on the Zune) and I also bring along a Kaossilator (more here) to play music in the hotel room or even on the plane without being heard.

    If I can, I pack into one roll-on bag. There's not a lot better than this one, but I also have a bag I was given as a prize for something or other here at Microsoft. Either way, I like something with fewer pockets and more big, open compartments. Everything gets rolled up and packed in, with all of the wires and chargers in small bags my wife made for me. The laptop (and anything I don't want gate-checked) goes on top or in an outside pouch so I can grab it quickly if I have to gate-check the bag. As much as I can, I try to go in one bag. When I can't (like this week) I use this bag, since it can expand, roll up, crush and even be put away later. It's super-heavy canvas and worth the price. This allows me to not check a bag.

    Journey Logistics

    The day of the trip, I have everything ready since I'm getting up early. I pack a few small snacks inside a plastic large-mouth water bottle, which protects the snacks and lets me get water in the terminal. I bring along those little powdered drink mixes to add to the water.

    At the airport, I make a beeline for the power outlets. I charge up my laptop and phone, and download all my e-mails so I can work on them offline in the air. I don't travel as often as I used to - just every month or so now, so I don't have a membership to an airline club. If I travel much more, I'll invest in one again - they are WELL worth the money, for the wifi, food and quiet if for nothing else.

    I print out my logistics on paper and put that in my pocket - flight numbers, hotel addresses and phones for everything. That way if I have to make a change, I don't have to boot up anything or even have power to be able to roll with the punches if things change.

    Working While Away

    While I'm away I realize I'm going to be swamped with things at the conference or with my clients. So I turn on out-of-office notifications to let people know I won't be as responsive, and I keep my Outlook calendar up to date so my co-workers know what I'm up to. I even update it with hotel and phone info in case they really need to reach me. I share my calendar with my wife so my family knows what I'm doing as well.

    I check my e-mail during breaks, but I only respond in the evening or early morning at the hotel. I tweet during conferences. The point is to be as present as possible during the event or when I'm at the clients. Both deserve it.

    So those are my initial thoughts. I'll tag Brent Ozar, Brad McGehee and Paul Randal, and they can tag whomever they wish.

  • Too Many Kittens To Juggle At Once

    - by Bil Simser
    Ahh, the Internet. That crazy, mixed-up place where one tweet turns into a conversation between dozens of people and spawns a blog post. This is the direct result of such an event this morning.

    It started innocently enough with a tweet, followed up by a blog post by Joel here. In the post, Joel introduces us to the term Business Solutions Architect, with mad skillz like InfoPath, Access Services, Excel Services, building workflows, and SSRS report creation, all while meeting the business needs of users in a SharePoint environment. I somewhat disagreed with Joel: this really isn't a new role (at least IMHO), and a good architect or BA should really be doing this job. As Joel pointed out, when you're building a SharePoint team this kind of role is often overlooked. Engineers might be able to build workflows, but is it the right workflow for the right problem? Michael Pisarek wrote about a SharePoint Business Architect a few months ago and it's a pretty solid assessment. Again, I argue you really shouldn't be looking for roles that don't exist, and I don't suggest anyone create roles just to hire people to fill them. That's basically creating a solution looking for problems. Michael's article does have some great points if you're lost in the quagmire of SharePoint duties, though (and I especially like John Ross' quote "The coolest shit is worthless if it doesn't meet business needs"). SharePoinTony summed it up nicely with "SharePoint Solutions knowledge is both lacking and underrated in most environments. Roles help".

    Having someone on the team who can dance between a business user and a coder can be difficult. Remember the game of telling something to someone and having them pass it on to the next person. By the time the story comes round the circle it's a shadow of its former self, with little resemblance to the original tale. This is very much business requirements as they're told by the user to a business analyst, written down on paper, read by an architect, turned into a solution plan, and implemented by a developer. What was said, what was heard, what was written down, and what was developed can be distant cousins.

    Not everyone has the skill of communication, and even fewer have negotiation skills to suit the SharePoint platform. Negotiation is important because not everything can be (or should be) done in SharePoint. Sometimes it's just not appropriate to build it on the SharePoint platform, but someone needs to know enough about the platform and what limitations it might have, then communicate that (and/or negotiate) with a customer or user so it's not "You can't have this" but "Let's try it this way". Visualize the possible instead of denying the impossible.

    So what is the right SharePoint team? My cromag brain came up with a fairly simpleton answer (and I'm sure people will just say this is a cop-out): the perfect SharePoint team is just enough people to do the job who know the technology and the business problem they're solving. Bridge the gap between business need and technology platform and you have an architect. Communicate the needs of the business effectively so the entire team understands them and you have a business analyst. Can you get this with full-time workers? Maybe, but don't expect miracles out of the gate. Also don't take a consultant's word as gospel. Some consultants just don't have the diversity of the SharePoint platform to be worth their value, so be careful. You really need someone who knows enough about SharePoint to be able to validate a consultant's knowledge level. This is basically true for any consultant, not just a SharePoint one. Specialization is good and needed.

    A good, well-balanced SharePoint team is one of people who can solve problems working with the technology, not against it. Having a top developer is great, but don't rely on them to solve world hunger if they can't communicate very well with users. An expert business analyst might be great at gathering requirements so the entire team can understand them, but if it means building 100% custom solutions because they don't fit inside the SharePoint boundaries, that isn't of much value. Just repeat: There is no silver bullet. There is no silver bullet. There is no silver bullet.

    A few people pointed out Nick Inglis' article Excluding The Information Professional In SharePoint. It's a good read too, and hits home that maybe some developers and IT pros need some extra help in the information space. If you're in an organization that needs labels on people, come up with something everyone understands and go with it. If that's Business Solutions Architect, SharePoint Advisor, or Guy Who Knows A Lot About Portals, make it work for you.

    We all wish that one person could master all that is SharePoint, but we also know that doesn't scale very well, and you quickly get into the hit-by-a-bus syndrome (with the organization coming to a full crawl when the guy or girl goes on vacation, gets sick, or pops out a baby). There are too many gaps in SharePoint knowledge to have any one person know it all, and too many kittens to juggle all at once. We like to consider ourselves experts in our field, but try to tackle too many roles at once and we end up being mediocre jacks of all trades, masters of none. Don't fall into this pit. It's a deep, dark hole you don't want to try to claw your way out of. Trust me. Been there. Done that. Got the t-shirt.

    In the end I don't disagree with Joel. SharePoint is a beast and not something that should be taken on by newbies. If you just read "Teach Yourself SharePoint in 24 Hours" and want to go build your corporate intranet or the next killer business solution with all your new-found knowledge, plan to pony up consultant dollars a few months later when everything goes to Hell in a handbasket and falls over. I'm not saying don't build solutions in SharePoint. I'm just saying that building effective ones takes skill, like any craft, and isn't something you can just cobble together with a little bit of cursory knowledge. Thanks to *everyone* who participated in this tweet rush. It was fun and educational.

  • 24+ Coda Alternatives for Windows and Linux

    - by Matt
    Coda plays an important role in designing layouts on Mac, but there are numerous Coda alternatives for Windows and Linux too. It is not possible to describe every one of them, so some of the alternatives that work on both Windows and Linux are discussed below.

    EditPlus - $35.00
    A good thing about EditPlus is that it highlights URLs and email addresses, activating them when you Ctrl + double-click. It also has a built-in browser for previewing HTML, and FTP and SFTP support. It also supports macros and regex find and replace.

    UltraEdit - $49.99
    Another good Coda alternative for Windows and Linux. It is a strong editor for text, HTML and hex, and also serves as an advanced PHP, Perl, Java and JavaScript editor for programmers. It supports disk-based 64-bit or standard file handling on 32-bit Windows platforms (Windows 2000 and later).

    HippoEDIT - $39.95
    HippoEDIT has excellent autocomplete: it pops a tooltip above your cursor as you type, suggesting words you've already typed. It does syntax highlighting for over two dozen languages.

    Sublime Text - $59.00
    Sublime Text's awesome 'zoomed out' view of the file lets you focus on the area you want. It lets you open a local file when you right-click on its link, and there are a few automation features, so it makes a solid choice of text editor.

    TextPad - $24.70
    TextPad is a simple editor with nifty features such as column select, drag-and-drop text between files, and hyperlink support. It also supports large files.

    Aptana - Free
    Aptana Studio is one of the best editors working on both Windows and Linux. It is a complete web development environment that blends powerful authoring tools with a collection of online hosting and collaboration services. It is quite helpful as it supports PHP, CSS, FTP, and more.

    SciTE - Free
    A SCIntilla-based text editor that has gradually developed into a generally useful editor. It provides for building and running programs, and is best suited to jobs with simple configurations. SciTE is currently available for Intel Win32 and Linux-compatible operating systems with GTK+. It has been run on Windows XP and on Fedora 8 and Ubuntu 7.10 with GTK+ 2.12.

    E Text Editor - $34.96
    E Text Editor is a newer text editor for Windows which also works on Linux. It has powerful editing features and some unique abilities. It makes text manipulation fast and easy, and lets the user focus on writing, since it automatically does all the manual work. It can be extended in any language, and it supports TextMate bundles, allowing the user to tap into a huge and active community.

    Editra - Free
    Editra is an up-and-coming editor with some fantastic features such as user profiles, auto-completion, session saving, and syntax highlighting for 60+ languages. Plugins can extend the feature set, offering an integrated Python console, FTP client, file browser, and calculator, among others.

    PSPad - Free
    PSPad is good for writing CSS, bringing an internal web browser and a macro recorder to the table. It also supports hex editing and some degree of code compiling.

    jEdit - Free
    A mature programmer's text editor that has taken a good deal of time to develop into what it is today. Thanks to its features and simplicity of use, it is better than many costlier development tools. It is released as free software with full source code, provided under the terms of the GPL 2.0, which adds to its attractiveness.

    NEdit - Free
    A multi-purpose text editor for the X Window System, which also works on Linux. It combines a standard, easy-to-use graphical user interface with the full functionality and stability required by users who edit text for long periods a day. It provides thorough support for development in various languages, facilitates the use of text processors and other tools at the same time, and can be used productively by anyone who needs to edit text. It is quite a user-friendly tool. Its salient features include syntax highlighting with built-in patterns, auto indent, tab emulation, block indentation adjustment, etc. As of version 5.1, NEdit may be freely distributed under the terms of the GNU General Public License.

    MadEdit - Free
    An open-source, cross-platform text/hex editor written in C++ and wxWidgets. MadEdit can edit files in text, column and hex modes, and supports many useful functions such as syntax highlighting, word wrap, and UTF8/16/32 encodings, among others. It also supports word count, which makes it quite a useful text editor for both Windows and Linux. It was last updated on 10/09/2010.

    KompoZer - Free
    KompoZer is a complete web authoring system that combines web file management with easy-to-use WYSIWYG web page editing. KompoZer is designed to be extremely easy to use, making it an ideal tool for non-technical computer users who want to create an attractive, professional-looking web site without knowing HTML or web coding. It is based on the NVU source code.

    Vim - Free
    Vim, or "Vi IMproved", is an advanced text editor. Its salient features are syntax highlighting, word completion, and a huge amount of contributed content. Vim offers several "modes" for editing, which adds to editing efficiency; this makes it less friendly for new users, but it is also a strength. The normal mode binds alphanumeric keys to task-oriented commands, the visual mode highlights text, and more tools for search & replace, defining functions, etc. are offered through command-line mode. Vim comes with complete help.

    Notepad++ - Free
    One of the best free text editors for Windows out there. With support for simple things - like syntax highlighting and folding - all the way up to FTP, Notepad++ should tick most of the boxes.

    Notepad2 - Free
    Notepad2 is also based on the Scintilla editing engine, but it's much simpler than Notepad++. It bills itself as being fast, light-weight, and Notepad-like.

    Crimson Editor - Free
    Crimson Editor has the ability to edit remote files using a built-in FTP client; there's also a spell checker.

    TotalEdit - Free
    TotalEdit allows file comparison and regex search and replace, and has multiple options for file backup/versioning. For cleanup, it offers customizable (X)HTML and XML formatting, and a spell checker.

    In-Type - Free

    ConTEXT - Free

    SourceEdit - Free
    SourceEdit includes features such as clipboard history, and syntax highlighting and autocompletion for a decent set of languages, plus a hex editor and FTP client.

    RJ TextEd - Free
    RJ TextEd supports integration with TopStyle Lite and provides HTML validation and formatting. It includes an FTP client, a file browser, and a code browser, as well as a character map and support for email.

    gedit - Free
    One of the best Coda alternatives for Windows and Linux, with syntax highlighting, and well suited for programming. It has many attractive features such as full support for UTF-8, undo/redo, clipboard support, search and replace, and configurable syntax highlighting for various languages, and it is extensible with plugins.

    Other important Coda alternatives for Windows and Linux are Redcar, Bluefish Editor, NVU, RubyMine, SlickEdit, Geany, Editra, txt2html and CSSED. There are many more; it is up to the user to decide which one best suits his requirements.

  • In Case You Weren’t There: Blogwell NYC

    - by Mike Stiles
    0 0 1 1009 5755 Vitrue 47 13 6751 14.0 Normal 0 false false false EN-US JA X-NONE /* Style Definitions */ table.MsoNormalTable {mso-style-name:"Table Normal"; mso-tstyle-rowband-size:0; mso-tstyle-colband-size:0; mso-style-noshow:yes; mso-style-priority:99; mso-style-parent:""; mso-padding-alt:0in 5.4pt 0in 5.4pt; mso-para-margin:0in; mso-para-margin-bottom:.0001pt; mso-pagination:widow-orphan; font-size:12.0pt; font-family:"Times New Roman";} Your roving reporter roved out to another one of Socialmedia.org’s fantastic Blogwell events, this time in NYC. As Central Park and incredible weather beckoned, some of the biggest brand names in the world gathered to talk about how they’re incorporating social into marketing and CRM, as well as extending social across their entire organizations internally. Below we present a collection of the live tweets from many of the key sessions GE @generalelectricJon Lombardo, Leader of Social Media COE How GE builds and extends emotional connections with consumers around health and reaps the benefits of increased brand equity in the process. GE has a social platform around Healthyimagination to create better health for people. If you and a friend are trying to get healthy together, you’ll do better. Health is inherently. Get health challenges via Facebook and share with friends to achieve goals together. They’re creating an emotional connection around the health context. You don’t influence people at large. Your sphere of real influence is around 5-10 people. They find relevant conversations about health on Twitter and engage sounding like a friend, not a brand. Why would people share on behalf of a brand? Because you tapped into an activity and emotion they’re already having. To create better habits in health, GE gave away inexpensive, relevant gifts related to their goals. Create the context, give the relevant gift, get social acknowledgment for giving it. What you get when you get acknowledgment for your engagement and gift is user generated microcontent. GE got 12,000 unique users engaged and 1400 organic posts with the healthy gift campaign. The Dow Chemical Company @DowChemicalAbby Klanecky, Director of Digital & Social Media Learn how Dow Chemical is finding, training, and empowering their scientists to be their storytellers in social media. There are 1m jobs coming open in science. Only 200k are qualified for them. Dow Chemical wanted to use social to attract and talk to scientists. Dow Chemical decided to use real scientists as their storytellers. Scientists are incredibly passionate, the key ingredient of a great storyteller. Step 1 was getting scientists to focus on a few platforms, blog, Twitter, LinkedIn. Dow Chemical social flow is Core Digital Team - #CMs – ambassadors – advocates. The scientists were trained in social etiquette via practice scenarios. It’s not just about sales. It’s about growing influence and the business. Dow Chemical trained about 100 scientists, 55 are active and there’s a waiting list for the next sessions. In person social training produced faster results and better participation. Sometimes you have to tell pieces of the story instead of selling your execs on the whole vision. Social Media Ethics Briefing: Staying Out of TroubleAndy Sernovitz, CEO @SocialMediaOrg How do we get people to share our message for us? We have to have their trust. The difference between being honest and being sleazy is disclosure. Disclosure does not hurt the effectiveness of your marketing. 
No one will get mad if you tell them up front you're a paid spokesperson for a company. It's a legal requirement by the FTC, it's the law, to disclose if you're being paid for an endorsement. Require disclosure and truthfulness in all your social media outreach. Don't lie to people. Monitor the conversation and correct misstatements. Create social media policies and training programs. If you want to stay safe, never pay cash for social media. Money changes everything; as soon as you pay, it's not social media, it's advertising. Disclosure, to the feds, means clear, conspicuous, and understandable to the average reader. This phrase will keep you in the clear: "I work for ___ and this is my personal opinion." Who are you? Were you paid? Are you giving an honest opinion based on a real experience? You as a brand are responsible for what an agency or employee or contractor does on your behalf. SocialMedia.org makes available a Disclosure Best Practices Toolkit: Socialmedia.org/disclosure. The point is to not ethically mess up and taint social media as happened to e-mail. Not only is the FTC cracking down, so are Google and Facebook.

Visa (@VisaNews), Lucas Mast, Senior Business Leader, Global Corporate Social Media:

Visa built a mobile studio for the Olympics for execs and athletes. They wanted to do postcard-style real-time coverage of Visa's Olympics sponsorships, and on a shoestring. Challenges included Olympic rules, difficulty getting interviews, time zone trouble, and resourcing. Another problem was they got bogged down with their own internal approval processes. Despite all the restrictions, they created and published a variety of, and a fair amount of, content. They amassed 1000+ views of videos posted to the Visa Communication YouTube channel. Less corporate content yields more interest from media outlets and bloggers. They did real-world video demos of how their products work in the field vs. an exec doing a demo in a studio. Don't make exec interview videos dull and corporate: keep answers short, shoot it in an interesting place, do takes until they're comfortable and natural. Not everything will work. Not everything will get a retweet. But like the lottery, you can't win if you don't play. Promoting content is as important as creating it.

McGraw-Hill Companies (@McGrawHillCos), Patrick Durando, Senior Director of Global New Media:

McGraw-Hill has 26,000 employees. McGraw-Hill created a social intranet called Buzz. Intranets create operational efficiency, help product dev, facilitate crowdsourcing, and break down geo silos. Intranets help with talent development, acquisition, retention. They replaced the corporate directory with their own version of LinkedIn. The company intranet has really cut down on the use of email. Long email threads become organized, permanent social discussions. The intranet is particularly useful in HR for researching and getting answers surrounding benefits and policies. Using a profile on your company intranet can establish and promote your internal professional brand. If you're going to make an intranet, it has to look great, work great, and employees are going to have to want to go there. You can't order them to like it.

    Read the article

  • Microsoft and the open source community

    - by Charles Young
For the last decade, I have repeatedly, in my inimitable Microsoft fanboy style, offered an alternative view to commonly held beliefs about Microsoft's stance on open source licensing. In earlier times, leading figures in Microsoft were very vocal in resisting the idea that commercial licensing is outmoded or morally reprehensible. Many people interpreted this as all-out corporate opposition to open source licensing. I never read it that way. It is true that I've met individual employees of Microsoft who are antagonistic towards FOSS (free and open source software), but I've met more who are supportive or at least neutral on the subject. In any case, individual attitudes of employees don't necessarily reflect a corporate stance. The strongest opposition I've encountered has actually come from outside the company. It's not a charitable thought, but I sometimes wonder if there are people in the .NET community who are opposed to FOSS simply because they believe, erroneously, that Microsoft is opposed. Here, for what it is worth, are the points I've repeated endlessly over the years and which have often been received with quizzical scepticism. a) A decade ago, Microsoft's big problem was not FOSS per se, or even copyleft. The thing which really kept them awake at night was the fear that one day, someone might find, deep in the heart of the Windows code base, some code that should not be there and which was published under GPL. The likelihood of this ever happening has long since faded away, but there was a time when MS was running scared. I suspect this is why they held out for a while from making Windows source code open to inspection. Nowadays, as an MVP, I am positively encouraged to ask to see Windows source. b) Microsoft has never opposed the open source community. They have had problems with specific people and organisations in the FOSS community. Back in the 1990s, Richard Stallman gave time and energy to a successful campaign to launch antitrust proceedings against Microsoft. In more recent times, the negative attitude of certain people to Microsoft's submission of two FOSS licences to the OSI (both of which have long since been accepted), and the mad scramble to try to find any argument, however tenuous, to block their submission was not, let us say, edifying. c) Microsoft has never, to my knowledge, written off the FOSS model. They certainly don't agree that more traditional forms of licensing are inappropriate or immoral, and they've always been prepared to say so. One reason why it was so hard to convince people that Microsoft is not rabidly antagonistic towards FOSS licensing is that so many people think they have no involvement in open source. A decade ago, there was virtually no evidence of any such involvement. However, that was a long time ago. Quietly over the years, Microsoft has got on with the job of working out how to make use of FOSS licensing and how to support the FOSS community. For example, as well as making increasingly extensive use of Github, they run an important FOSS forge (CodePlex) on which they, themselves, host many hundreds of distinct projects. The total count may even be in the thousands now. I suspect there is a limit of about 500 records on CodePlex searches because, for the past few years, whenever I search for Microsoft-specific projects on CodePlex, I always get approx. 500 hits.
Admittedly, a large volume of the stuff they publish under FOSS licences amounts to code samples, but many of those 'samples' have grown into useful and fully featured frameworks, libraries and tools. All this is leading up to the observation that yesterday's announcement by Scott Guthrie marks a significant milestone and should not go unnoticed.  If you missed it, let me summarise.   From the first release of .NET, Microsoft has offered a web development framework called ASP.NET.  The core libraries are included in the .NET framework which is released free of charge, but which is not open source.   However, in recent years, the number of libraries that constitute ASP.NET have grown considerably.  Today, most professional ASP.NET web development exploits the ASP.NET MVC framework.  This, together with several other important parts of the ASP.NET technology stack, is released on CodePlex under the Apache 2.0 licence.   Hence, today, a huge swathe of web development on the .NET/Azure platform relies four-square on the use of FOSS frameworks and libraries. Yesterday, Scott Guthrie announced the next stage of ASP.NET's journey towards FOSS nirvana.  This involves extending ASP.NET's FOSS stack to include Web API and the MVC Razor view engine which is rapidly becoming the de facto 'standard' for building web pages in ASP.NET.  However, perhaps the more important announcement is that the ASP.NET team will now accept and review contributions from the community.  Scott points out that this model is already in place elsewhere in Microsoft, and specifically draws attention to development of the Windows Azure SDKs.  These SDKs are central to Azure development.   The .NET and Java SDKs are published under Apache 2.0 on Github and Microsoft is open to community contributions.  Accepting contributions is a more profound move than simply releasing code under FOSS licensing.  It means that Microsoft is wholeheartedly moving towards a full-blooded open source approach for future evolution of some of their central and most widely used .NET and Azure frameworks and libraries.  In conjunction with Scott's announcement, Microsoft has also released Git support for CodePlex (at long last!) and, perhaps more importantly, announced significant new investment in their own FOSS forge. Here at Solidsoft we have several reasons to be very interested in Scott's announcement. I'll draw attention to one of them.  Earlier this year we wrote the initial version of a new UK Government web application called CloudStore.  CloudStore provides a way for local and central government to discover and purchase applications and services. We wrote the web site using ASP.NET MVC which is FOSS.  However, this point has been lost on the ladies and gentlemen of the press and, I suspect, on some of the decision makers on the government side.  They announced a few weeks ago that future versions of CloudStore will move to a FOSS framework, clearly oblivious of the fact that it is already built on a FOSS framework.  We are, it is fair to say, mildly irked by the uninformed and badly out-of-date assumption that “if it is Microsoft, it can't be FOSS”.  Old prejudices live on.

    Read the article

  • It was a figure of speech!

    - by Ratman21
Yesterday I posted the following as an attention getter / advertisement (as well as my feelings) in the groups I am in on the social networking site LinkedIn, and boy did I get responses.

I am fighting mad (a figure of speech, really) about not having a job! Look, just because I am over 55 and have gray hair, it does not mean my brain is dead, or that I can no longer troubleshoot a router or circuit or LAN issue, or that I can't do "IT" work at all. And I could prove this if someone would give me a job. Come on, try me for 90 days at min. wage. I know you will end up keeping me (hopefully at normal pay) around. Is anyone hearing me… come on, take up the challenge!

These were the responses I got.

I hear you. We just need to retrain and get our skills up to speed is all. That is what I am doing. I have not given up. Just got to stay on top of the game. Experience is on our side if we have the credentials, and if we are reasonable about our salaries this should not be an issue.

Already on it; I am going back to school and have got three certifications (CompTIA A+, Security+ and Network+). I am now studying for my CISCO CCNA certification. As to my salary, I am willing to work at a very reasonable rate.

You need to re-brand yourself like a product, market and sell yourself. You need to smarten up, look and feel a million dollars, re-energize yourself, regain your confidence. Either start your own business, or re-write your CV so it stands out from the rest; get the template off the internet. Contact every recruitment agent in your town, state, country and overseas, and on the web. Apply to every job you think you could do; you may not get it, but you will make a contact for your network, which may lead to a job at the end of the tunnel. Get in touch with everyone you know from past jobs. Do charity work.

I maintain the IT network, stage electrical, and the telecom equipment in my church. Again, already on it. I have emailed the world, it seems, with my resume and cover letters. I have rewritten my resume and cover letters, or had them rewritten, over seven times so far. Re-energize? I never lost my energy level or my self-confidence in my work (now if I could get some HR personnel to see the same). I also volunteer at my church; I created and maintain the church web site.

I share your frustration. Sucks being over 50 and looking for work. Please don't sell yourself short at min wage because the employer will think that's your worth. Keep trying!!

I never stop trying, and min wage is only for 90 days, if someone takes up the challenge. Some posts asked if I am keeping up with technology.

Do you keep up with the latest technology and can speak the language fluidly?

Yep to that, and as to speaking it, also a yep! I am a geek, you know. I heard from others over the 50-year mark, and younger too.

I'm with you! I keep getting told that I don't have enough experience because I just recently completed a Masters-level course in Microsoft SQL Server, which gave me a project-intensive equivalent of between 2 and 3 years of experience. On top of that training, I have 19 years as an applications programmer and database administrator. I can normalize rings around experienced DBAs and churn out effective code with the best of them. But my 19 years is worthless as far as most recruiters and HR people are concerned because it is not the specific experience for which they're looking. HR AND RECRUITERS TAKE NOTE: Experience, whatever the language, translates across platforms and technology!
By the way, I'm also over 55 and still have "got it"! I never lost it, and I can also work rings round younger techs.

I'm 52 and female and seem to be having the same issues. I have over 10 years of experience in tech support (with a BS in CIS) and can't get hired either.

Ow, I only have an AS in computer science, along with my certifications.

Keep the faith. I have been unemployed since August of 2008. I agree with you... I am willing to return to the beginning of my retail career and work myself back through the ranks, if someone will look past the grey and realize the knowledge I would bring to the table.

I also would like someone to look past the gray.

Interesting approach, volunteering to work for minimum wage for 90 days. I'm in the same situation as you, being 55 & balding w/white hair, so I know where you're coming from. I've been out of work now for a year. I'm in Michigan, where the unemployment rate is estimated to be 15% (the worst in the nation) & even though I've got 30+ years of IT experience ranging from mainframe to PC desktop support, it's difficult to even get a face-to-face interview. I had one prospective employer tell me flat out that I "didn't have the energy required for this position". Mostly I never get any feedback. All I can say is good luck & try to remain optimistic.

He said WHAT! Yes, remaining optimistic is key, along with faith in God. Then there was this (for lack of a better word) jerk.

Give it up already. You were too old to work in high tech 10 years ago. Scratch that, 20 years ago! Try selling hot dogs in front of Fry's Electronics. At least you would get a chance to eat lunch with your previous colleagues....

You know, the funny thing about this person is that I checked out his profile. He is older than I am.

    Read the article

  • Silverlight IConvertible TypeConverter

    - by codingbloke
I recently answered the following question on stackoverflow: Silverlight 3 custom control: only 'int' as numeric type for a property? [e.g. long or int64 seems to break] I quickly knocked up the class ConvertibleTypeConverter<T> that I posted in the question (listed later here as well). Afterward I fully expected to find that one of the usual clever "bods who blog" had covered this, probably with a better solution than mine. So far, though, I've not found one, so I thought I'd blog it myself.

The Problem

Here is a classic gotcha I've seen asked more than once on stackoverflow:-

public class MyClass
{
    public float SomeValue { get; set; }
}

<local:MyClass SomeValue="45.15" />

This fails with the error "Failed to create a 'System.Single' from the text '45.15'" and results in much premature hair loss. Fortunately this is SL4; in SL3 the error message is almost meaningless. So what gives? How can it be that this fails when we can see other very similar values parsing happily all over the place? It comes down to the fact that the Xaml parser only handles a few of the primitive data types, namely: bool, int, string and double. Since the parser has no idea how to convert a string to a float, we get the above error.

The Solution

The sensible solution is "use double not float", but let's not dwell on that; there have to be occasions where such an answer isn't acceptable. In order to achieve parsing of other types we need an implementation of TypeConverter for the type of the property, and then we need to use the TypeConverterAttribute to decorate the property. As an example, the Silverlight SDK provides one for DateTime, the DateTimeTypeConverter (yes, I know DateTime isn't really a primitive). The following class will parse in Xaml:-

public class MyClass
{
    [TypeConverter(typeof(DateTimeTypeConverter))]
    public DateTime SomeValue { get; set; }
}

So far, though, we would need to create a TypeConverter for each primitive type we are using. What if I had the following mad class to support in Xaml:-

public class StrangePrimitives
{
    public Boolean BooleanProp { get; set; }
    public Byte ByteProp { get; set; }
    public Char CharProp { get; set; }
    public DateTime DateTimeProp { get; set; }
    public Decimal DecimalProp { get; set; }
    public Double DoubleProp { get; set; }
    public Int16 Int16Prop { get; set; }
    public Int32 Int32Prop { get; set; }
    public Int64 Int64Prop { get; set; }
    public SByte SByteProp { get; set; }
    public Single SingleProp { get; set; }
    public String StringProp { get; set; }
    public UInt16 UInt16Prop { get; set; }
    public UInt32 UInt32Prop { get; set; }
    public UInt64 UInt64Prop { get; set; }
}

Then I want to fill an instance of StrangePrimitives with the following Xaml, which of course fails.

<local:StrangePrimitives x:Key="MyStrangePrimitives"
                         BooleanProp="True"
                         ByteProp="156"
                         CharProp="A"
                         DateTimeProp="06 Jun 2010"
                         DecimalProp="123.56"
                         DoubleProp="8372.937803"
                         Int16Prop="16532"
                         Int32Prop="73738248"
                         Int64Prop="12345678909298"
                         SByteProp="-123"
                         SingleProp="39.0"
                         StringProp="Hello, World!"
                         UInt16Prop="40000"
                         UInt32Prop="4294967295"
                         UInt64Prop="18446744073709551615" />

I got to thinking, though: one thing all these primitive types have in common is that they all implement IConvertible, so it should be possible to write just one converter to handle them all. Here it is:-

The ConvertibleTypeConverter

public class ConvertibleTypeConverter<T> : TypeConverter where T : IConvertible
{
    public override bool CanConvertFrom(ITypeDescriptorContext context, Type sourceType)
    {
        return sourceType.GetInterface("IConvertible", false) != null;
    }
    public override bool CanConvertTo(ITypeDescriptorContext context, Type destinationType)
    {
        return destinationType.GetInterface("IConvertible", false) != null;
    }
    public override object ConvertFrom(ITypeDescriptorContext context, System.Globalization.CultureInfo culture, object value)
    {
        return ((IConvertible)value).ToType(typeof(T), culture);
    }
    public override object ConvertTo(ITypeDescriptorContext context, System.Globalization.CultureInfo culture, object value, Type destinationType)
    {
        return ((IConvertible)value).ToType(destinationType, culture);
    }
}

I won't bore you with an explanation of how it works; it simply adapts one existing interface (the IConvertible) and exposes it as another (the TypeConverter). With that in place, the previous strange primitives class can be modified as:-

public class StrangePrimitives
{
    public Boolean BooleanProp { get; set; }
    [TypeConverter(typeof(ConvertibleTypeConverter<Byte>))]
    public Byte ByteProp { get; set; }
    [TypeConverter(typeof(ConvertibleTypeConverter<Char>))]
    public Char CharProp { get; set; }
    [TypeConverter(typeof(ConvertibleTypeConverter<DateTime>))]
    public DateTime DateTimeProp { get; set; }
    [TypeConverter(typeof(ConvertibleTypeConverter<Decimal>))]
    public Decimal DecimalProp { get; set; }
    public Double DoubleProp { get; set; }
    [TypeConverter(typeof(ConvertibleTypeConverter<Int16>))]
    public Int16 Int16Prop { get; set; }
    public Int32 Int32Prop { get; set; }
    [TypeConverter(typeof(ConvertibleTypeConverter<Int64>))]
    public Int64 Int64Prop { get; set; }
    [TypeConverter(typeof(ConvertibleTypeConverter<SByte>))]
    public SByte SByteProp { get; set; }
    [TypeConverter(typeof(ConvertibleTypeConverter<Single>))]
    public Single SingleProp { get; set; }
    public String StringProp { get; set; }
    [TypeConverter(typeof(ConvertibleTypeConverter<UInt16>))]
    public UInt16 UInt16Prop { get; set; }
    [TypeConverter(typeof(ConvertibleTypeConverter<UInt32>))]
    public UInt32 UInt32Prop { get; set; }
    [TypeConverter(typeof(ConvertibleTypeConverter<UInt64>))]
    public UInt64 UInt64Prop { get; set; }
}

This results in the previous Xaml parsing happily. Now it seems such an obvious thing to do that one may wonder why such a class doesn't already exist in Silverlight, or at least in the SDK. I would not be surprised if there were some very good reasons, hence use the ConvertibleTypeConverter with caution. It does seem to me to be a useful little class to have lying around in the toolbox for the odd occasion where it may be needed.

    Read the article

  • OS8- AK8- The bad news...

    - by Steve Tunstall
    Ok I told you I would give you the bad news of AK8 to go along with all the cool new stuff, so here it is. It's not that bad, really, just things you need to be aware of. First, the 2013.1 code is being called OS8, AK8 and 2013.1 by different people. I mean different people INSIDE Oracle!! It was supposed to be easy, but it never is. So for the rest of this blog entry, I'm calling it AK8. AK8 is not compatible with the 7x10 series. Ever. The 7x10 series is not supported with AK8, and if you try to upgrade one, it will fail at the healthcheck. All 7x20 series, all of them regardless of age, are supported with AK8. Drive trays. Let's talk about drive trays and SAS cards. The older drive trays for the 7x20 series were called the "Riverwalk 2" or "DS2" trays. They were technically the "J4410" series JBODs that Sun used to sell a la carte before we stopped selling JBODs. Don't get me started on that, it still makes me mad. We used these for many years, and you can still buy them right now until December 15th, 2013, when they will no longer be sold. The DS2 tray only came as a 4u, 24 drive shelf. It held 3.5" drives, and you had a choice of 2TB, 3TB, 300GB or 600GB drives. The SAS HBA in the 7x20 series was called a "Thebe" card, with a part # of 7105394. The 7420, for example, came standard with two of these "Thebe" cards for connecting to the disk trays. Two Thebe cards could handle up to 12 trays, so one would add two more cards to go to 24 trays, or have up to six Thebe cards to handle 36 trays. This card was for external SAS only. It did not connect to the internal OS drives or the Readzillas, both of which used the internal SCSI controller of the server. These Riverwalk 2 trays ARE supported with AK8. You can upgrade your older 7420 or 7320, no problem, as-is. The much older Riverwalk 1 trays or J4400 trays are NOT supported by AK8. However, they were only used by the 7x10 series, and we already said that the 7x10 series was not supported. Here's where it gets tricky. Since last January, we have been selling the new style disk trays. We call them the "DE2-24P" and the "DE2-24C" trays. The "C" tray is for capacity drives, which are 3.5" 3TB or 4TB drives. The "P" trays are for performance drives, which are 2.5" 300GB and 900GB drives. These trays are NOT Riverwalk 2 trays, even though the "C" series may kind of look like it. Different manufacturer and different firmware. They are not new. Like I said, we've been selling them with the 7x20 series since last January. They are the only disk trays we will be selling going forward. Of course, AK8 supports them. So what's the problem? The problem is going to be for people who have to mix drive trays. Remember, your older 7x20 series has Thebe SAS2 HBAs. These have 2 SAS ports per card.  The new ZS3-2 and ZS3-4 systems, however, have the new "Thebe2" SAS2 HBAs. These Thebe2 cards have 4 ports per card. This is very cool, as we can now do more SAS channels with less cards. Instead of needing 4 SAS cards to grow to 24 trays like we did with the old Thebe cards, I can now do 24 trays with only 2 Thebe2 cards. This means more IO slots for fun things like Infiniband and 10G. So far, so good, right? These Thebe2 cards work with any disk tray. You can even mix older DS2 trays with the newer DE2 trays in the same system, as long as you have Thebe2 cards. Ah, there's your problem. You don't have Thebe2 cards in your old 7420, do you? Well, I told you the bad news wasn't that bad, right? We can take out your Thebe cards and replace them with Thebe2. 
You can then plug your older DS2 trays right back in, and also now get newer DE2 trays going forward. However, it's important that the trays are on different SAS channels. You can mix them in the same system, but not on the same channel. Ask your local SC if you need help with the new cable layout. By the way, the new ZS3-2 and ZS3-4 systems also include new IO cards called "Erie" cards. These are for INTERNAL SAS to the OS drives and the Readzillas. So those are now SAS2 instead of SATA like the older models. Yes, the Erie card uses an IO slot, but that's OK, because the Thebe2 cards allow us to use fewer SAS HBAs to grow the system, right? That's it. Not too much bad news and really not that bad. AK8 does not support the 7x10 series, and you may need new Thebe2 cards in your older systems if you want to add on newer DE2 trays. I think we can all agree that there are worse things out there. Like our Congress.   Next up.... More good news and cool AK8 tricks. Such as virtual NICS. 

    Read the article

  • Exception Errno::EPIPE in Passenger RequestHandler (Broken pipe)

    - by Millisami
Hi, Upgraded to Rails 2.3.2 and Passenger 2.2.4 on an Ubuntu hardy slice at slicehost with Apache2. I'm getting the same error discussed above in my Apache error.log under /var/logs/apache2/:

[ pid=4249 file=ext/apache2/Hooks.cpp:638 time=2009-07-04 11:47:32.752 ]: No data received from the backend application (process 4383) within 45000 msec. Either the backend application is frozen, or your TimeOut value of 45 seconds is too low. Please check whether your application is frozen, or increase the value of the TimeOut configuration directive.

*** Exception Errno::EPIPE in Passenger RequestHandler (Broken pipe) (process 4391):
from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/phusion_passenger/rack/request_handler.rb:93:in `write'
from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/phusion_passenger/rack/request_handler.rb:93:in `process_request'
from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/phusion_passenger/abstract_request_handler.rb:206:in `main_loop'
from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/phusion_passenger/railz/application_spawner.rb:376:in `start_request_handler'
from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/phusion_passenger/railz/application_spawner.rb:334:in `handle_spawn_application'
from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/phusion_passenger/utils.rb:182:in `safe_fork'
from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/phusion_passenger/railz/application_spawner.rb:332:in `handle_spawn_application'
from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/phusion_passenger/abstract_server.rb:351:in `__send__'
from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/phusion_passenger/abstract_server.rb:351:in `main_loop'
from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/phusion_passenger/abstract_server.rb:195:in `start_synchronously'
from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/phusion_passenger/abstract_server.rb:162:in `start'
from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/phusion_passenger/railz/application_spawner.rb:213:in `start'
from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/phusion_passenger/spawn_manager.rb:261:in `spawn_rails_application'
from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/phusion_passenger/abstract_server_collection.rb:126:in `lookup_or_add'
from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/phusion_passenger/spawn_manager.rb:255:in `spawn_rails_application'
from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/phusion_passenger/abstract_server_collection.rb:80:in `synchronize'
from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/phusion_passenger/abstract_server_collection.rb:79:in `synchronize'
from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/phusion_passenger/spawn_manager.rb:254:in `spawn_rails_application'
from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/phusion_passenger/spawn_manager.rb:153:in `spawn_application'
from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/phusion_passenger/spawn_manager.rb:286:in `handle_spawn_application'
from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/phusion_passenger/abstract_server.rb:351:in `__send__'
from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/phusion_passenger/abstract_server.rb:351:in `main_loop'
from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/phusion_passenger/abstract_server.rb:195:in `start_synchronously'
from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/bin/passenger-spawn-server:61

*** Exception Errno::EPIPE in Passenger RequestHandler (Broken pipe) (process 4383):

and these too.
[ pid=4362 file=ext/apache2/Hooks.cpp:638 time=2009-07-04 11:55:19.251 ]: No data received from the backend application (process 4383) within 45000 msec. Either the backend application is frozen, or your TimeOut value of 45 seconds is too low. Please check whether your application is frozen, or increase the value of the TimeOut configuration directive.

[ pid=4298 file=ext/apache2/Hooks.cpp:638 time=2009-07-04 11:55:19.255 ]: No data received from the backend application (process 4252) within 45000 msec. Either the backend application is frozen, or your TimeOut value of 45 seconds is too low. Please check whether your application is frozen, or increase the value of the TimeOut configuration directive.

[Sat Jul 04 11:55:19 2009] [error] [client 86.96.226.13] Premature end of script headers: 41, referer: http://domain.com/

[ pid=4373 file=ext/apache2/Hooks.cpp:638 time=2009-07-04 11:55:19.559 ]:

It's getting me mad; in the browser the page sometimes shows, and on a refresh an Application Error 500 shows up on a frequent basis. Any directions??
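For anyone hitting the same wall: the 45-second cutoff in those log entries is Apache's core Timeout directive rather than a Passenger-specific setting. Assuming the application is merely slow rather than truly frozen, one first step is raising it in the vhost. A minimal sketch (ServerName and paths are placeholders, not taken from the question):

<VirtualHost *:80>
    ServerName domain.com
    DocumentRoot /home/deploy/myapp/current/public
    # Apache core directive; the errors above report a 45s cutoff
    Timeout 120
    # Passenger 2.x setting: keep spawned Rails processes alive longer,
    # so cold spawns don't eat into the request window
    PassengerPoolIdleTime 600
</VirtualHost>

If the app really is frozen, a larger timeout only postpones the EPIPE; tailing log/production.log for a request that starts but never completes would be the next diagnostic step.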

    Read the article

  • Netbeans platform projects - problems with wrapped jar files that have dependencies

    - by I82Much
For starters, this question is not so much about programming in the NetBeans IDE as developing a NetBeans project (e.g. using the NetBeans Platform framework). I am attempting to use the BeanUtils library to introspect my domain models and provide the properties to display in a property sheet. Sample code:

public class MyNode extends AbstractNode implements PropertyChangeListener {

    private static final PropertyUtilsBean bean = new PropertyUtilsBean();

    // snip

    protected Sheet createSheet() {
        Sheet sheet = Sheet.createDefault();
        Sheet.Set set = Sheet.createPropertiesSet();
        APIObject obj = getLookup().lookup(APIObject.class);
        PropertyDescriptor[] descriptors = bean.getPropertyDescriptors(obj);
        for (PropertyDescriptor d : descriptors) {
            Method readMethod = d.getReadMethod();
            Method writeMethod = d.getWriteMethod();
            // the property's value type, not the descriptor's own class
            Class valueType = d.getPropertyType();
            Property p = new PropertySupport.Reflection(obj, valueType, readMethod, writeMethod);
            set.put(p);
        }
        sheet.put(set);
        return sheet;
    }

I have created a wrapper module around commons-beanutils-1.8.3.jar, and added a dependency on the module in my module containing the above code. Everything compiles fine. When I attempt to run the program and open the property sheet view (i.e. the above code actually gets run), I get the following error:

java.lang.ClassNotFoundException: org.apache.commons.logging.LogFactory
    at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:319)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:330)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:254)
    at org.netbeans.ProxyClassLoader.loadClass(ProxyClassLoader.java:259)
Caused: java.lang.ClassNotFoundException: org.apache.commons.logging.LogFactory starting from ModuleCL@64e48e45[org.apache.commons.beanutils] with possible defining loaders [ModuleCL@75da931b[org.netbeans.libs.commons_logging]] and declared parents []
    at org.netbeans.ProxyClassLoader.loadClass(ProxyClassLoader.java:261)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:254)
    at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:399)
Caused: java.lang.NoClassDefFoundError: org/apache/commons/logging/LogFactory
    at org.apache.commons.beanutils.PropertyUtilsBean.<init>(PropertyUtilsBean.java:132)
    at org.myorg.myeditor.MyNode.<clinit>(MyNode.java:35)
    at org.myorg.myeditor.MyEditor.<init>(MyEditor.java:33)
    at org.myorg.myeditor.OpenEditorAction.actionPerformed(OpenEditorAction.java:13)
    at org.openide.awt.AlwaysEnabledAction$1.run(AlwaysEnabledAction.java:139)
    at org.netbeans.modules.openide.util.ActionsBridge.implPerformAction(ActionsBridge.java:83)
    at org.netbeans.modules.openide.util.ActionsBridge.doPerformAction(ActionsBridge.java:67)
    at org.openide.awt.AlwaysEnabledAction.actionPerformed(AlwaysEnabledAction.java:142)
    at javax.swing.AbstractButton.fireActionPerformed(AbstractButton.java:2028)
    at javax.swing.AbstractButton$Handler.actionPerformed(AbstractButton.java:2351)
    at javax.swing.DefaultButtonModel.fireActionPerformed(DefaultButtonModel.java:387)
    at javax.swing.DefaultButtonModel.setPressed(DefaultButtonModel.java:242)
    at javax.swing.AbstractButton.doClick(AbstractButton.java:389)
    at com.apple.laf.ScreenMenuItem.actionPerformed(ScreenMenuItem.java:95)
    at java.awt.MenuItem.processActionEvent(MenuItem.java:627)
    at java.awt.MenuItem.processEvent(MenuItem.java:586)
    at java.awt.MenuComponent.dispatchEventImpl(MenuComponent.java:317)
    at java.awt.MenuComponent.dispatchEvent(MenuComponent.java:305)
[catch] at java.awt.EventQueue.dispatchEvent(EventQueue.java:638)
    at org.netbeans.core.TimableEventQueue.dispatchEvent(TimableEventQueue.java:125)
    at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:296)
    at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:211)
    at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:201)
    at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:196)
    at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:188)
    at java.awt.EventDispatchThread.run(EventDispatchThread.java:122)

I understand that beanutils is using the commons-logging component. I have tried adding the commons-logging component in two different ways (creating a wrapper library around the commons-logging library, and putting a dependency on the Commons Logging Integration library). Neither solves the problem. I noticed that the same problem occurs with other wrapped libraries; if they themselves have external dependencies, the ClassNotFoundExceptions propagate like mad, even if I've wrapped the jars of the libraries they require and added them as dependencies to the original wrapped library module. Pictorially: [diagram omitted]. I'm at my wits' end here. I noticed similar problems while googling:

Is there a known bug on NB Module dependency
Same issue I'm facing but when wrapping a different jar
NetBeans stance on this

None of the 3 apply to me; none conclusively help me. Thank you, Nick
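Judging from the "possible defining loaders [ModuleCL@...[org.netbeans.libs.commons_logging]]" hint in the trace, the platform can see the commons-logging module, but the beanutils wrapper never declares it as a parent, so the wrapper's classloader refuses to delegate to it. A sketch of the dependency entry the wrapper module's nbproject/project.xml would typically need; the specification version here is an assumption, so match whatever your platform ships:

<!-- inside <module-dependencies> of the beanutils wrapper's project.xml -->
<dependency>
    <code-name-base>org.netbeans.libs.commons_logging</code-name-base>
    <build-prerequisite/>
    <compile-dependency/>
    <run-dependency>
        <specification-version>1.9</specification-version>
    </run-dependency>
</dependency>

The same entry can be added through the IDE via Project Properties > Libraries > Module Dependencies on the wrapper module, which avoids hand-editing the XML.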

    Read the article

  • What's the most unsound program you've had to maintain?

    - by Robert Rossney
I periodically am called upon to do maintenance work on a system that was built by a real rocket surgeon. There's so much wrong with it that it's hard to know where to start. No, wait, I'll start at the beginning: in the early days of the project, the designer was told that the system would need to scale, and he'd read that a source of scalability problems was traffic between the application and database servers, so he made sure to minimize this traffic. How? By putting all of the application logic in SQL Server stored procedures. Seriously. The great bulk of the application functions by the HTML front end formulating XML messages. When the middle tier receives an XML message, it uses the document element's tag name as the name of the stored procedure it should call, and calls the SP, passing it the entire XML message as a parameter. It takes the XML message that the SP returns and returns it directly back to the front end. There is no other logic in the application tier. (There was some code in the middle tier to validate the incoming XML messages against a library of schemas. But I removed it, after ascertaining that 1) only a small handful of messages had corresponding schemas, 2) the messages didn't actually conform to these schemas, and 3) after validating the messages, if any errors were encountered, the method discarded them. "This fuse box is a real time-saver - it comes from the factory with pennies pre-installed!") I've seen software that does the wrong thing before. Lots of it. I've written quite a bit. But I've never seen anything like the steely-eyed determination to do the wrong thing, at every possible turn, that's embodied in the design and programming of this system. Well, at least he went with what he knew, right? Um. Apparently, what he knew was Access. And he didn't really understand Access. Or databases. Here's a common pattern in this code:

SELECT @TestCodeID FROM TestCode WHERE TestCode = @TestCode
SELECT @CountryID FROM Country WHERE CountryAbbr = @CountryAbbr

SELECT Invoice.*, TestCode.*, Country.*
FROM Invoice
JOIN TestCode ON Invoice.TestCodeID = TestCode.ID
JOIN Country ON Invoice.CountryID = Country.ID
WHERE Invoice.TestCodeID = @TestCodeID
AND Invoice.CountryID = @CountryID

Okay, fine. You don't trust the query optimizer either. But how about this? (Originally, I was going to post this in What's the best comment in source code you have ever encountered? but I realized that there was so much more to write about than just this one comment, and things just got out of hand.) At the end of many of the utility stored procedures, you'll see code that looks like the following:

-- Fix NULLs
SET @TargetValue = ISNULL(@TargetValue, -9999)

Yes, that code is doing exactly what you can't allow yourself to believe it's doing lest you be driven mad. If the variable contains a NULL, he's alerting the caller by changing its value to -9999. Here's how this number is commonly used:

-- Get target value
EXEC ap_GetTargetValue @Param1, @Param2, OUTPUT @TargetValue
-- Check target value for NULL value
IF @TargetValue = -9999 ...

Really. For another dimension of this system, see the article on thedailywtf.com entitled I Think I'll Call Them "Transactions". I'm not making any of this up. I swear. I'm often reminded, when I work on this system, of Wolfgang Pauli's famous response to a student: "That isn't right. It isn't even wrong." This can't really be the very worst program ever. It's definitely the worst one I've worked on.
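For contrast, here is a sketch of the conventional T-SQL idiom the sentinel is standing in for: let the NULL stay a NULL and test for it explicitly. The procedure and parameter names are borrowed from the snippet above purely for illustration.

DECLARE @TargetValue int;

-- T-SQL places the OUTPUT keyword after the argument, not before it
EXEC ap_GetTargetValue @Param1, @Param2, @TargetValue OUTPUT;

-- Test for "no value" directly; no magic number required
IF @TargetValue IS NULL
BEGIN
    RAISERROR('No target value found.', 16, 1);
END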

    Read the article

  • CakeDC Users Plugin - I Can't Send Emails

    - by JimBadger
I apologise for the rambling nature of this question; please bear with me and I'll provide all the extra info needed for you to stop me going mad from failing at something that looks inherently very straightforward... I've just installed CakePHP 2.2, and the first thing I've done is add the CakeDC Users plugin. It's all working, apart from sending an email verification when a user registers. I've tried so many combinations of different things in email.php that I have now utterly got my knickers in a twist. Whatever I do, when the verification email should be sent, all I get is:

No connection could be made because the target machine actively refused it.

My email.php currently looks like this:

class EmailConfig {

    public $default = array(
        'transport' => 'Smtp',
        'from' => '[email protected]',
        //'charset' => 'utf-8',
        //'headerCharset' => 'utf-8',
    );

    public $smtp = array(
        'transport' => 'Smtp',
        'from' => array('Blah <[email protected]>' => 'Chimp'),
        'host' => 'ssl://smtp.gmail.com',
        'port' => 465,
        'timeout' => 30,
        'username' => '[email protected]',
        'password' => 'secret',
        'client' => null,
        'log' => false,
        //'charset' => 'utf-8',
        //'headerCharset' => 'utf-8',
    );

    public $fast = array(
        'from' => '[email protected]',
        'sender' => null,
        'to' => null,
        'cc' => null,
        'bcc' => null,
        'replyTo' => null,
        'readReceipt' => null,
        'returnPath' => null,
        'messageId' => true,
        'subject' => null,
        'message' => null,
        'headers' => null,
        'viewRender' => null,
        'template' => false,
        'layout' => false,
        'viewVars' => null,
        'attachments' => null,
        'emailFormat' => null,
        'transport' => 'Smtp',
        'host' => 'blah.net',
        'port' => 25,
        'timeout' => 30,
        'username' => 'user',
        'password' => 'secret',
        'client' => null,
        'log' => true,
        //'charset' => 'utf-8',
        //'headerCharset' => 'utf-8',
    );
}

How do I get the CakeDC Users plugin to just send a non-SMTP email? Or do I have to use, for example, my Gmail details? But if I do have to go down the SMTP route, what is wrong with the above? Other info: I'm using the latest version of XAMPP and my PHP install is ssl enabled.
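On the non-SMTP part of the question: CakePHP 2.x ships a 'Mail' transport that hands delivery to PHP's mail() function, so a minimal email.php can skip SMTP entirely. A sketch (the from address is a placeholder):

class EmailConfig {
    public $default = array(
        'transport' => 'Mail',        // uses PHP's mail(), no SMTP socket involved
        'from' => 'app@example.com',  // placeholder address
    );
}

The caveat is that mail() needs a working sendmail or the SMTP settings in php.ini, which a default XAMPP install on Windows does not have, so the Gmail SMTP route is usually the practical one there. "No connection could be made because the target machine actively refused it" points at the SMTP host/port being unreachable; with ssl://smtp.gmail.com, two usual suspects are an outbound firewall blocking port 465 and the OpenSSL extension (extension=php_openssl.dll) being disabled in php.ini, which ssl:// transports require.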

    Read the article

  • Quotas - Using quotas on ZFSSA shares and projects and users

    - by Steve Tunstall
So you don't want your users to fill up your entire storage pool with their MP3 files, right? Good idea to make some quotas. There are some good tips and tricks here, including a helpful workflow (a script) that will allow you to set a default quota on all of the users of a share at once. Let's start with some basics. I made a project called "small" and inside it I made a share called "Share1". You can set quotas on the project level, which will affect all of the shares in it, or you can do it on the share level like I am here. Go to the share's General property page. First, I'm using a Windows client, so I need to make sure I have my SMB mountpoint. Do you know this trick yet? Go to the Protocol page of the share. See the SMB section? It needs a resource name to make the UNC path for the SMB (Windows) users. You do NOT have to type this name in for every share you make! Do this at the Project level. Before you make any shares, go to the Protocol properties of the Project, and set the SMB Resource name to "On". This special code will automatically make the SMB resource name of every share in the project the same as the share name. Note the UNC path name I got below. Since I did this at the Project level, I didn't have to lift a finger for it to work on every share I make in this project. Simple. So I have now mapped my Windows "Z:" drive to this Share1. I logged in as the user "Joe". Note that my computer shows my Z: drive as 34GB, which is the entire size of my Pool that this share is in. Right now, Joe could fill this drive up and it would fill up my pool.  Now, go back to the General properties of Share1. In the "Space Usage" area, over on the right, click on the "Show All" text under the Users & Groups section. Sure enough, Joe and some other users are in here and have some data. Note this is also a handy window to use just to see how much space your users are using in any given share.  Ok, Joe owes us money from lunch last week, so we want to give him a quota of 100MB. Type his name in the Users box. Notice how it now shows you how much data he's currently using. Go ahead and give him a 100M quota and hit the Apply button. If I go back to "Show All", I can see that Joe now has a quota, and no one else does. Sure enough, as soon as I refresh my screen back on Joe's client, he sees that his Z: drive is now only 100MB, and he's more than halfway full.  That was easy enough, but what if you wanted to make the whole share have a quota, so that the share itself, no matter who uses it, can only grow to a certain size? That's even easier. Just use the Quota box on the left hand side. Here, I use a Quota on the share of 300MB.  So now I log off as Joe, and log in as Steve. Even though Steve does NOT have a quota, it is showing my Z: drive as 300MB. This would affect anyone, INCLUDING the ROOT user, because you specified the Quota to be on the SHARE, not on a person.  Note that back in the Share, if you click the "Show All" text, the window does NOT show Steve, or anyone else, to have a quota of 300MB. Yet we do, because it's on the share itself, not on any user, so this panel does not see that. Ok, here is where it gets FUN.... Let's say you do NOT want a quota on the SHARE, because you want SOME people, like Root and yourself, to have FULL access to it and you want the ability to fill the whole thing up if you darn well feel like it. HOWEVER, you want to give the other users a quota. 
HOWEVER, you have, say, 200 users, and you do NOT feel like typing in each of their names and giving them each a quota, and they are not all members of an AD global group you could use or anything like that.  Hmmmmmm.... No worries, mate. We have a handy-dandy script that can do this for us. Now, this script was written a few years back by Tim Graves, one of our ZFSSA engineers out of the UK. This is not my script. It is NOT supported by Oracle support in any way. It does work fine with the 2011.1.4 code as best as I can tell, but Oracle, and I, are NOT responsible for ANYTHING that you do with this script. Furthermore, I will NOT give you this script, so do not ask me for it. You need to get this from your local Oracle storage SC. I will give it to them. I want this only going to my fellow SCs, who can then work with you to have it and show you how it works.  Here's what it does... Once you add this workflow to the Maintenance-->Workflows section, you click it once to run it. Nothing seems to happen at this point, but something did.   Go back to any share or project. You will see that you now have four new, custom properties on the bottom.  Do NOT touch the bottom two properties, EVER. Only touch the top two. Here, I'm going to give my users a default quota of about 40MB each. The beauty of this script is that it will only affect users that do NOT already have any kind of personal quota. It will only change people who have no quota at all. It does not affect the Root user.  After I hit Apply on the Share screen, nothing will happen until I go back and run the script again. The first time you run it, it creates the custom properties. The second and all subsequent times you run it, it checks the shares for any users, and applies your quota number to each one of them, UNLESS they already have one set. Notice in the readout below how it did NOT apply to my Joe user, since Joe had a quota set.  Sure enough, when I go back to the "Show All" in the share properties, all of the users who did not have a quota now have one for 39.1MB. Hmmm... I did my math wrong, didn't I?    That's OK, I'll just change the number of the Custom Default quota again. Here, I am adding a zero on the end.  After I click Apply, and then run the script again, all of my users, except Joe, now have a quota of 391MB.  You can customize a person at any time. Here, I took the Steve user, and specifically gave him a Quota of zero. Now when I run the script again, he is different from the rest, so he is no longer affected by the script. Under Show All, I see that Joe is at 100, and Steve has no Quota at all. I can do this all day long. Yes, you will have to re-run the script every time new users get added. The script only applies the default quota to users that are present at the time the script is run. However, it would be a simple thing to schedule the script to run each night, or to make an alert to run the script when certain events occur.  For you power users, if you ever want to delete these custom properties and remove the script completely, you will find these properties under the "Schema" section under the Shares section. You can remove them here. There's no need to, however; they don't hurt a thing if you just don't use them.  I hope these tips have helped you out there. Quotas can be fun. 

    Read the article

< Previous Page | 8 9 10 11 12 13  | Next Page >