Search Results



  • MySQL replication Slave_IO_Running: No

    - by Christy
    Hi all, I have two servers between which I am trying to replicate a single database. I followed a setup guide on SourceForge and have tried various other settings since then, but no matter what I do, when I start the slave, the 'Slave_IO_Running' setting is always No. I have no idea why or what to look at; any suggestions are appreciated. The slave setup was: CHANGE MASTER TO MASTER_HOST='myserver.mydomain.net', MASTER_USER='slave_user', MASTER_PASSWORD='mypassword', MASTER_LOG_FILE='mysql-bin.000011', MASTER_LOG_POS=1368363 (the log file and position are the latest from today; I deleted and recreated the database on the slave from a new dump and tried to redo the setup). I have slave_user set up for %, localhost, and the specific IP of the slave machine, but nothing seems to be working. Thanks in advance for any advice or suggestions.
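    For clarity, here is the intended statement in one piece, together with the diagnostic query whose Last_IO_Errno and Last_IO_Error fields usually say why Slave_IO_Running stays at No. The host, credentials, log file, and position are the placeholders from the question, not real values:

        CHANGE MASTER TO
            MASTER_HOST='myserver.mydomain.net',
            MASTER_USER='slave_user',
            MASTER_PASSWORD='mypassword',
            MASTER_LOG_FILE='mysql-bin.000011',
            MASTER_LOG_POS=1368363;

        START SLAVE;

        -- Last_IO_Errno / Last_IO_Error typically explain why the IO thread is not running
        SHOW SLAVE STATUS\G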


  • Server 2003 on domain won't let domain user have local profile

    - by RobW
    I have a few servers that are exhibiting this behavior: you log in and always get put into a temporary profile. The server is licensed for TS. The user I am testing with has local admin rights, so it doesn't seem to be a permission issue on the server. I first get a message that the user's roaming profile cannot be found, even though we don't use roaming profiles. I then get another message immediately after, saying a local profile could not be loaded, so it will only use a temp profile. Any help would be greatly appreciated.


  • I want to make video games, but I hate coding

    - by hoper
    I know this sounds really crazy, but I just want to ask. I am currently studying C++ at school (my major is computer programming). Honestly, my grades are not great, and the assignments are really hard. Sometimes I feel sad that my future job will mean spending 8-10 stressful hours a day coding. But I still want to make video games; maybe that is the only reason I am taking all of these stressful courses. I am always writing down plots, stories, characters, and fictional game worlds. At one point I thought I should study something more artistic, such as a game design program, rather than computer technology such as C++, C#, etc. However, most famous game designers and directors, such as Kojima and Shigeru Miyamoto, used to be good programmers, and companies actually promote programmers to director roles because they understand how a game is made. I have tried to find colleges or universities that teach game design, but one article ranking the top 10 game design schools in North America seems untrustworthy because the survey company scored schools based only on student interviews. (I once considered attending the Art Institute of Vancouver, ranked 7th in that article, but a programmer who used to be an instructor there told me the truth: the employment rate of its graduates is low.) Do you have any advice for me?


  • SQLAuthority News – Fast Track Data Warehouse 3.0 Reference Guide

    - by pinaldave
    http://msdn.microsoft.com/en-us/library/gg605238.aspx I am very excited that the Fast Track Data Warehouse 3.0 reference guide has been announced. As a consultant I have always enjoyed working on Fast Track Data Warehouse projects, as they truly express the potential of the SQL Server engine. The SQL Server Fast Track Data Warehouse initiative provides a basic methodology and concrete examples for the deployment of a balanced hardware and database configuration for a data warehousing workload. Balance is measured across the key components of a SQL Server installation: storage, server, application settings, and the configuration settings for each component are evaluated. Here are a few details of the enhancements in the Fast Track Data Warehouse 3.0 reference architecture:
    - FTDW 3.0 Architecture: basic component architecture for FT 3.0 based systems.
    - New Memory Guidelines: minimum and maximum tested memory configurations by server socket count.
    - Additional Startup Options: notes for T-834 and the setting for Lock Pages in Memory.
    - Storage Configuration: RAID1+0 is now standard (RAID1 was used in FT 2.0).
    - Evaluating Fragmentation: a query is provided for evaluating logical fragmentation.
    - Loading Data: additional options for CI table loads.
    - MCR: additional detail and explanation of the FTDW MCR rating.
    Read the white paper on Fast Track Data Warehousing. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Business Intelligence, Data Warehousing, PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, SQLAuthority News, T SQL, Technology


  • Favorite Programmer Quotes…

    - by SGWellens
      "A computer once beat me at chess, but it was no match for me at kick boxing." — Emo Philips   "There are only 10 types of people in the world, those who understand binary and those who don't. " – Unknown.   "Premature optimization is the root of all evil." — Donald Knuth   "I should have become a doctor; then I could bury my mistakes." — Unknown   "Code softly and carry a large backup thumb drive." — Me   "Always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live." — Martin Golding   "DDE…the protocol from hell"— Charles Petzold   "Just because a thing is new don't mean that it's better" — Will Rogers   "The mark of a mature programmer is willingness to throw out code you spent time on when you realize it's pointless." — Bram Cohen   "A good programmer is someone who looks both ways before crossing a one-way street." — Doug Linder   "The early bird may get the worm but it's the second mouse that gets the cheese." — Unknown   I hope someone finds this amusing. Steve Wellens CodeProject


  • Pulseaudio is no longer working in Debian Squeeze: 'Failed to open module "module-combine-sink": file not found'

    - by mattalexx
    I'm having a problem with pulseaudio. My machine crashed, and when I rebooted and ran pavucontrol, I got a "Connection Failed: Connection refused" dialog. When I run pulseaudio --log-level=info --log-target=stderr from the command line, I get the following output (repeated, near-identical lines trimmed):
    [...]
    I: alsa-util.c: Error opening PCM device front:1: No such file or directory
    I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC1D0c' failed (-2)
    I: alsa-util.c: Error opening PCM device hw:1: No such file or directory
    [... many similar "Error opening PCM device" and "Failed to set hardware parameters" lines for iec958:1, a52:1 and hdmi:1 ...]
    I: card.c: Created 0 "alsa_card.usb-FiiO_DigiHug_USB_Audio-01-Audio"
    I: alsa-sink.c: Successfully opened device front:1.
    I: alsa-sink.c: Selected mapping 'Analog Stereo' (analog-stereo).
    W: alsa-mixer.c: Your kernel driver is broken: it reports a volume range from 0.00 dB to 0.00 dB which makes no sense.
    I: sink.c: Created sink 0 "alsa_output.usb-FiiO_DigiHug_USB_Audio-01-Audio.analog-stereo" with sample spec s16le 2ch 44100Hz and channel map front-left,front-right
    I: source.c: Created source 0 "alsa_output.usb-FiiO_DigiHug_USB_Audio-01-Audio.analog-stereo.monitor" with sample spec s16le 2ch 44100Hz and channel map front-left,front-right
    I: alsa-sink.c: Starting playback.
    I: module.c: Loaded "module-alsa-card" (index: #4; argument: "device_id="1" name="usb-FiiO_DigiHug_USB_Audio-01-Audio" card_name="alsa_card.usb-FiiO_DigiHug_USB_Audio-01-Audio" tsched=yes ignore_dB=no card_properties="module-udev-detect.discovered=1"").
    I: module-udev-detect.c: Card /devices/pci0000:00/0000:00:1a.0/usb1/1-1/1-1.2/1-1.2:1.1/sound/card1 (alsa_card.usb-FiiO_DigiHug_USB_Audio-01-Audio) module loaded.
    [... many similar "Error opening PCM device" lines for hw:2, front:2, surround*:2, iec958:2, a52:2 and hdmi:2 ...]
    I: card.c: Created 1 "alsa_card.usb-046d_08d7-01-U0x46d0x8d7"
    I: module.c: Loaded "module-alsa-card" (index: #5; argument: "device_id="2" name="usb-046d_08d7-01-U0x46d0x8d7" card_name="alsa_card.usb-046d_08d7-01-U0x46d0x8d7" tsched=yes ignore_dB=no card_properties="module-udev-detect.discovered=1"").
    I: module-udev-detect.c: Card /devices/pci0000:00/0000:00:1a.0/usb1/1-1/1-1.6/1-1.6:1.1/sound/card2 (alsa_card.usb-046d_08d7-01-U0x46d0x8d7) module loaded.
    I: module-udev-detect.c: Found 3 cards.
    I: module.c: Loaded "module-udev-detect" (index: #6; argument: "").
    I: module.c: Loaded "module-esound-protocol-unix" (index: #7; argument: "").
    I: module.c: Loaded "module-native-protocol-unix" (index: #8; argument: "").
    I: module-default-device-restore.c: Saved default sink 'alsa_output.pci-0000_00_1b.0.analog-surround-41' not existant, not restoring default sink setting.
    I: module-default-device-restore.c: Saved default source 'alsa_output.pci-0000_00_1b.0.analog-surround-41.monitor' not existant, not restoring default source setting.
    I: module.c: Loaded "module-default-device-restore" (index: #9; argument: "").
    I: module.c: Loaded "module-rescue-streams" (index: #10; argument: "").
    I: module.c: Loaded "module-always-sink" (index: #11; argument: "").
    I: module.c: Loaded "module-intended-roles" (index: #12; argument: "").
    I: module.c: Loaded "module-suspend-on-idle" (index: #13; argument: "").
    I: module.c: Loaded "module-console-kit" (index: #14; argument: "").
    I: module.c: Loaded "module-position-event-sounds" (index: #15; argument: "").
    I: module.c: Loaded "module-cork-music-on-phone" (index: #16; argument: "").
    E: module.c: Failed to open module "module-combine-sink": file not found
    E: main.c: Module load failed.
    E: main.c: Failed to initialize daemon.
    [... all previously loaded modules are then unloaded ...]
    I: main.c: Daemon terminated.
    I believe the relevant part is this:
    E: module.c: Failed to open module "module-combine-sink": file not found
    E: main.c: Module load failed.
    E: main.c: Failed to initialize daemon.
    I tried uninstalling and reinstalling pulseaudio, and I tried to find a way to install module-combine-sink. Nothing worked. I'm on a Debian Squeeze 32-bit machine. What can I do to fix this?


  • Windows Azure AppFabric: ServiceBus Queue WPF Sample

    - by xamlnotes
    The latest version of the AppFabric ServiceBus now has support for queues and topics. Today I will show you a bit about using queues and also talk about some best practices for using them. If you are just getting started, you can check out this site for more info on Windows Azure. One of the first things I thought of when Azure was announced was how we handle fault tolerance. Web sites hosted in Azure are not much of an issue, unless they are using SQL Azure, in which case you must account for potential fault or latency issues. Today I want to talk a bit about the ServiceBus and how to handle fault tolerance. And there is plumbing, like connecting to the ServiceBus, that you have to take care of. To demonstrate some of the things you can do, let me walk through this sample WPF app that I am posting for you to download.
    To start off, the application needs things like the service namespace, issuer details and so forth to make everything work. To facilitate this I created settings in the WPF app for all of these items. Then I mapped a static class to them and set the values when the program loads, like so:
        StaticElements.ServiceNamespace = Convert.ToString(Properties.Settings.Default["ServiceNamespace"]);
        StaticElements.IssuerName = Convert.ToString(Properties.Settings.Default["IssuerName"]);
        StaticElements.IssuerKey = Convert.ToString(Properties.Settings.Default["IssuerKey"]);
        StaticElements.QueueName = Convert.ToString(Properties.Settings.Default["QueueName"]);
    Now I can get to each of these elements, plus some other common values or instances, directly from the StaticElements class.
    Now, let's look at the application. When it starts, it shows a blue graphic representing the queue we are going to use. After items are added and the queue stats are updated, you can see how the queue has grown. To add an item to the queue, click the Add Order button, which displays an order dialog; after you fill in the form and press OK, the order is published to the ServiceBus queue and the form closes. The application also allows you to read the queued items by clicking the Process Orders button: the form then shows the queued items in a list, and the queue graphic disappears as it is now empty. In real practice we would normally use a Windows Service or some other automated process to subscribe to the queue and pull items from it.
    I created a class named ServiceBusQueueHelper that has the core queue features we need. There are three public methods:
    - GetOrCreateQueue – Gets an instance of the queue description if the queue exists; if not, it creates the queue and returns a description instance.
    - SendMessageToQueue – Takes an order instance and sends it to the queue. The call is wrapped in the ExecuteAction method from the Transient Fault Handling Framework, which handles all the retry logic for the send.
    - GetOrderFromQueue – Grabs an order from the queue, returns it as a typed order, and marks the message complete so the queue can remove it.
    Now let's turn to the WPF window code (MainWindow.xaml.cs). The constructor contains the four lines shown above to set up the static variables and to perform other initialization tasks.
    The next few lines set up what we need for the ServiceBus:
        TokenProvider credentials = TokenProvider.CreateSharedSecretTokenProvider(StaticElements.IssuerName, StaticElements.IssuerKey);
        Uri serviceUri = ServiceBusEnvironment.CreateServiceUri("sb", StaticElements.ServiceNamespace, string.Empty);
        StaticElements.CurrentNamespaceManager = new NamespaceManager(serviceUri, credentials);
        StaticElements.CurrentMessagingFactory = MessagingFactory.Create(serviceUri, credentials);
    The next two lines update the queue name label and set the timer to 20 seconds:
        QueueNameLabel.Content = StaticElements.QueueName;
        _timer.Interval = TimeSpan.FromSeconds(20);
    Next I call UpdateQueueStats to initialize the UI for the queue and wire the timer to refresh it:
        UpdateQueueStats();
        _timer.Tick += new EventHandler(delegate(object s, EventArgs a)
        {
            UpdateQueueStats();
        });
        _timer.Start();
    The UpdateQueueStats method is shown below. You can see that it uses the GetOrCreateQueue method mentioned earlier to grab the queue description, from which it can get the MessageCount property:
        private void UpdateQueueStats()
        {
            _queueDescription = _serviceBusQueueHelper.GetOrCreateQueue();
            QueueCountLabel.Content = "(" + _queueDescription.MessageCount + ")";
            long count = _queueDescription.MessageCount;
            long queueWidth = count * 20;
            QueueRectangle.Width = queueWidth;
            QueueTickCount += 1;
            TickCountlabel.Content = QueueTickCount.ToString();
        }
    The ReadQueueItemsButton_Click event handler calls the GetOrderFromQueue method and adds the order to the listbox. If you look at the SendQueueMessageController, you can see the SendMessage method that sends an order to the queue. It's pretty simple: it just creates a new CustomerOrderEntity instance, fills it, and then passes it to SendMessageToQueue. As you can see, all of our interaction with the queue is done through the helper class (ServiceBusQueueHelper).
    Now let's dig into the helper class. First, before you create anything like this, download the Transient Fault Handling Framework. Microsoft provides it for free, along with the C# source, and there's a great article that shows how to use this framework with ServiceBus. I included the entire ServiceBusQueueHelper class in Listing 1. Notice the using statements for TransientFaultHandling:
        using Microsoft.AzureCAT.Samples.TransientFaultHandling;
        using Microsoft.AzureCAT.Samples.TransientFaultHandling.ServiceBus;
    The SendMessageToQueue method in Listing 1 shows how to use the async send features of ServiceBus wrapped in the Transient Fault Handling Framework. It is not much different from plain old ServiceBus calls, but it makes it easy to get the fault tolerance almost for free. GetOrderFromQueue uses the standard synchronous methods to access the queue; the best practices article also walks through using the async approach for a receive operation. Notice that this method calls Receive to get the message and then calls GetBody to get a strongly typed CustomerOrderEntity instance to return.
    Listing 1

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using Microsoft.AzureCAT.Samples.TransientFaultHandling;
        using Microsoft.AzureCAT.Samples.TransientFaultHandling.ServiceBus;
        using Microsoft.ServiceBus;
        using Microsoft.ServiceBus.Messaging;
        using System.Xml.Serialization;
        using System.Diagnostics;

        namespace WPFServicebusPublishSubscribeSample
        {
            class ServiceBusQueueHelper
            {
                RetryPolicy currentPolicy = new RetryPolicy<ServiceBusTransientErrorDetectionStrategy>(RetryPolicy.DefaultClientRetryCount);
                QueueClient currentQueueClient;

                public QueueDescription GetOrCreateQueue()
                {
                    QueueDescription queue = null;
                    bool createNew = false;

                    try
                    {
                        // First, let's see if a queue with the specified name already exists.
                        queue = currentPolicy.ExecuteAction<QueueDescription>(() => { return StaticElements.CurrentNamespaceManager.GetQueue(StaticElements.QueueName); });
                        createNew = (queue == null);
                    }
                    catch (MessagingEntityNotFoundException)
                    {
                        // Looks like the queue does not exist. We should create a new one.
                        createNew = true;
                    }

                    // If a queue with the specified name doesn't exist, it will be auto-created.
                    if (createNew)
                    {
                        try
                        {
                            var newqueue = new QueueDescription(StaticElements.QueueName);
                            queue = currentPolicy.ExecuteAction<QueueDescription>(() => { return StaticElements.CurrentNamespaceManager.CreateQueue(newqueue); });
                        }
                        catch (MessagingEntityAlreadyExistsException)
                        {
                            // A queue under the same name was already created by someone else,
                            // perhaps by another instance. Let's just use it.
                            queue = currentPolicy.ExecuteAction<QueueDescription>(() => { return StaticElements.CurrentNamespaceManager.GetQueue(StaticElements.QueueName); });
                        }
                    }

                    currentQueueClient = StaticElements.CurrentMessagingFactory.CreateQueueClient(StaticElements.QueueName);
                    return queue;
                }

                public void SendMessageToQueue(CustomerOrderEntity Order)
                {
                    BrokeredMessage msg = null;
                    GetOrCreateQueue();

                    // Use a retry policy to execute the Send action in an asynchronous and reliable fashion.
                    currentPolicy.ExecuteAction
                    (
                        (cb) =>
                        {
                            // A new BrokeredMessage instance must be created each time we send it. Reusing the original
                            // BrokeredMessage instance may not work as the state of its BodyStream cannot be guaranteed
                            // to be readable from the beginning.
                            msg = new BrokeredMessage(Order);

                            // Send the event asynchronously.
                            currentQueueClient.BeginSend(msg, cb, null);
                        },
                        (ar) =>
                        {
                            try
                            {
                                // Complete the asynchronous operation.
                                // This may throw an exception that will be handled internally by the retry policy.
                                currentQueueClient.EndSend(ar);
                            }
                            finally
                            {
                                // Ensure that any resources allocated by a BrokeredMessage instance are released.
                                if (msg != null)
                                {
                                    msg.Dispose();
                                    msg = null;
                                }
                            }
                        },
                        (ex) =>
                        {
                            // Always dispose the BrokeredMessage instance even if the send
                            // operation has completed unsuccessfully.
                            if (msg != null)
                            {
                                msg.Dispose();
                                msg = null;
                            }

                            // Always log exceptions.
                            Trace.TraceError(ex.Message);
                        }
                    );
                }

                public CustomerOrderEntity GetOrderFromQueue()
                {
                    CustomerOrderEntity Order = new CustomerOrderEntity();
                    QueueClient myQueueClient = StaticElements.CurrentMessagingFactory.CreateQueueClient(StaticElements.QueueName, ReceiveMode.PeekLock);
                    BrokeredMessage message;
                    ServiceBusQueueHelper serviceBusQueueHelper = new ServiceBusQueueHelper();
                    QueueDescription queueDescription;
                    queueDescription = serviceBusQueueHelper.GetOrCreateQueue();

                    if (queueDescription.MessageCount > 0)
                    {
                        message = myQueueClient.Receive(TimeSpan.FromSeconds(90));
                        if (message != null)
                        {
                            try
                            {
                                Order = message.GetBody<CustomerOrderEntity>();
                                message.Complete();
                            }
                            catch (Exception ex)
                            {
                                throw ex;
                            }
                        }
                        else
                        {
                            throw new Exception("Did not receive the messages");
                        }
                    }
                    return Order;
                }
            }
        }

    I will post a link to the download demo in a separate post soon.
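    To tie the pieces together, here is a minimal usage sketch of the helper described above. The CustomerOrderEntity property names are hypothetical, since the entity's definition is not shown in this post, and the StaticElements setup from MainWindow is assumed to have already run:

        // Minimal sketch only: StaticElements (namespace manager, messaging factory,
        // queue name) is assumed to be initialized as shown earlier in the post.
        var helper = new ServiceBusQueueHelper();

        // Send one order to the queue. The property names here are hypothetical;
        // use whatever CustomerOrderEntity actually exposes in the sample.
        var order = new CustomerOrderEntity
        {
            CustomerName = "Contoso",
            ItemName = "Widget",
            Quantity = 3
        };
        helper.SendMessageToQueue(order);

        // Later, typically from a Windows Service or other background process,
        // pull the next order off the queue; the helper marks the message complete.
        CustomerOrderEntity received = helper.GetOrderFromQueue();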


  • Service Level Logging/Tracing

    - by Ahsan Alam
    We all love to develop services, right? First-timers want to learn technologies like WCF and Web Services. Some simply want to build services; others may find services to be the natural architectural decision for particular systems. Whatever the reason, services are commonly used in building a wide range of systems. Developers often encapsulate various functionality (small or big) within one or more services and expose them to multiple applications. Sometimes from day one (and definitely over time) these services may evolve into a set of black boxes. Services or not, black boxes or not, issues and exceptions are sometimes hard to avoid, especially in highly evolving and transactional systems. We can try to be methodical with our unit testing, QA, and overall process, but we may not be able to avoid every type of system issue. When issues arise in one or more highly transactional services, it becomes necessary to resolve them very quickly, and when systems handle thousands of transactions in a matter of hours, some issues may not surface immediately. That is when service level logging becomes very useful. Technologies such as WCF allow us to enable service level tracing with minimal effort, but that may not give us the complete picture. Developers may need to add tracing within critical areas of the code with varying degrees of verbosity. A programmer can always utilize a logging framework such as the 'Logging Application Block' to get the job done. It may sometimes seem like overkill, but I have noticed from my experience that service level logging helps programmers trace many issues very quickly.
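    As a concrete illustration of the kind of in-code tracing described above, here is a minimal sketch using the Enterprise Library Logging Application Block. The service class, operation, category name, and event IDs are made up for the example; the point is simply where the calls sit relative to the critical code path:

        using System;
        using System.Diagnostics;
        using Microsoft.Practices.EnterpriseLibrary.Logging;

        // Hypothetical service class, used only to show where the log calls would sit.
        public class OrderService
        {
            public void SubmitOrder(string orderId)
            {
                // Trace entry into the critical code path at low verbosity.
                Logger.Write("SubmitOrder called for order " + orderId, "ServiceTrace", 2, 1000, TraceEventType.Verbose);

                try
                {
                    // ... the actual work of the operation goes here ...

                    Logger.Write("SubmitOrder completed for order " + orderId, "ServiceTrace", 2, 1001, TraceEventType.Information);
                }
                catch (Exception ex)
                {
                    // Record the failure with full detail before rethrowing.
                    Logger.Write("SubmitOrder failed for order " + orderId + ": " + ex, "ServiceTrace", 1, 1002, TraceEventType.Error);
                    throw;
                }
            }
        }

    The actual destinations (flat file, event log, database) and the verbosity filters live in the application's configuration file, so the amount of detail captured can be dialed up or down without redeploying the service.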


  • eBooks on iPad vs. Kindle: More Debate than Smackdown

    - by andrewbrust
    When the iPad was presented at its San Francisco launch event on January 28th, Steve Jobs spent a significant amount of time explaining how well the device would serve as an eBook reader. He showed the iBooks reader application and iBookstore and laid down the gauntlet before Amazon and its beloved Kindle device. Almost immediately afterwards, criticism came rushing forth that the iPad could never beat the Kindle for book reading. The curious part of that criticism is that virtually no one offering it had actually used the iPad yet. A few weeks later, on April 3rd, the iPad was released for sale in the United States. I bought one on that day and in the few additional weeks that have elapsed, I've given quite a workout to most of its capabilities, including its eBook features. I've also spent some time with the Kindle, albeit a first-generation model, to see how it actually compares to the iPad. I had some expectations going in, but I came away with conclusions about each device that were more scenario-based than absolute. I present my findings to you here.
    Vital Statistics
    Let's start with an inventory of each device's underlying technology. The iPad has a color, backlit LCD screen and an on-screen keyboard. It has a battery which, on a full charge, lasts anywhere from 6-10 hours. The Kindle offers a monochrome, reflective E Ink display, a physical keyboard and a battery that on my first gen loaner unit can go up to a week between charges (Amazon claims the battery on the Kindle 2 can last up to 2 weeks on a single charge). The Kindle connects to Amazon's Kindle Store using a 3G modem (the technology and network vary depending on the model) that incurs no airtime service charges whatsoever. The iPad units that are on sale today work over WiFi only. 3G-equipped models will be on sale shortly and will command a $130 premium over their WiFi-only counterparts. 3G service on the iPad, in the U.S. from AT&T, will be fee-based, with a 250MB plan at $14.99 per month and an unlimited plan at $29.99. No contract is required for 3G service. All these tech specs aside, I think a more useful observation is that the iPad is a multi-purpose Internet-connected entertainment device, while the Kindle is a dedicated reading device. The question is whether those differences in design and intended use create a clear-cut winner for reading electronic publications. Let's take a look at each device, in isolation, now.
    Kindle
    To me, what's most innovative about the Kindle is its E Ink display. E Ink really looks like ink on a sheet of paper. It requires no backlight, it's fully visible in direct sunlight and it causes almost none of the eyestrain that LCD-based computer display technology (like that used on the iPad) does. It's really versatile in an all-around way. Forgive me if this sounds precious, but reading on it is really a joy. In fact, it's a genuinely relaxing experience. Through the Kindle Store, Amazon allows users to download books (including audio books), magazines, newspapers and blog feeds. Magazines and newspapers can be purchased either on a single-issue basis or as an annual subscription; books, of course, are purchased singly. Oddly, blogs are not free, but instead carry a monthly subscription fee, typically $1.99. To me this is ludicrous, but I suppose the free 3G service is partially to blame. Books and magazine issues download quickly. Magazine and blog subscriptions cause new issues or posts to be pushed to your device on an automated basis. Available blogs include 9000-odd feeds that Amazon offers on the Kindle Store; unless I missed something, arbitrary RSS feeds are not supported (though there are third party workarounds to this limitation). The shopping experience is integrated well, has a huge selection, and offers certain graphical perks. For example, magazine and newspaper logos are displayed in menus, and book cover thumbnails appear as well. A simple search mechanism is provided and text entry through the physical keyboard is relatively painless. It's very easy and straightforward to enter the store, find something you like and start reading it quickly. If you know what you're looking for, it's even faster. Given Kindle's high portability, very reliable battery, instant-on capability and highly integrated content acquisition, it makes reading on a whim, and in random spurts of downtime, very attractive.
    The Kindle's home screen lists all of your publications, and easily lets you select one, then start reading it. Once opened, publications display in crisp, attractive text that is adjustable in size. "Turning" pages is achieved through buttons dedicated to the task. Notes can be recorded, bookmarks can be saved and pages can be saved as clippings. I am not an avid book reader, and yet I found the Kindle made it really fun, convenient and soothing to read. There's something about the easy access to the material and the simplicity of the display that makes the Kindle seduce you into chilling out and reading page after page. On the other hand, the Kindle has an awkward navigation interface. While menus are displayed clearly on the screen, the method of selecting menu items is tricky: alongside the right-hand edge of the main display is a thin column that acts as a second display. It has a white background, and a scrollable silver cursor that is moved up or down through the use of the device's scrollwheel. Picking a menu item on the main display involves scrolling the silver cursor to a position parallel to that menu item and pushing the scrollwheel in. This navigation technique creates a disconnect, literally. You don't really click on a selection so much as you gesture toward it. I got used to this technique quickly, but I didn't love it. It definitely created a kind of anxiety in me, making me feel the need to speed through menus and get to my destination document quickly. Once there, I could calm down and relax.
    Books are great on the Kindle. Magazines and newspapers much less so. I found the rendering of photographs, and even illustrations, to be unacceptably crude. For this reason, I expect that reading textbooks on the Kindle may leave students wanting. I found that the original flow and layout of any publication was sacrificed on the Kindle. In effect, browsing a magazine or newspaper was almost impossible. Reading the text of individual articles was enjoyable, but having to read this way made the whole experience much more "a la carte" than cohesive and thematic between articles. I imagine that for academic journals this is ideal, but for consumer publications it imposes a stripped-down, low-fidelity experience that evokes a sense of deprivation. In general, the Kindle is great for reading text. For just about anything else, especially activity that involves exploratory browsing, meandering and short-attention-span reading, it presents a real barrier to entry and adoption. Avid book readers will enjoy the Kindle (if they're not already). It's a great device for losing oneself in a book over long sittings. Multitaskers who are more interested in periodicals, be they online or off, will like it much less, as they will find compromise, and even sacrifice, to be palpable.
    iPad
    The iPad is a very different device from the Kindle. While the Kindle is oriented to pages of text, the iPad orbits around applications and their interfaces. Be it the pinch and zoom experience in the browser, the rich media features that augment content on news and weather sites, or the ability to interact with social networking services like Twitter, the iPad is versatile. While it shares a slate-like form factor with the Kindle, it's effectively an elegant personal computer. One of its many features is the iBooks application and integration of the iBookstore. But it's a multi-purpose device. That turns out to be good and bad, depending on what you're reading. The iBookstore is great for browsing. Its colorful, animation-rich user interface makes it possible to shop for books, rather than merely search and acquire them. Unfortunately, its selection is rather sparse at the moment. If you're looking for a New York Times bestseller, or other popular titles, you should be OK. If you want to read something more specialized, it's much harder. Unlike the awkward navigation interface of the Kindle, the iPad offers a nearly flawless touch-screen interface that seduces the user into tinkering and kibitzing every bit as much as the Kindle lulls you into a deep, concentrated read. It's a dynamic and interactive device, whereas the Kindle is static and passive.
    The iBooks reader is slick and fun. Use the iPad in landscape mode and you can read the book in 2-up (left/right 2-page) display; use it in portrait mode and you can read one page at a time. Rather than clicking a hardware button to turn pages, you simply drag and wipe from right to left to flip the single or right-hand page. The page actually travels through an animated path as it would in a physical book. The intuitiveness of the interface is uncanny. The reader also accommodates saving of bookmarks, searching of the text, and the ability to highlight a word and look it up in a dictionary. Pages display brightly and clearly. They're easy to read. But the backlight and the glare made me less comfortable than I was with the Kindle. The knowledge that completely different applications (including the Web and email and Twitter) were just a few taps away made me antsy and very tempted to task-switch. The knowledge that battery life is an issue created subtle discomfort. If the Kindle makes you feel like you're in a library reading room, then the iPad makes you feel, at best, like you're under fluorescent lights at a Barnes and Noble or Borders store. If you're lucky, you'd be on a couch or at a reading table in the store, but you might also be standing up, in the aisles. Clearly, I didn't find this conducive to focused and sustained reading. But that may have more to do with my own tendency to read periodicals far more than books, and my own neurotic need to task-switch. And, truth be known, the book reading experience, when not explicitly compared to Kindle's, was still pleasant. It is also important to point out that Kindle Store-sourced books can be read on the iPad through a Kindle reader application, from Amazon, specific to the device. This offered a less rich experience than the iBooks reader, but it was completely adequate. Despite the Kindle brand of the reader, however, it offered little in terms of simulating the reading experience on its namesake device.
    When it comes to periodicals, the iPad wins hands down. Magazines, even if merely scanned images of their print editions, read on the iPad in a way that felt similar to reading hard copy. The full color display, touch navigation and even the ability to render advertisements in their full glory make the iPad a great way to read through any piece of work that is measured in pages, rather than chapters. There are many ways to get magazines and newspapers onto the iPad, including the Zinio reader, and publication-specific applications like the Wall Street Journal's and Popular Science's. The New York Times' free Editors' Choice application offers a Times Reader-like interface to a subset of the Gray Lady's daily content. The completely Web-based but iPad-optimized Times Skimmer site (at www.nytimes.com/timesskimmer) works well too. Even conventional Web sites themselves can be read much like magazines, given the iPad's ability to zoom in on the text and crop out advertisements on the margins. While the Kindle does have an experimental Web browser, it reminded me a lot of early mobile phone browsers, only in a larger size. For text-heavy sites with simple layout, it works fine. For just about anything else, it becomes more trouble than it's worth. And given the way magazine articles make me think of things I want to look up online, I think that's a real liability for the Kindle.
    Summing Up
    What I came to realize is that the Kindle isn't so much a computer or even an Internet device as it is a printer. While it doesn't use physical paper, it still renders its content a page at a time, just like a laser printer does, and its output appears strikingly similar. You can read the rendered text, but you can't interact with it in any way. That's why the navigation requires a separate cursor display area. And because of the page-oriented rendering behavior, turning pages causes a flash on the display and requires a sometimes long pause before the next page is rendered. The good side of this is that once the page is generated, no battery power is required to display it. That makes for great battery life, optimal viewing under most lighting conditions (as long as there is some light) and low-eyestrain text-centric display of content. The Kindle is highly portable, has an excellent selection in its store and is refreshingly distraction-free. All of this is ideal for reading books. And the iPad doesn't offer any of it. What the iPad does offer is versatility, variety, richness and luxury. It's flush with accoutrements even if it's low on focused, sustained text display. That makes it inferior to the Kindle for book reading. But that also makes it better than the Kindle for almost everything else. As such, and given that its book reading experience is still decent (even if not superior), I think the iPad will give Kindle a run for its money. True book lovers, and people on a budget, will want the Kindle. People with a robust amount of discretionary income may want both devices. Everyone else who is interested in a slate form factor e-reading device, especially if they also wish to have leisure-friendly Internet access, will likely choose the iPad exclusively. One thing is for sure: the iPad has reduced Kindle's market, and may have shifted its mass market potential to a mere niche play. If Amazon is smart, it will improve its iPad-based Kindle reader app significantly. It can then leverage the iPad channel as a significant market for the Kindle Store. After all, selling the eBooks themselves is what Amazon should care most about.

    Read the article

  • Why is IaaS important in Azure…

    - by Steve Loethen
    Three weeks ago, Microsoft released the next phase of Azure. I have had several clients waiting on this release, and the fact that they have been waiting means they are now more receptive to looking at the cloud. Customers expressed fear of the unknown, and a fear of losing control, even when that loss of control also means a huge degree of flexibility to innovate without worrying about the underlying infrastructure. I think IaaS will be the "gateway drug" that gets customers who have been hesitant to take another look at the cloud. The dialog can change from the cloud being a big scary unknown to the cloud being a resource for workloads. The conversations should always have been, and can now be even more strongly, geared toward the following points: 1) The cloud is not unicorns and glitter; the cloud is resources. Compute, storage, databases, service bus, cache... much like the resources we have on-premise. Not magic, just another resource with advantages and obstacles like any other. 2) The cloud should be part of the conversation for any new project. All of the same criteria should be applied, on-premise or off: cost, security, reliability, scalability, speed to deploy, cost of licenses, need to customize the image, complexity of the workload. We have been having these discussions for years when we talk about on-premise projects. We make decisions on operating systems, databases, ESBs, configuration and products based on a myriad of factors. We use the same factors, but now we have an additional set of resources to consider in our process. 3) The cloud is a great solution looking for some interesting problems. It is our job to recognize the right problems that fit the cloud, weigh the factors and decide what to do. IaaS makes this discussion easier and offers more choices, often choices that many enterprises will find a better fit than PaaS. Looking forward to helping clients realize the power of the cloud.

    Read the article

  • West Palm Beach Dev Group August 2012 Meeting Recap

    - by Sam Abraham
    As the saying goes, it’s better late than never. Such is the case with my overdue West Palm Beach Dev Group August 2012 meeting report. Our August meeting was full of both knowledge and adventure. It comes as no surprise that the knowledge was brought to us by our favorite DotNetNuke Technical Evangelist, Will Strohl. Will introduced and thoroughly presented the new social features in DNN 6.2. Unfortunately, our meeting date coincided with Hurricane Isaac having just passed us by. Aside from road closures and floods that kept public schools closed for two days, our meeting host, PC Professor, had to close the school the day of our meeting on a short notice due to flooding which we found out about at midnight on the day of the event.  This left us scrambling to find an available alternate meeting location close enough to our original venue. Cancelling the meeting was always an option, but we opted to keep it as the very last resort. Luckily, we were fortunate to find a meeting room at the Hampton Inn only a few minutes away from our original location. Having heard of our challenge, our event sponsor, Applied Innovations, stepped-in and covered the meeting room cost in addition to the food and beverages. We would like to thank our volunteers and sponsors who made that event a success: Jess Coburn, CEO and Cara Pluff, Director of Sales at Applied Innovations, Dave Noderer for suggesting the alternate venue and Venkat Subramanian for his hard work keeping our members informed of the venue change and for being our event photographer.   We look forward to seeing you at our upcoming meetings: -September 25th, 2012 with Jonas Stawski, Microsoft MVP -October 23rd, 2012 with our Microsoft Developer Evangelist, Joe “DevFish” Healy -Ending an exciting year will be our November 27th meeting with Dycom Industries’ Senior Software Developer, Tom Huynh.   All the best, --Sam

    Read the article

  • Is there a way to track data structure dependencies from the database, through the tiers, all the way out to a web page?

    - by Sean Mickey
    When we design applications, we generally end up with the same tiered sets of data structures:

    - A persistent data structure that is described using DDL and implemented as RDBMS tables and columns.
    - A set of domain objects that consist primarily of data structures, usually combined with business-rule level logic, that are implemented in a programming language such as Java.
    - A set of service layer interfaces that directly support use case implementations (which use the domain data structures as parameters), implemented as EJBs or something equivalent in another programming language.
    - UI screens that allow users to Create, Retrieve, Update, and (maybe) Delete all manner of data structures and graphs of data structures, with numerous screens and with multiple UI widgets, all structured to support the same data structures.

    But if you want to change the data structures in any of these tiers, it always seems extremely difficult to assess the impact(s) the change will have across the application. UML can help, but tracing through diagram after diagram is not a real solution to this problem. The best I have ever seen was a homespun data tracking spreadsheet document that listed all of the data structures and walked the relationships from tier to tier. Is there a tool or accepted approach that makes it easy to identify a data structure in any tier and easily obtain a list of all dependent:

    - database table and column data structures
    - domain object data structures
    - service layer interface methods and parameter data structures
    - screen & UI component data structures
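
    No single tool walks dependencies across every tier, but the database tier at least can be queried directly, which can seed the kind of tier-to-tier tracking spreadsheet described above. A hedged sketch in T-SQL (SQL Server; the table name dbo.Customer is hypothetical), using the sys.sql_expression_dependencies catalog view and the sys.dm_sql_referencing_entities function:

        -- Which database objects (views, procedures, functions) reference dbo.Customer?
        SELECT
            referencing_schema_name = OBJECT_SCHEMA_NAME(d.referencing_id),
            referencing_object_name = OBJECT_NAME(d.referencing_id),
            d.referencing_class_desc
        FROM sys.sql_expression_dependencies AS d
        WHERE d.referenced_id = OBJECT_ID(N'dbo.Customer');

        -- The same question asked through the dynamic management function:
        SELECT referencing_schema_name, referencing_entity_name
        FROM sys.dm_sql_referencing_entities(N'dbo.Customer', N'OBJECT');

    This only covers the database tier; the domain, service and UI tiers still need either tooling support or the kind of manual mapping the question describes.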

    Read the article

  • We have no SW firewall behind our office HW firewall, admin says it's not req'd

    - by Makach
    I've recently changed jobs and I've been set up with a new workstation. At all previous places where I've worked, they've had some sort of local firewall installed on each and every workstation - but here I've been told not to activate it because it is not necessary since we're already behind a HW firewall. To me this seems a bit naïve, but I can't quite put my finger on why. I always thought a local firewall was good practice, i.e. if something managed to come through the HW firewall, there might be a slight chance that other computers on the LAN would block the internal threat. We have unrestricted access to the internet, and we have a virus checker installed.

    Read the article

  • SQLAuthority News – TechED India 2012 – Bangalore – March 21-23, 2012

    - by pinaldave
    TechEd is one event that every developer and IT professional looks forward to attending. It is the opportunity of a lifetime, and no matter how many times one gets the chance to take part, it is never enough. I still remember every single moment of every TechEd I have attended so far. This year TechEd India 2012 will be held in Bangalore between March 21 and 23. There will be three days full of learning and fun. If you are a data professional, you are going to find yourself very fortunate, as every single day there will be a data track for a different audience: Day 1 will be for Developers, Day 2 for Architects and Day 3 for Database Administrators. Every day we will have plenty of learning from the industry's leading experts. How many of you know that the first TechEd was held in 1993 in Orlando, FL? There is plenty of similar interesting information available on the Wiki page for TechEd. I will be presenting on my favorite subject, performance tuning. Just like every other time, this time the session will be unique and different; I will bring a lesser-known but very important aspect of performance tuning to light. Besides SQL Server, we will be covering lots of other technologies such as Windows 8, Windows Phone, Windows Azure, Visual Studio, System Center, Security, Private Cloud etc. The biggest attractions of TechEd are the Keynote and the Demo Extravaganza - one cannot miss either of them when present at TechEd India. If you are attending TechEd India, I am looking forward to meeting you in person. It is always pleasant to meet the community face to face, and I promise to remember your name. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, T SQL, Technology

    Read the article

  • LCD monitor flickering

    - by stickmangumby
    I've recently got an LCD monitor and it is occasionally 'flickering' colors very noticeably. It's not a new monitor, but the person I'm borrowing it from hasn't had any trouble with it. The flickering occurs across operating systems and screen resolutions. I'm pretty sure it's related to dirty power - it often happens when fluorescent lights get turned on or the fridge starts, but not always. Is this likely to be the cause of the problem? Is there any good way to test this? Is there anything that can be done about bad grounding?

    Read the article

  • Google privacy concerns: trustworthy alternatives for migration?

    - by Markos Fragkakis
    I have come to realize the tremendous amount of information Google has on its users. I am a typical Google user, using Gmail and Google Reader. This means that right now Google has the following information at its disposal:

    - Who my friends are (Gmail)
    - What we talk about (Gmail, Google Talk)
    - What news sources I follow (Google Reader)
    - How frequently I check them and which ones I consider important enough to share (Google Reader)
    - A lot of other stuff
    - What I search for and when, if I search while logged in (Web search)

    I have no reason to believe that this information is used for anything other than adjusting which ads I am shown when I visit a site with Google Ads. However, I have realised that I am in no position to be certain that this is absolutely true, or that it always will be. On the other hand, I don't want to reach the uber-privacy-maniac state of maintaining my own email server and installing a desktop RSS reader on all my machines. So, I am asking for your opinions: what services constitute a good set of alternatives to the Google services, promising better privacy? Pros: privacy, free, powerful, usable.

    Read the article

  • Desktop Fun: Sci-Fi Icons Packs Series 2

    - by Asian Angel
    If you loved our first sci-fi icon packs collection then get ready for more icon goodness with the selection in our second sci-fi series. Sneak Preview As always we have an example desktop full of icon goodness to share with you. Here you can see a Star Trek themed desktop using the “Borg-green” set shown below. Note: Wallpaper can be found here. Our new desktop icons up close… Borg-green *.png format only Download Trek Insignia *.ico format only Download Star Trek Elite Force X *.ico format only Download Starships X *.ico format only Download If I Were A Thief In The 24th Century 1.0 *.ico format only Download Star Wars: Attack of the Clones *.ico format only Download BSG: Frakking Toasters *.ico format only Download Doctor Who *.ico format only Download TRON *.ico format only Download Alien vs Predator Icons *.ico and .png format Download 2001: A Space Odyssey 1.0 *.ico format only Download To the Moon *.ico format only, also has bonus set of wallpapers included! This is what the bonus wallpaper looks like…it comes in the following sizes: 1024*768, 1280*854, 1280*1024, 1440*900, 1600*1200, & 1920*1200. Download Space Icons *.ico and .png format Download Matrix Documentations *.ico format only Download Matrix Rebooted *.ico format only Download If you loved this collection of sci-fi icons then head on over to see our first sci-fi series here. Also, be certain to visit our new Desktop Fun section for more customization goodness!

    Read the article

  • Formatting Dates, Times and Numbers in ASP.NET

    Formatting is the process of converting a variable from its native type into a string representation. Anytime you display a DateTime or numeric variable in an ASP.NET page, you are formatting that variable from its native type into some sort of string representation. How a DateTime or numeric variable is formatted depends on the culture settings and the format string. Because dates and numeric values are formatted differently across cultures, the .NET Framework bases its formatting on the specified culture settings. By default, the formatting routines use the culture settings defined on the web server, but you can indicate that a particular culture be used anytime you format. In addition to the culture settings, formatting is also affected by a format string, which spells out the formatting details to apply. The .NET Framework contains a bounty of format strings. There are standard format strings, which are typically a single letter that applies detailed formatting logic. For example, the "C" format specifier will format a numeric type as a currency value; the "Y" format specifier displays the month name and four-digit year of the specified DateTime value. There are also custom format strings, each of which applies a very specific formatting rule. These custom format strings can be put together to build more intricate formats. For instance, the format string "dddd, MMMM d" displays the full day of the week name followed by a comma followed by the full name of the month followed by the day of the month. For more involved formatting scenarios, where neither the standard nor the custom format strings cut the mustard, you can always create your own formatting extension methods. This article explores the standard format strings for dates, times and numbers and includes a number of custom formatting methods I've created and use in my own projects. There's also a demo application you can download that lets you specify a culture and then shows you the output for the standard format strings for the selected culture. Read on to learn more! Read More >
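
    The article itself is about ASP.NET, but the same .NET standard and custom format strings (and culture names) appear anywhere the .NET formatting routines are used. As a quick, hedged illustration in T-SQL rather than C#: the FORMAT function available in SQL Server 2012 and later hands the format string and culture straight through to .NET, so the specifiers mentioned above can be tried directly from a query window:

        -- Standard numeric format string "C" (currency) under two cultures
        SELECT FORMAT(1234.5678, 'C', 'en-US') AS UsCurrency,    -- $1,234.57
               FORMAT(1234.5678, 'C', 'de-DE') AS DeCurrency;    -- 1.234,57 €

        -- Standard date format string "Y" (month name and four-digit year)
        SELECT FORMAT(SYSDATETIME(), 'Y', 'en-US') AS MonthYear;

        -- Custom date format string "dddd, MMMM d" (weekday, month name, day of month)
        SELECT FORMAT(SYSDATETIME(), 'dddd, MMMM d', 'en-US') AS LongDay,
               FORMAT(SYSDATETIME(), 'dddd, MMMM d', 'fr-FR') AS LongDayFrench;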

    Read the article

  • Building a Distributed Commerce Infrastructure in the Cloud using Azure and Commerce Server

    - by Lewis Benge
    One of the biggest questions I routinely get asked is how scalable Commerce Server is. Of course the text book answer is the product has been around for 10 years, powers some of the largest e-Commerce websites in the world, so it scales horizontally extremely well. One argument however though is what if you can't predict the growth of demand required of your Commerce Platform, or need the ability to scale up during busy seasons such as Christmas for a retail environment but are hesitant on maintaining the infrastructure on a year-round basis? The obvious answer is to utilise the many elasticated cloud infrastructure providers that are establishing themselves in the ever-growing market, the problem however is Commerce Server is still product which has a legacy tightly coupled dependency on Windows and IIS components. Commerce Server 2009 codename "R2" however introduced to the concept of an n-tier deployment of Microsoft Commerce Server, meaning you are no longer tied to core objects API but instead have serializable Commerce Entity objects, and business logic allowing for Commerce Server to now be built into a WCF-based SOA architecture. Presentation layers no-longer now need to remain on the same physical machine as the application server, meaning you can now build the user experience into multiple-technologies and host them in multiple places – leveraging the transport benefits that a WCF service may bring, such as message queuing, security, and multiple end-points. All of this logic will still need to remain in your internal infrastructure, for two reasons. Firstly cloud based computing infrastructure does not support PCI security requirements, and secondly even though many of the legacy Commerce Server dependencies have been abstracted away within this version of the application, it is still not a fully supported to be deployed exclusively into the cloud. If you do wish to benefit from the scalability of the cloud however, you can still achieve a great Commerce Server and Azure setup by utilising both the Azure App Fabric in terms of the service bus, and authentication services and Windows Azure to host any online presence you may require. The architecture would be something similar to this: This setup would allow you to construct your Commerce Services as part of your on-site infrastructure. These services would contain all of the channels custom business logic, and provide the overall interface back into the underlying Commerce Server components. It would be recommended that services are constructed around the specific business domain of the application, which based on your business model would usually consist of separate services around Catalogue, Orders, Search, Profiles, and Marketing. The App Fabric service bus is then used to abstract and aggregate further the services, making them available to the cloud and subsequently secured by App Fabrics authentication services. These services are now available for consumption by any client, using any supported technology – not just .NET. Thus meaning you are now able to construct apps for IPhone, integrate with Java based POS Devices, and any many other potential uses. This aggregation is useful, and forms the basis of the further strategy around diversifying and enhancing the e-Commerce experience, but also provides the foundation for the scalability we want to gain from utilising a cloud-based application platform. 
The Windows Azure application platform is Microsoft solution to benefiting from the true economies of scale in terms of the elasticity of the cloud. Just before the launch of the Azure Platform – Domino's pizza actually managed to run their whole SuperBowl operation from the scalability of Windows Azure, and simply switching back to their traditional operation the next day with no residual infrastructure costs. The platform also natively can subscribe to services and messages exposed within the AppFabric service bus, making it an ideal solution to build and deploy a presentation layer which will need to support of scalable infrastructure – such as a high demand public facing e-Commerce portal, or a promotion element of a brand. Windows Azure has excellent support for ASP.NET, including its own caching providers meaning expensive operations such as catalogue queries can persist in memory on the application server, reducing the demand on internal infrastructure and prioritising it for more business critical operations such as receiving orders and processing payments. Windows Azure also supports other languages too, meaning utilising this approach you can technically build a Commerce Server presentation layer in Java, PHP, or Ruby – or equally in ASP.NET or Silverlight without having to change any of the underlying business or Commerce Server implementation. This SOA-style architecture is one of the primary differentiators for Commerce Server as a product in the e-Commerce market, and now with the introduction of a WCF capability in Commerce Server 2009/2009 R2 the opportunities for extensibility of the both the user experience, and integration into third parties, are drastically increased, all with no effect to the underlying channel logic. So if you are looking at deployment options for your e-Commerce application to help support demand in a cost effective way. I would highly recommend you consider looking at Windows Azure, and if you have any questions in-particular about this style of deployment, please feel free to get in touch!

    Read the article

  • //TODO: Test this thoroughly!!!!!!

    - by Edward Boyle
    I just ran into an ugly sight in my code: //TODO: Test this thoroughly!!!!!! private void ... I would very much like to go back in time and ask the past me what I meant, why did I add that TODO:? …And then, smack the s%#t out of him. No matter how much testing I do of this code I will always wonder if the past me found something. Was it actually that code or was it a calling method that may bring unwanted results. The fact that I find absolutely nothing wrong with the code makes it that much more haunting. The moral of the story; when you find something wrong and need to test it thoroughly, stay up another hour testing it. The clarity in your head at that moment, on that issue, at that specific moment in time, would take hours worth of commenting to justify not finishing it now. Maybe what I meant was: // TODO: Test this thoroughly!!!!!! // All seems fine but test it just in case, not to worry. private void ... Doubt it. -I’m screwed.

    Read the article

  • Many partitions on a single filegroup? Does it make sense?

    - by river0
    Hi, I'm designing a data warehouse solution and I'm a newbie in disk configuration issues, so let me explain. Our storage is spread over 6 storage enclosures, each of them with 5 RAID-1 disk arrays and 2 LUNs defined per disk array, which makes a total of 48 LUNs (this follows the Microsoft Fast Track recommendations for data warehouse architectures). I would like to partition my data; on other projects I have worked on before, we always followed a one partition - one filegroup rule. The Microsoft Fast Track recommendations advise creating a filegroup and then, for that filegroup, a data file per LUN... but I intend to have week-level partitioning, and if I apply that rule I think I'll get too many files and a complex layout. I'm thinking of creating just one filegroup (with the 48 LUN data files), but still creating the partitions, since I want to keep some of the benefits of partitioning, like partition switching... Is this scenario not recommended? What would you suggest?
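
    For what it's worth, SQL Server does allow exactly this layout: a partition scheme can map every partition to the same filegroup with the ALL TO clause, and partition switching still works, since SWITCH only requires the source partition and the target table to sit on the same filegroup, which a single-filegroup scheme satisfies trivially. A hedged sketch (the object names and boundary dates are made up for illustration):

        -- Weekly partition function (one boundary value per week)
        CREATE PARTITION FUNCTION pfWeekly (date)
        AS RANGE RIGHT FOR VALUES ('2010-01-04', '2010-01-11', '2010-01-18', '2010-01-25');

        -- Map every partition to a single filegroup
        CREATE PARTITION SCHEME psWeekly
        AS PARTITION pfWeekly ALL TO ([FG_DATA]);   -- or ALL TO ([PRIMARY])

        -- Partitioned fact table placed on that scheme
        CREATE TABLE dbo.FactSales
        (
            SaleDate date  NOT NULL,
            Amount   money NOT NULL
        ) ON psWeekly (SaleDate);

    The 48 data files belong to the filegroup itself, so the Fast Track file-per-LUN guidance and a single-filegroup partition scheme are not mutually exclusive.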

    Read the article

  • LAMP Stack Versioning -- Is there a website or version tracker source to help suggest the right versions of each part of a platform stack?

    - by Chris Adragna
    Taken singly, it's easy to research versions and compatibility. Version information is readily available for each single part of a platform stack, such as MySQL. You can find out the latest version, the stable version, and sometimes even the percentage of people adopting it by version (personally, I like seeing numbers on adoption rates). However, when trying to find the best possible mix of versions, I have a harder time. For example, "if you're using MySQL 5.5, you'll need PHP version XX or higher." It gets even more difficult when you throw higher-level platforms into the mix, such as Drupal, Joomla, etc. I do consider "wizard"-like installers to be beneficial, such as the Bitnami installers. However, I always wonder if those solutions cater to the least common denominator -- trying to be all things to all people -- and as such, I think I'd do better to install things on my own. Such solutions also seem somewhat slow to adopt new versions, slower than necessary, I suspect. Is there a website or tool that consolidates versioning data in order to help a webmaster choose which versions to deploy or which upgrades to install, in consideration of all the other parts of the stack?

    Read the article

  • Option Trading: Getting the most out of the event session options

    - by extended_events
    You can control different aspects of how an event session behaves by setting the event session options as part of the CREATE EVENT SESSION DDL. The default settings for the event session options are designed to handle most of the common event collection situations so I generally recommend that you just use the defaults. Like everything in the real world though, there are going to be a handful of “special cases” that require something different. This post focuses on identifying the special cases and the correct use of the options to accommodate those cases. There is a reason it’s called Default The default session options specify a total event buffer size of 4 MB with a 30 second latency. Translating this into human terms; this means that our default behavior is that the system will start processing events from the event buffer when we reach about 1.3 MB of events or after 30 seconds, which ever comes first. Aside: What’s up with the 1.3 MB, I thought you said the buffer was 4 MB?The Extended Events engine takes the total buffer size specified by MAX_MEMORY (4MB by default) and divides it into 3 equally sized buffers. This is done so that a session can be publishing events to one buffer while other buffers are being processed. There are always at least three buffers; how to get more than three is covered later. Using this configuration, the Extended Events engine can “keep up” with most event sessions on standard workloads. Why is this? The fact is that most events are small, really small; on the order of a couple hundred bytes. Even when you start considering events that carry dynamically sized data (eg. binary, text, etc.) or adding actions that collect additional data, the total size of the event is still likely to be pretty small. This means that each buffer can likely hold thousands of events before it has to be processed. When the event buffers are finally processed there is an economy of scale achieved since most targets support bulk processing of the events so they are processed at the buffer level rather than the individual event level. When all this is working together it’s more likely that a full buffer will be processed and put back into the ready queue before the remaining buffers (remember, there are at least three) are full. I know what you’re going to say: “My server is exceptional! My workload is so massive it defies categorization!” OK, maybe you weren’t going to say that exactly, but you were probably thinking it. The point is that there are situations that won’t be covered by the Default, but that’s a good place to start and this post assumes you’ve started there so that you have something to look at in order to determine if you do have a special case that needs different settings. So let’s get to the special cases… What event just fired?! How about now?! Now?! If you believe the commercial adage from Heinz Ketchup (Heinz Slow Good Ketchup ad on You Tube), some things are worth the wait. This is not a belief held by most DBAs, particularly DBAs who are looking for an answer to a troubleshooting question fast. If you’re one of these anxious DBAs, or maybe just a Program Manager doing a demo, then 30 seconds might be longer than you’re comfortable waiting. If you find yourself in this situation then consider changing the MAX_DISPATCH_LATENCY option for your event session. This option will force the event buffers to be processed based on your time schedule. 
This option only makes sense for the asynchronous targets, since those are the ones where we allow events to build up in the event buffer – if you're using one of the synchronous targets this option isn't relevant.

Avoid forgotten events by increasing your memory
Have you ever had one of those days where you keep forgetting things? That can happen in Extended Events too; we call it dropped events. In order to optimize for server performance, and to help ensure that Extended Events doesn't block the server, the engine will drop events that can't be published to a buffer because the buffer is full. You can determine if events are being dropped from a session by querying the dm_xe_sessions DMV and looking at the dropped_event_count field.

Aside: Should you care if you're dropping events? Maybe not – think about why you're collecting data in the first place and whether you're really going to miss a few dropped events. For example, if you're collecting query duration stats over thousands of executions of a query, it won't make a huge difference to miss a couple of executions. Use your best judgment.

If you find that your session is dropping events, it means that the event buffer is not large enough to handle the volume of events that are being published. There are two ways to address this problem. First, you could collect fewer events – examine your session to see if you are over collecting. Do you need all the actions you've specified? Could you apply a predicate to be more specific about when you fire the event? Assuming the session is defined correctly, the next option is to change the MAX_MEMORY option to a larger number. Picking the right event buffer size might take some trial and error, but a good place to start is with the number of dropped events compared to the number you've collected.

Aside: There are three different behaviors for dropping events that you specify using the EVENT_RETENTION_MODE option. The default is to allow single event loss, and you should stick with this setting since it is the best choice for keeping the impact on server performance low. You'll be tempted to use the setting to not lose any events (NO_EVENT_LOSS) – resist this urge, since it can result in blocking on the server. If you're worried that you're losing events you should be increasing your event buffer memory as described in this section.

Some events are too big to fail
A less common reason for dropping an event is when an event is so large that it can't fit into the event buffer. Even though most events are going to be small, you might find a condition that occasionally generates a very large event. You can determine if your session is dropping large events by looking at the dm_xe_sessions DMV once again, this time checking the largest_event_dropped_size. If this value is larger than the size of your event buffer [remember, the size of your event buffer, by default, is max_memory / 3] then you need a large event buffer. To specify a large event buffer you set the MAX_EVENT_SIZE option to a value large enough to fit the largest event dropped, based on data from the DMV. When you set this option the Extended Events engine will create two buffers of this size to accommodate these large events. As an added bonus (no extra charge) the large event buffer will also be used to store normal events in the cases where the normal event buffers are all full and waiting to be processed. (Note: This is just a side-effect, not the intended use. If you're dropping many normal events then you should increase your normal event buffer size.)
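
As a concrete version of the checks described above, the relevant counters live in sys.dm_xe_sessions (the session name below is hypothetical):

    -- Are we dropping events, and were any of them too big for a regular buffer?
    SELECT s.name,
           s.dropped_event_count,             -- events lost because all buffers were full
           s.largest_event_dropped_size,      -- compare against the per-buffer size below
           s.regular_buffer_size,             -- roughly MAX_MEMORY / 3 by default
           s.total_buffer_size
    FROM sys.dm_xe_sessions AS s
    WHERE s.name = N'LongRunningQueries';     -- hypothetical session name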
Partitioning: moving your events to a sub-division
Earlier I alluded to the fact that you can configure your event session to use more than the standard three event buffers – this is called partitioning and is controlled by the MEMORY_PARTITION_MODE option. The result of setting this option is fairly easy to explain, but knowing when to use it is a bit more art than science. First the science… You can configure partitioning in three ways: None, Per NUMA Node & Per CPU. This specifies the location where sets of event buffers are created, with fairly obvious implications. There are rules we follow for sub-dividing the total memory (specified by MAX_MEMORY) between all the event buffers that are specific to the mode used:

None: 3 buffers (fixed)
Node: 3 * number_of_nodes
CPU: 2.5 * number_of_cpus

Here are some examples of what this means for different Node/CPU counts:

Configuration      | None      | Node       | CPU
2 CPUs, 1 Node     | 3 buffers | 3 buffers  | 5 buffers
6 CPUs, 2 Nodes    | 3 buffers | 6 buffers  | 15 buffers
40 CPUs, 5 Nodes   | 3 buffers | 15 buffers | 100 buffers

Aside: Buffer size on multi-processor computers. As the number of Nodes or CPUs increases, the size of the event buffer gets smaller, because the total memory is sub-divided into more pieces. The defaults will hold up to this for a while, since each buffer set is holding events only from the Node or CPU that it is associated with, but at some point the buffers will get too small and you'll either see events being dropped or you'll get an error when you create your session because you're below the minimum buffer size. Increase the MAX_MEMORY setting to an appropriate number for the configuration.

The most likely reason to start partitioning is going to be related to performance. If you notice that running an event session is impacting the performance of your server beyond a reasonably expected level [Yes, there is a reasonably expected level of work required to collect events.] then partitioning might be an answer. Before you partition you might want to check a few other things:

- Is your event retention set to NO_EVENT_LOSS and causing blocking? (I told you not to do this.) Consider changing your event loss mode or increasing memory.
- Are you over collecting and causing more work than necessary? Consider adding predicates to events or removing unnecessary events and actions from your session.
- Are you writing the file target to the same slow disk that you use for TempDB and your other high-activity databases? <kidding> <not really> It's always worth considering the end-to-end picture – if you're writing events to a file you can be impacted by I/O, network; all the usual stuff.

Assuming you've ruled out the obvious (and not so obvious) issues, there are performance conditions that will be addressed by partitioning. For example, it's possible to have a successful event session (eg. no dropped events) but still see a performance impact because you have many CPUs all attempting to write to the same free buffer and having to wait in line to finish their work. This is a case where partitioning would relieve the contention between the different CPUs and likely reduce the performance impact caused by the event session. There is no DMV you can check to find these conditions – sorry – that's where the art comes in. This is largely a matter of experimentation. On the bright side, you probably won't need to worry about this level of detail all that often. The performance impact of Extended Events is significantly lower than what you may be used to with SQL Trace.
You will likely only care about the impact if you are trying to set up a long-running event session that will be part of your everyday workload – sessions used for short-term troubleshooting will likely fall into the "reasonably expected impact" category.

Hey buddy – I think you forgot something
OK, there are two options I didn't cover: STARTUP_STATE & TRACK_CAUSALITY. If you want your event sessions to start automatically when the server starts, set the STARTUP_STATE option to ON. (Now there is only one option I didn't cover.) I'm going to leave causality for another post, since it's not really related to session behavior; it's more about event analysis. - Mike
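
Pulling the options from this post together, here is a hedged sketch of a session definition. The session name, event, predicate, and file path are made up for illustration, and the option values are starting points rather than recommendations (the file target is named asynchronous_file_target in SQL Server 2008; in SQL Server 2012 and later it is package0.event_file):

    CREATE EVENT SESSION [LongRunningQueries] ON SERVER
    ADD EVENT sqlserver.sql_statement_completed
    (
        ACTION (sqlserver.sql_text, sqlserver.session_id)
        WHERE (sqlserver.database_id = 5)            -- hypothetical filter; predicates reduce over-collection
    )
    ADD TARGET package0.asynchronous_file_target
    (
        SET filename     = N'C:\XEvents\LongRunningQueries.xel',
            metadatafile = N'C:\XEvents\LongRunningQueries.xem'
    )
    WITH
    (
        MAX_MEMORY = 8 MB,                               -- total event buffer memory, split across buffers
        EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS,  -- the default; avoid NO_EVENT_LOSS
        MAX_DISPATCH_LATENCY = 5 SECONDS,                -- flush buffers sooner than the 30 second default
        MAX_EVENT_SIZE = 0 KB,                           -- raise only if largest_event_dropped_size says so
        MEMORY_PARTITION_MODE = NONE,                    -- PER_NODE / PER_CPU only if contention demands it
        STARTUP_STATE = ON                               -- start the session automatically with the server
    );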

    Read the article

  • IPSec VPN's being dropped by router and will not re-establish

    - by David Gard
    We have 3 sites, with our two remote sites connecting to head office via LAN-to-LAN VPNs. All 3 sites use DrayTek 2900s with firmware version v3.3.1.1_RC2 (this is a release candidate that DrayTek suggested I try, but sadly it made no difference). The only way to re-establish the VPNs once they have been dropped is to restart the router. Head office is set to dial out to both sites, with both the 'Always on' and 'Enable PING to keep alive' (pinging a server in the remote offices) options ticked. However, at random intervals the VPNs drop, logging IKE_RELEASE VPN : Dial-out Profile Index = 7, Name = Shepton (for one connection, and '6' & 'Wincanton' for the other connection). I first tried swapping the router with one from another site, and then had all three replaced, but that failed to solve the problem. Is anyone aware of anything that could cause the VPNs to drop randomly like this? Thanks.

    Read the article

  • NRF Online Merchandising Workshop: Where Online Retailers Are Focusing for Holiday and Beyond

    - by Rose Spicer-Oracle
    Last month we attended the NRF Online Merchandising Workshop in LA, and it was a great opportunity to catch up with our customers, meet new retailers, and hear some great presentations from VF Corporation, Zazzle, Julep Beauty, Backcountry, eBags and more. The one-on-one conversations with merchants and the keynote presentations carry the same themes across companies of all sizes and across verticals. With only 125 days left (and counting) until Black Friday, these conversations provided some great insight into what's top of mind for retailers during the most stressful time of their year, and a sneak peek into what they will deliver this holiday season. Some of the most popular topics were: When to start promoting for holiday: seems like a funny conversation to have in July, but a number of retailers said they already had their holiday shopping gift guides live on their site, and it was attracting a significant portion of their onsite traffic. When it comes to timing, most retailers were questioning when to begin their holiday promotions -- carefully balancing when to release pricing and specials, and knowing that customers are holding out for last-minute deals and price drops. Many retailers noted the frustrations around transparent pricing by Amazon and a few other mega-retailers last year, publishing their "lowest prices of the season" as early as October – assuring shoppers that those prices were the best they could get all season long. Many retailers felt their hands were forced to drop prices. Others kept their set pricing with negative customer reaction, causing some to miss their holiday goals. The pressure is on, and most retailers identified November 1 as their target start date for the holiday promotions blitz. Some are even waiting for the big guys to release their "lowest prices of the season" guides and will then follow suit. Attribution is tough – and a huge focus: understanding the path to conversion is a tough nut to crack, especially in the new omnichannel world where consumers use multiple touchpoints to make a single purchase, and internal management wants hard data. This has led many retailers to invest in attribution; carefully tracking their online marketing efforts to determine what gets "credit" for the sale, instead of giving credit to the "last click." Retailers noted that it is very difficult to determine the numbers when online and offline worlds collide – like when a shopper uses digital channels for research and then makes a purchase in a store. As one of the presenters from The North Face mentioned in her keynote, a key to enabling better customer service and satisfaction when it comes to converged online and offline sales is training the in-store staff, and creating a culture where it eventually "doesn't matter what group gets the credit" if they all add to the sale. No doubt, the area of attribution will be a big area of retail investment in the coming years.
How to plan for the converged world: planning to ensure inventory gets where it needs to be was another concern. In conversations with retailers, we advised them to analyze customer patterns: where shoppers purchase items, where the items were sourced from and even where items are returned. This analysis is very valuable in determining inventory plans. From there, retailers can more accurately plan and allocate inventory to support both the online and offline customer behavior. As we head into the holiday season, the need for accurate enterprise-wide inventory visibility, and providing that information to associates, is even more critical to the brand-wide customer experience.       Improving the search / navigation / usability of the site(s): Aside from some of the big ideas and standard holiday pricing pressure, most conversations we had centered around continuing to improve the basics of the site. Reinvesting in search and navigation came up time and time again (FitForCommerce blogged about what a big topic it was at the event as well). Obviously getting shoppers on their path quickly and allowing them to find what they need fast is critical, but it was definitely interesting to hear just how much effort is still going in to honing the search and navigation experience. Adding new elements to search and navigation like typeahed, inventive navigation refinements, and new navigation categories like gift guides, specialized boutiques and flash sales were top of mind, in addition to searchandising and making search-driven product recommendations. (Oracle can help!)       Reducing cart abandonment: always a hot topic that is top of mind for every online retailer. Getting shoppers to the cart is often less then half the battle; getting them to click “buy” and complete the transaction is much more difficult. While retailers carefully study the checkout process and where shoppers tend to bounce, they know that how they design their checkout page is critical. We’re all online shoppers in our personal lives and we know how frustrating it can be when total prices are not transparent (i.e. shipping, processing, taxes is not included until the very last possible screen before clicking that buy button). Online retailers are struggling with where in the checkout process to surface the total price to be charged to reduce cart abandonment, while not showing the total figure too early in the process that it keeps shoppers from getting to checkout altogether. Recent research shows that providing total pricing prior to the checkout process dramatically reduces cart abandonment – as it serves as a filter to those shopping within a specific price band. Much of the cart abandonment discussion leads us to…       The free shipping / free returns question: it’s no secret that because of Amazon and programs like Prime, consumers expect free shipping, much to the chagrin of the smaller retailer. The reality is that if you’re not a mega-retailer, shipping is an expensive part of doing business that doesn’t allow most retailers to keep their prices low and offer free shipping. This has many retailers venturing out on the “free returns” path, especially in apparel. A number of retailers we spoke with are testing a flat rate shipping fee with free returns to see if they can crack the price threshold where shoppers are willing to pay for shipping with an added service. But, free shipping remains king.      Social ads and retargeting: they are working, but do they turn off consumers? That’s the big question. 
Every retailer we spoke with during a roundtable on the topic said that social ads and retargeting (where that pair of boots you’re been eyeing on a site magically follows you around the Internet) work and are meeting campaign goals. The larger question many retailers are asking is if this type of tactic is turning off a large number of shoppers, even if these campaigns are meeting their early goals. Retailers also mentioned that Facebook ads are working very well for them, especially when it comes to new customer acquisition, serving as a complimentary a channel to SEO when it comes to engaging new customers. While there are always new things to experiment with in retail, standard challenges are top of mind as retailers scramble to get ready for holiday. It will undoubtedly be another record-breaking online shopping season, but as retailers get more and more advanced with each Black Friday, expect some exciting things. This excitement needs to be backed by sound solutions and optimized operations. Then again, consumers are expecting more than ever, so I don’t doubt that retailers are already thinking about the possibilities of holiday 2015… and beyond. Customers who read this article, also found value in the following stories: Personalization for Retail: http://blogs.oracle.com/retail/entry/personalization_for_retailShop Direct User Experience Focus Drives Sales:https://blogs.oracle.com/retail/entry/shop_direct_user_experience_focusMaking Waves: Australian Online Retailer SurfStitch: https://blogs.oracle.com/oracleretail/entry/surf_stitchWhat’s new in Oracle Commerce v11.1 for RetailWhat the Content+Commerce Equation is Missing

    Read the article

< Previous Page | 357 358 359 360 361 362 363 364 365 366 367 368  | Next Page >