Search Results

Search found 46416 results on 1857 pages for 'access log'.


  • You Might Be a DBA

    - by BuckWoody
    With all apologies to Jeff Foxworthy, I was up late Friday night on a holiday weekend (which translated into T-SQL becomes “Maintenance Window”) and I got bored in between the two or three minutes I had between clicks. So I started a “Twitter” meme – and it just took off. I haven’t cleaned these up much, but here, in author order as of Saturday the 29th of May is the list “You might be a DBA” from around the Twitterverse: buckwoody Your two main enemies are developers and SAN admins #youmightbeaDBA  buckwoody People can use Access as a cross or garlic on you #youmightbeaDBA  buckwoody You always plan an exit strategy, even when entering a McDonald's #youmightbeaDBA  buckwoody You can't explain to your family what you really do for a living #youmightbeaDBA  buckwoody You have at least one set of scripts you won't share #youmightbeaDBA  buckwoody You have an opinion on the best code-beautifier #youmightbeaDBA  buckwoody You have children older than the rest of your team #youmightbeaDBA  buckwoody You and the Oracle DBA would kill each other, but you'll happily fight off a developer together first #youmightbeaDBA  buckwoody You've threatened to quit if they give anyone the sa password on production #youmightbeaDBA  buckwoody You've sent a vendor suggestions on improving their database design or code (and been ignored) #youmightbeaDBA  buckwoody You've sent a vendor suggestions on improving their database design or code (and been ignored) #youmightbeaDBA  buckwoody You have an opinion on the best code-beautifier #youmightbeaDBA  buckwoody You have at least one set of scripts you won't share #youmightbeaDBA  buckwoody You refer to co-workers as "carbon-units" #youmightbeaDBA  buckwoody Being paranoid is on your resume at the top #youmightbeaDBA  buckwoody Everyone comes to your cube to find the MSDN DVD's #youmightbeaDBA  buckwoody You always plan an exit strategy, even when entering a McDonald's #youmightbeaDBA  buckwoody You've worn down developers to get your way by explaining normalization levels #youmightbeaDBA  buckwoody You refer to clothes as "Data Abstractions" #youmightbeaDBA  buckwoody Users pester you to be able to put data in a database, then they pester you to take it out and put it in Excel #youmightbeaDBA  buckwoody Others try to de-duplicate data, you try to copy it to more than three locations #youmightbeaDBA  buckwoody You have at least one DLT tape in the trunk of your car #youmightbeaDBA  buckwoody You use twitter and facebook to talk with colleagues because there's no one else in your company that does what you do #youmightbeaDBA  buckwoody Your spouse knows what "ETL" means #youmightbeaDBA  buckwoody You've referred to yourself as the "Data Janitor" #youmightbeaDBA  buckwoody You don't have positive connotations of the word "upgrade" #youmightbeaDBA  buckwoody You get your coffee before you check your servers, because you know you won't get any if you don't #youmightbeaDBA  buckwoody You always come to work through the back door so no one hijacks you on the way to your cube #youmightbeaDBA  buckwoody You check your server logs before you check your e-mail in the morning so you can reply "Yeah, I already fixed that." 
#youmightbeaDBA  buckwoody You have more conference badges than clean socks #youmightbeaDBA  buckwoody Your coffee mug says "It depends" #youmightbeaDBA  buckwoody You can convince a boss that you need 16GB of RAM in your laptop #youmightbeaDBA  buckwoody You've used ebay to find production equipment #youmightbeaDBA  buckwoody You pad all project timelines by 2X, and you still miss them #youmightbeaDBA  buckwoody You know when your company is acquiring another even before the CFO #youmightbeaDBA  buckwoody You pad all project timelines by 2X, and you still miss them #youmightbeaDBA  buckwoody You call aspirin "work vitamins" #youmightbeaDBA  buckwoody You get the same amount of sleep even after you have a child #youmightbeaDBA  buckwoody You obsess about performance metrics from over one year ago #youmightbeaDBA  buckwoody The first thing you buy after the database software is aftermarket tools to manage the database software #youmightbeaDBA  buckwoody You've tried to convince someone else to become a DBA #youmightbeaDBA  buckwoody You use twitter and facebook to talk with colleagues because there's no one else in your company that does what you do #youmightbeaDBA  buckwoody You only know other DBA's by their Tweet Handle #youmightbeaDBA  buckwoody You've explained the difference between 32 and 64-bit to more than one manager in terms they can understand, using puppets #youmightbeaDBA  buckwoody Your two main enemies are developers and SAN admins #youmightbeaDBA  buckwoody You've driven to the Datacenter to install SQL Server because "you don't trust those NOC admins" #youmightbeaDBA  buckwoody You pay more for faster Internet connections than cable at home so you don't have to drive in #youmightbeaDBA  buckwoody You call texting a "queuing system" #youmightbeaDBA  buckwoody You know that if someone can read Perl, they manage an Oracle system #youmightbeaDBA  buckwoody You have an e-mail rule for backup notifications #youmightbeaDBA  buckwoody Your food pyramid includes coffee, salt and fat #youmightbeaDBA  buckwoody You wish everything had a graphical query plan #youmightbeaDBA  buckwoody You refactor your e-mails #youmightbeaDBA  buckwoody You've gotten more help from twitter and facebook than all your years in college #youmightbeaDBA  buckwoody You would pay money for a license plate that has the letters S-Q-L together #youmightbeaDBA  buckwoody You have actually considered making a RAID array from thumb drives #youmightbeaDBA  buckwoody Everything on your laptop is installed from your MSDN subscription #youmightbeaDBA  buckwoody You've written blog posts on technology you've never actually implemented in production #youmightbeaDBA  buckwoody Everything on your laptop is installed from your MSDN subscription #youmightbeaDBA  buckwoody @MidnightDBA Click the #youmightbeaDBA tag. I've had WAY too much coffee today.  
buckwoody There is no other position that is 1-deep except you and the CEO #youmightbeaDBA  buckwoody When you watch "The Office" you call it "OJT" #youmightbeaDBA  buckwoody You would pay money for a license plate that has the letters S-Q-L together #youmightbeaDBA  buckwoody Your blog would make a "best practices" or "worst practices" book #youmightbeaDBA  buckwoody You have actually considered making a RAID array from thumb drives #youmightbeaDBA  buckwoody The first thing you install on your netbook is SSMS #youmightbeaDBA  buckwoody Everything on your laptop is installed from your MSDN subscription #youmightbeaDBA  buckwoody Your watch is set to UTC because it's just easier #youmightbeaDBA  buckwoody You make plenty of money, but you're excited to get a $2.00 squeeze-ball from Quest and Redgate #youmightbeaDBA  buckwoody You make plenty of money, but you're excited to get a $2.00 squeeze-ball from Quest and Redgate #youmightbeaDBA  buckwoody You think data can be represented as something OTHER than XML #youmightbeaDBA  buckwoody You tell people that you made a database query go faster, and expect them to be happy for you #youmightbeaDBA  buckwoody You take the word "NoSQL" as a personal attack #youmightbeaDBA  buckwoody People can use Access as a cross or garlic on you #youmightbeaDBA  buckwoody * == bad #youmightbeaDBA  buckwoody * == bad #youmightbeaDBA  buckwoody There are just as many females in your technical field as males #youmightbeaDBA  buckwoody People can use Access as a cross or garlic on you #youmightbeaDBA  buckwoody You've gotten more help from twitter and facebook than all your years in college #youmightbeaDBA  buckwoody You think that something OTHER than the database might be the performance bottleneck #youmightbeaDBA  buckwoody You refer to time as a "Clustered Index" #youmightbeaDBA  buckwoody You know why "user" refers to both business people and crack addicts #youmightbeaDBA  buckwoody You make plenty of money, but you're excited to get a $2.00 squeeze-ball from Quest and Redgate #youmightbeaDBA  buckwoody You can't explain to your family what you really do for a living #youmightbeaDBA  buckwoody You tell people that you made a database query go faster, and expect them to be happy for you #youmightbeaDBA  buckwoody You think a millisecond is a really long time #youmightbeaDBA  buckwoody You're sitting and typing #youmightbeaDBA when you could be outside #youmightbeaDBA  buckwoody You can't wait for a technical conference so you can wear a kilt - and you're not Scottish #youmightbeaDBA  buckwoody You know that "DBA" stands for "Default Blame Acceptor" #youmightbeaDBA  buckwoody People can use Access as a cross or garlic on you #youmightbeaDBA  buckwoody You know what "the truth, thole truth and nothing but the truth, so help me Codd" means #youmightbeaDBA  buckwoody You've gotten more help from twitter and facebook than all your years in college #youmightbeaDBA  buckwoody You can't talk fast enough to get a concept out of your head so you tweet it instead #youmightbeaDBA  buckwoody You cry when someone doesn't use a WHERE clause #youmightbeaDBA  buckwoody You think data can be represented as something OTHER than XML #youmightbeaDBA  buckwoody You think "Set theory" is not an verb but a noun #youmightbeaDBA  buckwoody You try to convince random strangers to vote on your Connect item #youmightbeaDBA  buckwoody You think 3 hours of contiguous sleep is a good thing #youmightbeaDBA or #youmightbeamother  buckwoody You don't like Oracle, and not just because of what she 
did to Neo #youmightbeaDBA  buckwoody You know when to say "sequel" and "s-q-l" #youmightbeaDBA  buckwoody You know where the data is #youmightbeaDBA  buckwoody You refer to your children as "Fully Redundant Mirrors" #youmightbeaDBA  buckwoody Holiday == "Maintenance Window" #youmightbeaDBA  buckwoody Your laptop is more powerful than the servers in most companies - including your own #youmightbeaDBA  buckwoody You capitalize SELECTed words #youmightbeaDBA  buckwoody You take the word "NoSQL" as a personal attack #youmightbeaDBA  buckwoody You know why "user" refers to both business people and crack addicts #youmightbeaDBA  buckwoody You cringe in public when the word "upgrade" is used in a sentence #youmightbeaDBA  buckwoody Holiday == "Maintenance Window" #youmightbeaDBA  buckwoody All Data Is MetaData means something to you #youmightbeaDBA  buckwoody You've never seen the driveway to your house in the daylight #youmightbeaDBA  buckwoody You think that something OTHER than the database might be the performance bottleneck #youmightbeaDBA  buckwoody Most of your bloodstream is composed of caffeine #youmightbeaDBA  buckwoody Your task list is labeled "CRUD Matrix" #youmightbeaDBA  buckwoody You call your wife/husband a "Linked Server" #youmightbeaDBA  anonythemouse When someone tells you they are going to take a dump and you wonder of which database then #youmightbeaDBA  anonythemouse When it's 11pm on a holiday weekend and you are working #youmightbeaDBA  anonythemouse When you sit down at a table and look for it's primary key #youmightbeaDBA  anonythemouse When getting milk from the fridge you check the expiry date is > getdate() #youmightbeaDBA  blakmk when you wake up dreaming about sql #youmightbeaDBA  CharlesGarver You think a @buckwoody bobblehead would be a cool thing to have on the dashboard of your car #youmightbeaDBA  CharlesGarver Your friends don't understand why you think there's a difference between single and double quotes #youmightbeaDBA  CharlesGarver Even the newest employees know your name from all the downtime notices you've sent out #youmightbeaDBA  CharlesGarver You sometimes feel anxious and think "I should test restoring those backups" and then the feeling passes #youmightbeadba  CharlesGarver You know what a co-worker means when they ask "how is your squirrel server?" 
#youmightbeadba  CharlesGarver You can't sleep at night and you ponder the logisitcs of collecting every copy of Access for the world's biggest bonfire #youmightbeaDBA  CharlesGarver You can't sleep at night and you ponder the logisitcs of collecting every copy of Access for the world's biggest bonfire #youmightbeaDBA  CharlesGarver You're willing to move someone's job up in priority for a box of #voodoodonuts #youmightbeaDBA  CharlesGarver Each person in your company seems to think you work for THEM #youmightbeaDBA  CharlesGarver You have a Love/Hate relationship going on with #Microsoft #youmightbeaDBA  CharlesGarver People ask you to troubleshoot their Access program #youmightbeaDBA  CharlesGarver The first words you hear in the morning are 'your voicemail box is full' #youmightbeaDBA  CharlesGarver The thought of disrupting 500 people's work so you can do something doesn't phase you #youmightbeaDBA  CharlesGarver You can't sleep at night and you ponder the logisitcs of collecting every copy of Access for the world's biggest bonfire #youmightbeaDBA  CharlesGarver Your home computer is backed up in 3 different places #youmightbeaDBA  CharlesGarver Your wardrobe for work includes pajamas #youmightbeaDBA  CharlesGarver Someone tells you to look in the INDEX and you look puzzled before finally going to the back of the book. #youmightbeaDBA  chuckboycejr If you have ever set up a SQLAgent job to email your mobile phone to serve as an alarm clock #youmightbeaDBA  chuckboycejr If you'd rather meet Itzik than Jay Z #youmightbeaDBA  chuckboycejr If you'd rather meet Itzik than Jay Z #youmightbeaDBA  chuckboycejr If you'd wrestle a SysAdmin to the ground to implement #DPA best practices as per @aspiringgeek #youmightbeaDBA  databaseguy I need to be up in 7 hours, so I'm off to bed! I'll have to read the rest of @buckwoody's #youmightbeaDBA posts in the AM. (g'night Buck!)  databaseguy When people ask you about your house, the first thing you describe is the network. #youmightbeaDBA  databaseguy The last thing you say at the office each day is, "is anybody else here? I'm shutting off the lights!" #youmightbeaDBA  databaseguy Your blood pressure rises when you read application specs drafted by marketing. #youmightbeaDBA  databaseguy A good day at work is one when nobody pays you no mind. #youmightbeaDBA  databaseguy You care about latches and wait states. #youmightbeaDBA  databaseguy You have worked over 200 hours on a performance tuning project that required no application changes at all. #youmightbeaDBA  databaseguy The late-night security guard knows the names of your spouse and kids. #youmightbeaDBA  databaseguy You have had vigorous debates about whether it should be pronounced "sequel" or "ess-queue-ell". #youmightbeaDBA  databaseguy You have VPN and RDP software installed on your phone ... just in case. #youmightbeaDBA  databaseguy You have edited a data file by hand, just to see what would happen. #youmightbeaDBA  databaseguy You decorate your office walls with database catalog posters. #youmightbeaDBA  databaseguy You've built programs that access data just to keep other developers from asking you to run queries all the time. #youmightbeaDBA  databaseguy When you watch movies like The Matrix, you find yourself calculating the fasibility of storing all that data. #youmightbeaDBA  databaseguy You have tried to convince someone to spend money on an SSD storage array. #youmightbeaDBA  databaseguy When CPU is spiked on a server, you want to gather forensic evidence. 
#youmightbeaDBA  databaseguy You have to remind developers not to push code to production without checking if the database is ready. #youmightbeaDBA  databaseguy Nobody cares what you wear to work, as long as the thing keeps running. #youmightbeaDBA  databaseguy Telepathy is a job requirement when working with app dev teams. #youmightbeaDBA  databaseguy You read database statistics for the educational value. #youmightbeaDBA  databaseguy And your boss freely admits this to anyone within earshot. #youmightbeaDBA  databaseguy Your boss cannot explain or understand what you do. #youmightbeaDBA  databaseguy You envision ERDs when you see a GUI. #youmightbeaDBA  databaseguy You say things like "applications come and go, but data lasts forever." #youmightbeaDBA  databaseguy You have memorized the names of several of the AdventureWorks employees. #youmightbeaDBA  databaseguy You know what MAXDOP setting you can get away with for a big query based on current server load. #youmightbeaDBA  databaseguy And you immediately recognize the recursion in my last tweet. #youmightbeaDBA  databaseguy You find 50 simultaneous tweets from @buckwoody about #youmightbeaDBA :O)  DBAishness You have "funny stories" about the times your developers accidentally deleted the T-log in their test environment. #youmightbeaDBA  DBAishness Planning to slice and dice your MDW data with PowerPivot makes you giggle like a schoolgirl. #youmightbeaDBA  donalddotfarmer You think @buckwoody lives in the "real world." #youmightbeaDBA  jamach09 @buckwoody #youmightbeaDBA Why go outside when you can sit in the nice cool server room?  jamach09 If you refer to procreation as "Replication", #youmightbeaDBA.  jamach09 If you think ORM is a four-letter word, #youmightbeaDBA  JamesMarsh If you have ever preached the value of Source Code Control, #YouMightBeADBA  jethrocarr @venzann You store your shopping list in a ACID compliant DB #youmightbeaDBA  joe_positive @buckwoody thought it stood for "Don't Bother Asking" #youmightbeaDBA  joe_positive when you check your IT Events Calendar before making weekend plans #youmightbeaDBA  LadyRuna You cringe whenever someone calls Excel a database #youmightbeaDBA  LadyRuna When the waiter says he'll be your server today, you ask how many terabytes he is #youmightbeaDBA  LadyRuna you always call the asterisk a "Star" #youmightbeaDBA  LadyRuna You walk into a server room, say "Nice RACK!" and everyone there knows you're talking about server rack... #youmightbeaDBA  LadyRuna You receive more messages from servers than from friends #youmightbeaDBA  LadyRuna hmmm... #youmightbeaDBA if your recipe for gumbo is "SELECT * FROM Refrigerator"  markjholmes @SQLSoldier Heh. #youmightbeaDBA if you correct other DBAs' spelling of @PaulRandal  markjholmes #youmightbeaDBA if you actually test RAID5 vs RAID10 on your SAN because when it comes to configuration, "it depends."  markjholmes #youmightbeaDBA if you have at least 3 definitions of the word "cluster"  MarlonRibunal 3 Words: @BrentO, snicker, & Access #youmightbeaDBA  MarlonRibunal @onpnt @mikeSQL my appeal was a couple of mins late. Enjoying #youmightbeaDBA  MarlonRibunal @mikeSQL @onpnt pls, don't mention bacon #youmightbeaDBA  merv @buckwoody You HATE 3-way joins #youmightbeaDBA  MidnightDBA If you're up at midnight Tweeting about SQL #youmightbeaDBA  MidnightDBA @buckwoody I'd noticed that. 
:) #youmightbeaDBA  mikeSQL when people talk about "their type" you're thinking varchar, bigint, binary, etc #youmightbeadba  mikeSQL people ask you to go to lunch , but you can't go because you're attending #SQLlunch #youmightbeadba  mikeSQL you laugh for hours at all of the #sqlmoviequotes ....things in which a normal individual would scratch their head at. #youmightbeadba  mikeSQL you laugh for hours at all of the #sqlmoviequotes ....things in which a normal individual would scratch their head at. #youmightbeadba  mrdenny If you think that @buckwoody's demo using PowerPivot to analyze index usage data from DMVs is awesome then #youmightbeaDBA  mrdenny You wish @PaulRandal still worked at Microsoft so that they would make a bobble head of him #youmightbeadba  mrdenny When it's 11pm on a holiday weekend, and your posting stupid jokes on Twitter then #youmightbeadba  mrdenny If you go out with friends and wonder why no one's wearing a kilt then #YouMightBeADBA  mrdenny You can't do basic math, but you know off the top of your head how many CALs $14,412 can buy you. #YoumightbeaDBA  mrdenny If you've ever setup a SQL Job to email you to get you out of a regularly scheduled meeting #YouMightBeADBA.  mrdenny You throw up in your mouth a little when ever you here the word "Access". Even if it doesn't relate to a MS product. #YouMightBeADBA  msdtjones You spend more time listening to @buckwoody than your wife #youmightbeaDBA  NFDotCom You perform "hail deltas" on a regular basis. #YouMightBeADBA  NoelMcKinney If you tell your wife you want to go to Columbus Ohio for your wedding anniversary so you can attend #sqlsat42 then #youmightbeaDBA  NoelMcKinney You read a union is on strike and wonder if it's a UNION ALL #youmightbeaDBA  NoelMcKinney You read a union is on strike and wonder if it's a UNION ALL #youmightbeaDBA  NoelMcKinney Someone asks you to throw another log on the fire and you tell them not to worry about it because Autogrowth is turned on #youmightbeaDBA  Nuurdygirl Even if you have a girlfriend...its possible #youmightbeadba. Yeah-i said its possible!  
Nuurdygirl When your girlfriend has to lean around the laptop to kiss you goodnight #youmightbeadba  Old_Man_Fish If you worry about how big your package is and how long it takes to finish #youmightbeaDBA  Old_Man_Fish If you no longer wonder if someone is in trouble or died if you are getting calls at 2AM #youmightbeaDBA  Old_Man_Fish If, when you hear the word ACCESS with no connotation you blood pressure jumps 50 points, #youmightbeaDBA  onpnt When you hear the word inject you immediately get concerned if your databases are OK #youmightbeaDBA  onpnt Your servers haven't been rebooted in a year #youmightbeaDBA  onpnt You know why it's funny when @PaulRandal has the word, "Sheep" in a tweet #youmightbeaDBA  onpnt You have read BOL without actually having a problem to figure out #youmightbeaDBA  onpnt You can type "SELECT columns FROM tables" without typos but tipen ni Banglish ares a messis #youmightbeaDBA  onpnt DR strategies doesn't include the word, RAID in them #youmightbeaDBA  onpnt you can move a SQL Server instance to a new server without the users ever knowing #youmightbeaDBA  onpnt You have made an SSIS package that is more than one step #youmightbeaDBA  onpnt You have the balls to say no to your boss when they ask for the sa password #youmightbeaDBA  onpnt you google to trouble shoot a problem and end up at your own blog (and it fixes it) #youmightbeaDBA  onpnt You talk your wife into moving the family vacation a week earlier so you can attend the areas local SSUG meeting #youmightbeaDBA  onpnt you can explain to a nontechnical person what a deadlock is #youmightbeaDBA  onpnt You hope a girl asks you what your collation is #youmightbeaDBA  onpnt you make jokes that include the words shrink, truncate and 1205. And you are the only one that laughs at them #youmightbeaDBA  onpnt You rate your ability to stay awake to work longer on blogs, twitter, forums and your day to day job with the 5 9's goal #youmightbeaDBA  onpnt you have major surgery and beg the doctor to release you back to work 5 days later because you miss your servers #youmightbeaDBA #TrueStory  onpnt You do have backups and you know how to use them #youmightbeaDBA  onpnt It's the network #youmightbeaDBA  onpnt When the developers get to work your mood changes rapidly #youmightbeaDBA  onpnt When someone says, "PASS", you first think of karaoke #youmightbeaDBA  onpnt Recruiters try to get you to call them *just* because they think you'll give them @BrentO contact info #youmightbeaDBA  onpnt You chuckle every time you go to grab the "CLR" Calcium, Lime and Rust Remover to clean something #youmightbeaDBA  onpnt @MarlonRibunal @mikeSQL Sorry man, it was already in motion ;-) #youmightbeaDBA  onpnt When you have an "I love bacon" sticker on your laptop. #youmightbeaDBA http://twitpic.com/1ry671  onpnt You sing SELECT statements in the shower #youmightbeaDBA  onpnt When you see a chicken it doesn't remind you of food. It reminds you of a guy named Jorge #youmightbeaDBA  onpnt At time, SQL is your mistress #youmightbeaDBA  onpnt Your wife wonders if SQL is the code name of your mistress at times #youmightbeaDBA  onpnt it's Friday and you are on twitter thinking really hard about what would be funny for hash tag #youmightbeaDBA  onpnt You organize your wife's "decorative"pillows on the bed in a B-Tree structure #youmightbeaDBA  PaulWhiteNZ If you: SELECT TOP (1) milk FROM fridge WHERE use_by_date >= GET_DATE() ORDER BY use_by_date ASC #YouMightBeaDBA  RonDBA #youmightbeaDBA if you read @buckwoody's and @BrentO's blogs.  
ryaneastabrook @buckwoody omg, you have to stand up a website with these on them, they are awesome #youmightbeaDBA  soulvy @StrateSQL @LadyRuna Or a "Splat" #youmightbeaDBA  speedracer You can still fall asleep after three cups of coffee #youmightbeaDBA  speedracer You retweet @buckwoody on a Friday night #youmightbeaDBA  speedracer You can still fall asleep after three cups of coffee #youmightbeaDBA  speedracer Developers make you twitch #youmightbeaDBA  sqlagentman You know what X/1024*8 is. #YouMightBeADBA  SqlAsylum Your still in the office at 5:00 on memorial day weekend. #youmightbeadba :)  SQLBob Whenever someone you know gets pregnant you bring up INNER JOINs or SQL Injection attacks... #youmightbeaDBA  SQLChicken You know one or more SQL folks in the community with an animal in their username #youmightbeaDBA  SQLChicken You've used one or more car analogies to explain how a database works #youmightbeaDBA  SQLChicken “@sqljoe: #youmightbeaDBA if you applied to attend #sqlu and requested @SQLChicken to pull strings for you” lmao nice!  SQLChicken When talking about SSIS your discussions break down into various jokes about packages #youmightbeaDBA  SQLChicken Just SEEING the code for cursors makes you break out in hives #youmightbeaDBA  SQLChicken Just SEEING the code for cursors makes you break out in hives #youmightbeaDBA  SQLCraftsman You coined the phrase "Magic SAN Dust" because calling a vendor's marketing claims BS is not acceptable in a meeting. #YouMightBeADBA  SQLCraftsman If you hear about a new feature with the acronym "DAC" and wonder what disaster of a feature it is attached to this time. #YouMightBeADBA  SQLCraftsman You really own a "Stick of Much Developer Whacking" #YouMightBeADBA  SQLCraftsman You coined the phrase "Magic SAN Dust" because calling a vendor's marketing claims BS is not acceptable in a meeting. #YouMightBeADBA  SQLCraftsman Default Blame Acceptor #YouMightBeADBA  SQLCraftsman If you hear about a new feature with the acronym "DAC" and wonder what disaster of a feature it is attached to this time. #YouMightBeADBA  SQLCraftsman Default Blame Acceptor #YouMightBeADBA  SQLCraftsman If you hear about a new feature with the acronym "DAC" and wonder what disaster of a feature it is attached to this time. #YouMightBeADBA  sqljoe #youmightbeaDBA if you wished your wife knew T-sql. 
USE ShoppingList SELECT NecessaryItems from Supermarket WHERE Category<> ("junk food")  sqljoe #youmightbeaDBA if the first thing you kiss when you wake up is your mobile for not waking you up in the middle of the night  sqljoe #youmightbeaDBA if your wife has a "Do Not Fly" family vacation list of her own including your laptop and mobile  sqljoe #youmightbeaDBA if you have researched for DBA Anonymous groups and attended a #SSUG willing to drop your database (vice)  sqljoe #youmightbeaDBA if your only maintenance windows are staff meetings  sqljoe #youmightbeaDBA if you think of yourself as "The One" in The Matrix "balancing the equation" from The Architect's (developers) poor coding  sqljoe #youmightbeaDBA if you think @PaulRandal should have played the Oracle in The Matrix  sqljoe #youmightbeaDBA if home CD & Movie collection is stored in secured containers,in logical order & naming convention,and with a backup copy  sqljoe #youmightbeaDBA if you applied to attend #sqlu and requested @SQLChicken to pull strings for you  sqljoe #youmightbeaDBA if you have tried to TiVo @MidnightDBA broadcasts  sqljoe #youmightbeaDBA if your #sql user group feels like #AA meetings  sqljoe #youmightbeaDBA if you thought of bringing your #sql books to #sqlsaturday and #sqlpass for autographs  sqljoe #youmightbeaDBA if #sqlpass feels like the #oscars  sqljoe #youmightbeaDBA if you are proud of your small package  SQLLawman #youmightbeaDBA when you hear MDX and Acura is not first thought that comes to mind.  sqlrunner If your wife double checks that there isn't a SQLSat within 200 miles of your vacation destination #youmightbeaDBA  sqlrunner When you're on a conference call and your wife thinks your speaking in a foreign language #youmightbeaDBA  sqlrunner When you're on a conference call and your wife thinks your speaking in a foreign language #youmightbeaDBA  sqlrunner You treat the word 'access' as a verb, not a noun #youmightbeaDBA  sqlrunner If you are happy with sub-second performance #youmightbeaDBA  sqlrunner When you know the names of the NOC people AND their families #youmightbeadba  sqlrunner When you know the names of the NOC people AND their families #youmightbeadba  sqlrunner Your company set's up international phone coverage for your cruise #youmightbeaDBA  sqlsamson @buckwoody if your manager asks you for data and you respond with "there's a script for that" #youmightbeadba  sqlsamson @buckwoody If you receive more messages from your server then your spouse #youmightbeadba  SQLSoldier You've spent all night Valentines Day upgrading the SQL Servers and forgot to tell your wife you'd be working late. #youmightbeadba  SQLSoldier You're flattered when someone calls you a geek. #youmightbeadba  SQLSoldier @llangit @mrdenny it's 11pm on a holiday weekend, & your reading stupid jokes on Twitter then #youmightbeadba  SQLSoldier Your manager borrows lunch money from you because your salary is 30% higher than his. #youmightbeaDBA  SQLSoldier You think "intellisense" is a double negative because it's not intelligent nor makes sense. #youmightbeaDBA  SQLSoldier 75% of the emails you receive at home have the phrase "now following you on Twitter!" in the subject line. #youmightbeaDBA  SQLSoldier You petition Ken Burns to remake Office Space because it should have been 18 hours long. #youmightbeaDBA  SQLSoldier You select a candidate for a Jr DBA position because his resume said he's willing to get your coffee. 
#youmightbeaDBA  SQLSoldier Somebody misquotes @PaulRandall and you call him on your cell to verify. #youmightbeaDBA  SQLSoldier You wish the elevator in your building was slower because it's the last time you'll be left alone all day. #youmightbeaDBA  SQLSoldier The developers sacrifice small animals before giving you their code for review. #youmightbeaDBA  SQLSoldier Developers bring you coffee and a BLT when you review their code. #youmightbeaDBA #IWish  SQLSoldier You can get out of any family get-together by saying you have to work and nobody questions it. #youmightbeaDBA  SQLSoldier You've requested a HP Superdome for you "test" box. #youmightbeaDBA  SQLSoldier Your leave work early because your internet connection to the data center is better at home #youmightbeaDBA  SQLSoldier The new CEO asks you to justify your salary, so you go on vacation for 2 weeks. And he never questions you again. #youmightbeaDBA  SQLSoldier You cheer when Milton burns down the company in Office Space #youmightbeaDBA  SQLSoldier A dev. asks if you've heard about some great new feature in SQL and you show the 16 blog posts you wrote on it ... last year #youmightbeaDBA  SQLSoldier Your dev team is still testing SQL 2008 and you're already planning for SQL 11. #youmightbeaDBA #TrueStory  SQLSoldier The new CEO asks you to justify your salary, so you go on vacation for 2 weeks. And he never questions you again. #youmightbeaDBA  SQLSoldier Your dev team is still testing SQL 2008 and you're already planning for SQL 11. #youmightbeaDBA  SQLSoldier You use a cell phone service coverage map to plan your next vacation. #youmightbeaDBA  SQLSoldier You come in to work at 7 AM because it gives you at least 3 hours without any developers around. #youmightbeaDBA  SQLSoldier You figure out a way to make take your wife on a cruise and deduct it as a business expense. #youmightbeaDBA #sqlcruise  SQLSoldier You name your cat SQLDog because the name @SQLCat was already taken. #youmightbeaDBA  SQLSoldier You rate your blog posts based on the number of retweets you get. #youmightbeaDBA  SQLSoldier You disable random logins just to mess with people. #youmightbeaDBA  SQLSoldier You fall for the pickup line, "Hey baby, what's your collation?" #youmightbeaDBA  SQLSoldier You can blame an outage on anyone in the company because you're the only one that knows how to find out what really happened #youmightbeaDBA  SQLSoldier You can blame an outage on anyone in the company because you're the only one that knows how to find out what really happened #youmightbeaDBA  SQLSoldier You cheer when Milton burns down the company in Office Space #youmightbeaDBA  SQLSoldier Your leave work early because your internet connection to the data center is better at home #youmightbeaDBA  SQLSoldier You cheer when Milton burns down the company in Office Space #youmightbeaDBA  SQLSoldier Your think the 4 food groups are coffee, bacon, fast food, and Mountain Dew. #youmightbeaDBA  SQLSoldier You tell someone your job title and they ask "What?" You describe it and they ask "What?". So you say "computer geek". #youmightbeaDBA  SQLSoldier The #1 referrer to your blog is Twitter.com. #youmightbeaDBA  SQLSoldier Your idea of a good time on a Saturday involves free training. #youmightbeaDBA #sqlsat43  SQLSoldier You write a book that all of your co-workers have and none have read it. #youmightbeaDBA  SQLSoldier You write a book that sells a couple thousand copies and is heralded a best seller. 
#youmightbeaDBA  SQLSoldier No matter how sick you are, you go to work if it's time to pass the pager on to the next guy. #youmightbeaDBA #TrueStory  SQLSoldier You go out on the town, and strangers walk up to you and say, "Hey you're that SQL guy" #youmightbeaDBA #TrueStory  SQLSoldier Your wife asks you to fix something, and you request a downtime window. #youmightbeaDBA  SQLSoldier Your wife asks when you'll be home, and you tell her that you wish you knew. #youmightbeaDBA  SQLSoldier Your best pickup line, "Hey baby, what's your collation?" #youmightbeaDBA  SQLSoldier Your wife asks when you'll be home, and you tell her that you wish you knew. #youmightbeaDBA  SQLSoldier You know that @BuckWoody is not someone's porno name. #youmightbeaDBA  SQLSoldier You list TSQL as your native language on the 2010 census. #youmightbeaDBA  SQLSoldier Starbucks' stock price drops every time you go on vacation. #youmightbeaDBA  SQLSoldier You're happy when the web master says that the website is down. #youmightbeaDBA  SQLSoldier You know that @BuckWoody is not someone's porno name. #youmightbeaDBA  SQLSoldier You get mad when someone calls your car a "heap" because you've always considered it to be a "clustered index". #youmightbeaDBA  SQLSoldier Your blog has more hits than your company's website. #youmightbeaDBA  SQLSoldier You systematically remove the asterisk key from all keyboards in the company except yours. #youmightbeaDBA  SQLSoldier When asked if you recycle, you reply that you run sp_cycle_errorlog every night at midnight #youmightbeaDBA  SQLSoldier You wouldn't allow someone named @AdamMachanic to work on your car. #youmightbeaDBA  SQLSoldier You switch offices every 3 days to avoid developers #youmightbeaDBA  SQLSoldier PSS has your number on speed dial. #youmightbeaDBA  SQLSoldier You frown when you they tell Neo that he's going to the Oracle #youmightbeaDBA  swhaley you regretted saying "This shouldn't effect production" #youmightbeaDBA  swhaley you regretted saying "This shouldn't effect production" #youmightbeaDBA  Tarwn A pleasurable saturday means spending the day learning more about what you already do the rest of the week #youmightbeaDBA ...oh, wait...  thelostforum For great justice; all our base are belong to YOU !! #youmightbeadba  thelostforum @SQLSoldier: You need a witness to use a mirror #youmightbeaDBA ;)  TimCost you capitalize key words. always. everywhere. you can't help it, usually don't even notice. #youmightbeaDBA  Toshana Your the only one in your company not impressed with the developers new application. #youmightbeaDBA  venzann Coming soon from a (respected) book publisher - @buckwoody's #youmightbeaDBA  venzann He's on a role tonight. @buckwoody is summing up my life with his #youmightbeaDBA tweets...  venzann I love the #youmightbeaDBA tag. Found at least 6 new DBAs to follow..  venzann He's on a role tonight. @buckwoody is summing up my life with his #youmightbeaDBA tweets...  venzann You use #sqlhelp as a primary resource during troubleshooting #youmightbeaDBA  venzann You insist on stricter password security for your sql servers than you implement on your own laptop #youmightbeaDBA  WesBrownSQL @buckwoody you are up so late the only tweets you see are from @buckwoody #youmightbeaDBA  WesBrownSQL @SQLSoldier you are upgrading all your 2005 prod servers to 2008 R2 on a three day weekend... #youmightbeaDBA  zippy1981 #youmightbeaDBA if everytime you do something with #mongodb you think of the Vulcan proverb "only Nixon could go to China."  

    Read the article

  • Configuring a PIX 506e for Asterisk

    - by orthogonal3
    Hi all! I'm having problems configuring an old Cisco PIX running 6.3 and wondered if anyone can lend a hand. Simply put, I have a PIX 506e that I want to put in my VoIP data path. I can't update it, and getting a compatible version of Java for that version of PIX is tough, so I can't log onto the web interface. The PIX straddles two networks: 192.168.5.0 on the inside, ...50.0 on the outside; both net masks are 255.255.255.0. I have a local Asterisk server cluster with a single service IP (<local asterisk>). SIP is on UDP 5060 and RTP (for the VoIP data) is on UDP 18000-18999 - I know that's a big range, but hey, may as well. I need the 192.168.5.0 net to have web and FTP access for updates and the like. DHCP, DNS and NTP are already provided on that network, so I don't need external DNS access. So I think I want the following rules:
    - SIP or RTP from <my itsp> arriving at <outside voip ip> NATed to <local asterisk>
    - SIP or RTP able to do the reverse route (should be covered by high sec - low sec??)
    - HTTP and FTP access outbound for software updates for the servers etc.
    I have the following config at the minute - and I think I'm almost there (I hope)...
        interface ethernet0 auto
        interface ethernet1 auto
        nameif ethernet0 outside security0
        nameif ethernet1 inside security100
        enable password wouldyouliketobeapeppertoo encrypted
        passwd wouldyouliketobeapeppertoo encrypted
        hostname afirewall
        domain-name adomain
        fixup protocol dns maximum-length 512
        fixup protocol ftp 21
        fixup protocol h323 h225 1720
        fixup protocol h323 ras 1718-1719
        fixup protocol http 80
        fixup protocol rsh 514
        fixup protocol rtsp 554
        fixup protocol sip 5060
        fixup protocol sip udp 5060
        fixup protocol skinny 2000
        fixup protocol smtp 25
        fixup protocol sqlnet 1521
        fixup protocol tftp 69
        access-list acl_ping permit icmp any any
        access-list voip permit ip host <my itsp> host <local asterisk>
        mtu outside 1500
        mtu inside 1500
        ip address outside <outside pix ip> 255.255.255.0
        ip address inside <inside pix ip> 255.255.255.0
        arp timeout 14400
        global (outside) 1 <outside generic ip>
        nat (inside) 1 192.168.5.0 255.255.255.0 0 0
        static (inside,outside) <outside voip ip> <local asterisk> netmask 255.255.255.255 0 0
        static (outside,inside) <local asterisk> <outside voip ip> netmask 255.255.255.255 0 0
        access-group acl_ping in interface outside
        access-group acl_ping in interface inside
        route outside 0.0.0.0 0.0.0.0 <my next hop router> 1
        route outside <my itsp> 255.255.255.255 <my next hop router> 1
    I think I just need a hand with the access-lists and NAT/static rules. Would anyone be able to help, as I've RTFM'd the Cisco docs a few times and they're heavy? Wishing I'd completed my CCNA now! Thanks all for any help, Phil
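    For what it's worth, here is a minimal sketch of the kind of inbound entries being described, not a verified configuration. It assumes PIX 6.3 ACL syntax and keeps the <my itsp> / <outside voip ip> placeholders; note that only one access-group can be bound to an interface, so these permits would either replace acl_ping on the outside interface or be added to that same list:
        access-list outside_in permit icmp any any
        access-list outside_in permit udp host <my itsp> host <outside voip ip> eq 5060
        access-list outside_in permit udp host <my itsp> host <outside voip ip> range 18000 18999
        access-group outside_in in interface outside
    Outbound HTTP/FTP from the higher-security inside network should already be handled by the existing nat (inside) 1 / global (outside) 1 pair, and the existing static (inside,outside) entry maps <outside voip ip> to <local asterisk> for the inbound traffic the ACL permits.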

    Read the article

  • How to avoid the Portlet Skin mismatch

    - by Martin Deh
    There are probably many ongoing debates about whether to use portlets or task flows in a WebCenter custom portal application.  Usually the main battle on which side to take in these debates is centered around which technology enables better performance.  The good news is that both of my colleagues, Maiko Rocha and George Maggessy, have posted their respective views on this topic, so I will not have to further the discussion.  However, if you do plan to use portlets in a WebCenter custom portal application, this post will help you not have the "portlet skin mismatch" issue.  An example of the presence of the mismatch can be viewed from the application's log: The skin customsharedskin.desktop specified on the requestMap will be used even though the consumer's skin's styleSheetDocumentId on the requestMap does not match the local skin's styleSheetDocument's id. This will impact performance since the consumer and producer stylesheets cannot be shared. The producer styleclasses will not be compressed to avoid conflicts. A reason the ids do not match may be the jars are not identical on the producer and the consumer. For example, one might have trinidad-skins.xml's skin-additions in a jar file on the class path that the other does not have. Notice that due to the mismatch the portlet's CSS will not be able to be compressed, which will most likely impact performance in the portlet's consuming portal. The first part of the blog will define the portlet mismatch and cover some debugging tips that can help you solve the portlet mismatch issue.  Following that I will give a complete example of creating, using and sharing a shared skin in both a portlet producer and the consumer application.
    Portlet Mismatch Defined
    In general, when you consume/render an ADF page (or task flow) using the ADF Portlet bridge, the portlet (producer) would try to use the skin of the consumer page - this is called skin-sharing. When the producer cannot match the consumer skin, the portlet would generate its own stylesheet and reference it from its markup - this is called mismatched-skin. This can happen because:
    1. The consumer and producer use different versions of ADF Faces, or
    2. The consumer has additional skin-additions that the producer doesn't have or vice-versa, or
    3. The producer does not have the consumer skin.
    For cases (1) and (2) above, the producer still uses the consumer skin ID to render its markup. For case (3), the producer would default to using the portlet skin. If there is a skin mismatch then there may be a performance hit because:
    - The browser needs to fetch this extra stylesheet (though it should be cached unless expires caching is turned off)
    - The generated portlet markup uses uncompressed styles, resulting in a larger markup
    It is often not obvious when a skin mismatch occurs, unless you look for any of these indicators:
    - The log messages in the producer log, for example: The skin blafplus-rich.desktop specified on the requestMap will not be used because the styleSheetDocument id on the requestMap does not match the local skin's styleSheetDocument's id. It could mean the jars are not identical. For example, one might have trinidad-skins.xml's skin-additions in a jar file on the class path that the other does not have.
    - View the portlet markup inside the iframe; there should be a <link> tag to the portlet stylesheet resource like this (note the CSS is proxied through the consumer's resourceproxy): <link rel="stylesheet" charset="UTF-8" type="text/css" href="http:.../resourceproxy/portletId...252525252Fadf%252525252Fstyles%252525252Fcache%252525252Fblafplus-rich-portlet-d1062g-en-ltr-gecko.css...
    - Using an HTTP monitoring tool (e.g. Firebug, HttpWatch), you can see a request is made to the portlet stylesheet resource (see URL above).
    There are a number of reasons for mismatched-skin. For the skin to match, the producer and consumer must match the following configurations:
    - The ADF Faces version (different versions may have different style selectors)
    - Style compression, which is defined in the web.xml (default value is false, i.e. compression is ON)
    - Tonal styles or themes, also defined in the web.xml via context-params
    - The same skin additions (jars with skins) are available for both producer and consumer.  Skin additions are defined in the trinidad-skins.xml, using the <skin-addition> tags. These are then aggregated from all the jar files in the classpath. If there's any jar that exists on the producer but not the consumer, or vice versa, you get a mismatch.
    Debugging Tips
    Ensure the style compression and tonal styles/themes match on the consumer and producer by looking at the web.xml documents for the consumer and producer applications. It is a bit more involved to determine if the jars match.  However, you can enable Trinidad logging to show which skin-addition it is processing.  To enable this feature, update the logging.xml log level of both the producer and consumer WLS to FINEST.  For example, in the case of the WebLogic server used by JDeveloper: $JDEV_USER_DIR/system<version number>/DefaultDomain/config/fmwconfig/servers/DefaultServer/logging.xml
    Add a new entry:
        <logger name="org.apache.myfaces.trinidadinternal.skin.SkinUtils" level="FINEST"/>
    Restart WebLogic.  Run the consumer page; you should see the following logging in both the consumer and producer log files. Any entries that don't match are the cause of the mismatch.
    The following is an example of what the log will produce with this setting:
        [SRC_CLASS: org.apache.myfaces.trinidadinternal.skin.SkinUtils] [APP: WebCenter] [SRC_METHOD: _getMetaInfSkinsNodeList]
        Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/announcement-skin.jar!/META-INF/trinidad-skins.xml
        Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/calendar-skin.jar!/META-INF/trinidad-skins.xml
        Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/custComps-skin.jar!/META-INF/trinidad-skins.xml
        Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/forum-skin.jar!/META-INF/trinidad-skins.xml
        Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/page-service-skin.jar!/META-INF/trinidad-skins.xml
        Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/peopleconnections-kudos-skin.jar!/META-INF/trinidad-skins.xml
        Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/peopleconnections-wall-skin.jar!/META-INF/trinidad-skins.xml
        Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/portlet-client-adf-skin.jar!/META-INF/trinidad-skins.xml
        Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/rtc-skin.jar!/META-INF/trinidad-skins.xml
        Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/serviceframework-skin.jar!/META-INF/trinidad-skins.xml
        Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/smarttag-skin.jar!/META-INF/trinidad-skins.xml
        Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/spaces-service-skins.jar!/META-INF/trinidad-skins.xml
        Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.composer/3yo7j/WEB-INF/lib/custComps-skin.jar!/META-INF/trinidad-skins.xml
        Processing skin URL:zip:/tmp/_WL_user/adf.oracle.domain.webapp/q433f9/WEB-INF/lib/adf-richclient-impl-11.jar!/META-INF/trinidad-skins.xml
        Processing skin URL:zip:/tmp/_WL_user/adf.oracle.domain.webapp/q433f9/WEB-INF/lib/dvt-faces.jar!/META-INF/trinidad-skins.xml
        Processing skin URL:zip:/tmp/_WL_user/adf.oracle.domain.webapp/q433f9/WEB-INF/lib/dvt-trinidad.jar!/META-INF/trinidad-skins.xml
    The Complete Example
    The first step is to create the shared library.  The WebCenter documentation covering this is located here in section 15.7.  In addition, our ADF guru Frank Nimphius also covers this in his blog.  Here are my steps (in JDeveloper) to create the skin that will be used as the shared library for both the portlet producer and consumer.
    Create a new Generic Application
    - Give an application name (i.e. MySharedSkin)
    - Give a project name (i.e. MySkinProject)
    - Leave Project Technologies blank (none selected), and click Finish
    Create the trinidad-skins.xml
    - Right-click on the MySkinProject node in the Application Navigator and select "New"
    - In the New Gallery, click on "General", select "File" from the Items, and click OK
    - In the Create File dialog, name the file trinidad-skins.xml, and (IMPORTANT) give the directory path to MySkinProject\src\META-INF
    - In the trinidad-skins.xml, complete the skin entry,
      for example:
        <?xml version="1.0" encoding="windows-1252" ?>
        <skins xmlns="http://myfaces.apache.org/trinidad/skin">
          <skin>
            <id>mysharedskin.desktop</id>
            <family>mysharedskin</family>
            <extends>fusionFx-v1.desktop</extends>
            <style-sheet-name>css/mysharedskin.css</style-sheet-name>
          </skin>
        </skins>
    Create the CSS file
    - In the Application Navigator, right click on the META-INF folder (where the trinidad-skins.xml is located), and select "New"
    - In the New Gallery, select Web-Tier->HTML, CSS File from the Items and click OK
    - In the Create Cascading Style Sheet dialog, give the name (i.e. mysharedskin.css)
    - Ensure that the Directory path is under the META-INF (i.e. MySkinProject\src\META-INF\css)
    - Once the new CSS opens in the editor, add in a style selector.  For example, this selector will style the background of a particular panelGroupLayout:
        af|panelGroupLayout.customPGL{
            background-color:Fuchsia;
        }
    Create the MANIFEST.MF (used for the deployment JAR)
    - In the Application Navigator, right click on the META-INF folder (where the trinidad-skins.xml is located), and select "New"
    - In the New Gallery, click on "General", select "File" from the Items, and click OK
    - In the Create File dialog, name the file MANIFEST.MF, and (IMPORTANT) ensure that the directory path is to MySkinProject\src\META-INF
    - Complete the MANIFEST.MF, where the extension name is the shared library name:
        Manifest-Version: 1.1
        Created-By: Martin Deh
        Implementation-Title: mysharedskin
        Extension-Name: mysharedskin.lib.def
        Specification-Version: 1.0.1
        Implementation-Version: 1.0.1
        Implementation-Vendor: MartinDeh
    Create new Deployment Profile
    - Right click on the MySkinProject node, and select New
    - From the New Gallery, select General->Deployment Profiles, Shared Library JAR File from Items, and click OK
    - In the Create Deployment Profile dialog, give a name (i.e. mysharedskinlib) and click OK
    - In the Edit JAR Deployment dialog, un-check the Include Manifest File option
    - Select Project Output->Contributors, and check Project Source Path
    - Select Project Output->Filters, and ensure that all items under the META-INF folder are selected
    - Click OK to exit the Project Properties dialog
    Deploy the shared lib to WebLogic (start the server before these steps)
    - Right click on MySkinProject and select Deploy
    - For this example, I will deploy to the JDeveloper WLS
    - In the Deploy dialog, select Deploy to WebLogic Application Server and click Next
    - Choose IntegratedWebLogicServer and click Next
    - Select the Deploy to selected instances in the domain radio, select Default Server (note: the server must already be started), and ensure the Deploy as a shared Library radio is selected
    - Click Finish
    - Open the WebLogic console to see the deployed shared library
    The following are the steps to create a simple test Portlet
    Create a new WebCenter Portal - Portlet Producer Application
    - In the Create Portlet Producer dialog, select default settings and click Finish
    - Right click on the Portlets node and select New
    - In the New Gallery, select Web-Tier->Portlets, Standards-based Java Portlet (JSR 286) and click OK
    - In the General Portlet information dialog, give a portlet name (i.e.
      MyPortlet) and click Next 2 times, stopping at Step 3
    - In the Content Types, select the "view" node; in the Implementation Method, select the Generate ADF-Faces JSPX radio and click Finish
    - Once the portlet code is generated, open the view.jspx in the source editor
    - Based on the simple CSS entry, which sets the background color of a panelGroupLayout, replace the <af:form/> tag with the example code:
        <af:form>
          <af:panelGroupLayout id="pgl1" styleClass="customPGL">
            <af:outputText value="background from shared lib skin" id="ot1"/>
          </af:panelGroupLayout>
        </af:form>
    - Since this portlet is to use the shared library skin, in the generated trinidad-config.xml, remove both the skin-family tag and the skin-version tag
    - In the Application Resources view, under Descriptors->META-INF, double-click to open the weblogic-application.xml
    - Add a library reference to the shared skin library (note: the library-name must match the extension-name declared in the MANIFEST.MF):
        <library-ref>
          <library-name>mysharedskin.lib.def</library-name>
        </library-ref>
      Notice that a reference to oracle.webcenter.skin exists.  This is important if this portlet is going to be consumed by a WebCenter Portal application.  If this tag is not present, the portlet skin mismatch will happen.
    Configure the portlet for deployment
    Create the Portlet deployment WAR
    - Right click on the Portlets node and select New
    - In the New Gallery, select Deployment Profiles, WAR File from Items and click OK
    - In the Create Deployment Profile dialog, give a name (i.e. myportletwar), click OK
    - Keep all of the defaults; however, remember the Context Root entry (i.e. MyPortlet4SharedLib-Portlets-context-root, this will be needed to obtain the producer WSDL URL)
    - Click OK, then OK again to exit from the Properties dialog
    Since the weblogic-application.xml has to be included in the deployment, the portlet must be deployed as a WAR within an EAR
    - In the Application dropdown, select Deploy->New Deployment Profile...
    - By default EAR File has been selected, click OK
    - Give the Deployment Profile (EAR) a name (i.e. MyPortletProducer) and click OK
    - In the Properties dialog, select Application Assembly and ensure that the myportletwar is checked
    - Keep all of the other defaults and click OK
    - For this demo, un-check the Auto Generate ..., and all of the Security Deployment Options, click OK
    - Save All
    - In the Application dropdown, select Deploy->MyPortletProducer
    - In the Deployment Action, select Deploy to Application Server, click Next
    - Choose IntegratedWebLogicServer and click Next
    - Select the Deploy to selected instances in the domain radio, select Default Server (note: the server must already be started), and ensure the Deploy as a standalone Application radio is selected
    - The select deployment type (identifying the deployment as a JSR 286 portlet) dialog appears.  Keep the default "Yes" radio selection and click OK
    - Open the WebLogic console to see the deployed Portlet
    The last step is to create the test portlet consuming application.  This will be done using the OOTB WebCenter Portal - Framework Application.
    Create the Portlet Producer Connection
    - In the JDeveloper Deployment log, copy the URL of the portlet deployment (i.e. http://localhost:7101/MyPortlet4SharedLib-Portlets-context-root)
    - Open a browser and paste in the URL.  The Portlet information page should appear.  Click on the WSRP v2 WSDL link
    - Copy the URL from the browser (i.e.
      http://localhost:7101/MyPortlet4SharedLib-Portlets-context-root/portlets/wsrp2?WSDL)
    - In the Application Resources view, right click on the Connections folder and select New Connection->WSRP Connection
    - Give the producer a name or accept the default, click Next
    - Enter (paste in) the WSDL URL, click Next
    - If the connection to the Portlet is successful, Step 3 (Specify Additional ...) should appear.  Accept the defaults and click Finish
    Add the portlet to a test page
    - Open the home.jspx.  Note in the visual editor the orange dashed border, which identifies the panelCustomizable tag.
    - From the Application Resources, select the MyPortlet portlet node, and drag and drop the node into the panelCustomizable section.  A Confirm Portlet Type dialog appears; keep the default ADF Rich Portlet and click OK
    Configure the portlet to use the shared skin library
    - Open the weblogic-application.xml and add the library-ref entry (mysharedskin.lib.def) for the shared skin library.  See the create portlet example above for the steps
    - Since by default the custom portal uses a managed bean to (dynamically) determine the skin family, the default trinidad-config.xml will need to be altered
    - Open the trinidad-config.xml in the editor and replace the EL (preferenceBean) for the skin-family tag with mysharedskin (this is the skin-family name defined in the trinidad-skins.xml)
    - Remove the skin-version tag
    - Right click on the index.html to test the application
    Notice that the JDeveloper log view does not have any reporting of a skin mismatch.  In addition, since I have configured the extra logging outlined in the debugging section above, I can see the processed skin jar in both the producer and consumer logs:
        <SkinUtils> <_getMetaInfSkinsNodeList> Processing skin URL:zip:/JDeveloper/system11.1.1.6.38.61.92/DefaultDomain/servers/DefaultServer/upload/mysharedskin.lib.def/[email protected]/app/mysharedskinlib.jar!/META-INF/trinidad-skins.xml
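    For reference, a minimal sketch of what the consumer's trinidad-config.xml might look like after the change described above (assuming nothing beyond the skin family needs to be set, and with the skin-version tag removed):
        <?xml version="1.0" encoding="UTF-8"?>
        <trinidad-config xmlns="http://myfaces.apache.org/trinidad/config">
          <skin-family>mysharedskin</skin-family>
        </trinidad-config>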

    Read the article

  • Cold Start

    - by antony.reynolds
Well we had snow drifts 3ft deep on Saturday so it must be spring time.  In preparation for Spring we decided to move the lawn tractor.  Of course after sitting in the garage all winter it refused to start.  I then came into the office and needed to start my 11g SOA Suite installation.  I thought about this and decided my tractor might be cranky but at least I can script the startup of my SOA Suite 11g installation. So with this in mind I created 6 scripts.  I created them for Linux but they should translate to Windows without too many problems.  This is left as an exercise to the reader; note you will have to hardcode more than I did in the Linux scripts and create separate script files for the sqlplus and WLST sections. Order to start things I believe there should be order in all things, especially starting the SOA Suite.  So here is my preferred order. Start Database This is needed by EM and the rest of the SOA Suite, so it is best to start it before the Admin Server and managed servers. Start Node Manager on all machines This is needed if you want the scripts to work across machines. Start Admin Server Once this is done in theory you can manually start the managed servers using the WebLogic console.  But then you have to wait for the console to be available.  Scripting it all is a quicker and easier way of starting. Start Managed Servers & Clusters Best to start them one per physical machine at a time to avoid undue load on the machines.  A non-clustered install will have just soa_server1 and bam_server1 by default.  Clusters will have at least SOA and BAM clusters that can be started as a group or individually.  I have provided scripts for standalone servers, but it is easy to change them to work with clusters. Starting Database I have provided a very primitive script (available here) to start the database, the listener and the DB console.  The section highlighted in red needs to match your database name. #!/bin/sh echo "##############################" echo "# Setting Oracle Environment #" echo "##############################" . oraenv <<-EOF orcl EOF echo "#####################" echo "# Starting Database #" echo "#####################" sqlplus / as sysdba <<-EOF startup exit EOF echo "#####################" echo "# Starting Listener #" echo "#####################" lsnrctl start echo "######################" echo "# Starting dbConsole #" echo "######################" emctl start dbconsole read -p "Hit <enter> to continue" Starting SOA Suite My script for starting the SOA Suite (available here) breaks the task down into five sections. Setting the Environment First set up the environment variables.  The variables highlighted in red probably need changing for your environment. #!/bin/sh echo "###########################" echo "# Setting SOA Environment #" echo "###########################" export MW_HOME=~oracle/Middleware11gPS1 export WL_HOME=$MW_HOME/wlserver_10.3 export ORACLE_HOME=$MW_HOME/Oracle_SOA export DOMAIN_NAME=soa_std_domain export DOMAIN_HOME=$MW_HOME/user_projects/domains/$DOMAIN_NAME Starting the Node Manager I start node manager with a nohup to stop it exiting when the script terminates and I redirect the standard output and standard error to a file in a logs directory. cd $DOMAIN_HOME echo "#########################" echo "# Starting Node Manager #" echo "#########################" nohup $WL_HOME/server/bin/startNodeManager.sh >logs/NodeManager.out 2>&1 & Starting the Admin Server I had problems starting the Admin Server from Node Manager so I decided to start it using the command line script.
I again use nohup and redirect output. echo "#########################" echo "# Starting Admin Server #" echo "#########################" nohup ./startWebLogic.sh >logs/AdminServer.out 2>&1 & Starting the Managed Servers I then used WLST (WebLogic Scripting Tool) to start the managed servers.  First I waited for the Admin Server to come up by putting a connect command in a loop.  I could have put the WLST commands into a separate script file but I wanted to reduce the number of files I was using and so used redirected input (here syntax). $ORACLE_HOME/common/bin/wlst.sh <<-EOF import time sleep=time.sleep print "#####################################" print "# Waiting for Admin Server to Start #" print "#####################################" while True:   try:     connect(adminServerName="AdminServer")     break   except:     sleep(10) I then start the SOA server and tell WLST to wait until it is started before returning.  If starting a cluster then the start command would be modified accordingly to start the SOA cluster. print "#######################" print "# Starting SOA Server #" print "#######################" start(name="soa_server1", block="true") I then start the BAM server in the same way as the SOA server. print "#######################" print "# Starting BAM Server #" print "#######################" start(name="bam_server1", block="true") EOF Finally I let people know the servers are up and wait for input in case I am running in a separate window, in which case the result would be lost without the read command. echo "#####################" echo "# SOA Suite Started #" echo "#####################" read -p "Hit <enter> to continue" Stopping the SOA Suite My script for shutting down the SOA Suite (available here)  is basically the reverse of my startup script.  After setting the environment I connect to the Admin Server using WLST and shut down the managed servers and the admin server.  Again the script would need modifying for a cluster. Stopping the Servers If I cannot connect to the Admin Server I try to connect to the node manager, in case the Admin Server is down but the managed servers are up. 
#!/bin/sh echo "###########################" echo "# Setting SOA Environment #" echo "###########################" export MW_HOME=~oracle/Middleware11gPS1 export WL_HOME=$MW_HOME/wlserver_10.3 export ORACLE_HOME=$MW_HOME/Oracle_SOA export DOMAIN_NAME=soa_std_domain export DOMAIN_HOME=$MW_HOME/user_projects/domains/$DOMAIN_NAME cd $DOMAIN_HOME $MW_HOME/Oracle_SOA/common/bin/wlst.sh <<-EOF try:   print("#############################")   print("# Connecting to AdminServer #")   print("#############################")   connect(username='weblogic',password='welcome1',url='t3://localhost:7001') except:   print "#########################################"   print "#   Unable to connect to Admin Server   #"   print "# Attempting to connect to Node Manager #"   print "#########################################"   nmConnect(domainName=os.getenv("DOMAIN_NAME")) print "#######################" print "# Stopping BAM Server #" print "#######################" shutdown('bam_server1') print "#######################" print "# Stopping SOA Server #" print "#######################" shutdown('soa_server1') print "#########################" print "# Stopping Admin Server #" print "#########################" shutdown('AdminServer') disconnect() nmDisconnect() EOF Stopping the Node Manager I stopped the node manager by searching for the java node manager process using the ps command and then killing that process. echo "#########################" echo "# Stopping Node Manager #" echo "#########################" kill -9 `ps -ef | grep java | grep NodeManager |  awk '{print $2;}'` echo "#####################" echo "# SOA Suite Stopped #" echo "#####################" read -p "Hit <enter> to continue" Stopping the Database Again my script for shutting down the database is the reverse of my start script.  It is available here.  The only change needed might be to the database name. #!/bin/sh echo "##############################" echo "# Setting Oracle Environment #" echo "##############################" . oraenv <<-EOF orcl EOF echo "######################" echo "# Stopping dbConsole #" echo "######################" emctl stop dbconsole echo "#####################" echo "# Stopping Listener #" echo "#####################" lsnrctl stop echo "#####################" echo "# Stopping Database #" echo "#####################" sqlplus / as sysdba <<-EOF shutdown immediate exit EOF read -p "Hit <enter> to continue" Cleaning Up Cleaning SOA Suite I often run tests and want to clean up all the log files.  The following script (available here) does this for the WebLogic servers in a given domain on a machine.  After setting the domain I just remove all files under the servers logs directories.  It also cleans up the log files I created with my startup scripts.  These scripts could be enhanced to copy off the log files if you needed them but in my test environments I don’t need them and would prefer to reclaim the disk space. 
#!/bin/sh echo "###########################" echo "# Setting SOA Environment #" echo "###########################" export MW_HOME=~oracle/Middleware11gPS1 export WL_HOME=$MW_HOME/wlserver_10.3 export ORACLE_HOME=$MW_HOME/Oracle_SOA export DOMAIN_NAME=soa_std_domain export DOMAIN_HOME=$MW_HOME/user_projects/domains/$DOMAIN_NAME echo "##########################" echo "# Cleaning SOA Log Files #" echo "##########################" cd $DOMAIN_HOME rm -Rf logs/* servers/*/logs/* read -p "Hit <enter> to continue" Cleaning Database I also created a script to clean up the dump files of an Oracle database instance and also the EM log files (available here).  This relies on the machine name being correct as the EM log files are stored in a directory that is based on the hostname and the Oracle SID. #!/bin/sh echo "##############################" echo "# Setting Oracle Environment #" echo "##############################" . oraenv <<-EOF orcl EOF echo "#############################" echo "# Cleaning Oracle Log Files #" echo "#############################" rm -Rf $ORACLE_BASE/admin/$ORACLE_SID/*dump/* rm -Rf $ORACLE_HOME/`hostname`_$ORACLE_SID/sysman/log/* read -p "Hit <enter> to continue" Summary Hope you find the above scripts useful.  They certainly stop me hanging around waiting for things to happen on my test machine and make it easy to run a test, change parameters, bounce the SOA Suite and clean the logs between runs so I can see exactly what is happening. Now I need to get that mower started…
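A follow-up on the cluster note above: for a clustered install the WLST start and shutdown commands can target the cluster rather than the individual managed servers. A minimal sketch, assuming a cluster named SOA_Cluster (the name is a placeholder, not taken from this post):

    print "########################"
    print "# Starting SOA Cluster #"
    print "########################"
    # start() also accepts a Cluster target, so the whole cluster is started as a group
    start(name="SOA_Cluster", type="Cluster", block="true")

    # and in the stop script, the matching shutdown
    shutdown(name="SOA_Cluster", entityType="Cluster")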

    Read the article

  • Auto blocking attacking IP address

    - by dong
    This is to share my PowerShell code online. I original asked this question on MSDN forum (or TechNet?) here: http://social.technet.microsoft.com/Forums/en-US/winserversecurity/thread/f950686e-e3f8-4cf2-b8ec-2685c1ed7a77 In short, this is trying to find attacking IP address then add it into Firewall block rule. So I suppose: 1, You are running a Windows Server 2008 facing the Internet. 2, You need to have some port open for service, e.g. TCP 21 for FTP; TCP 3389 for Remote Desktop. You can see in my code I’m only dealing with these two since that’s what I opened. You can add further port number if you like, but the way to process might be different with these two. 3, I strongly suggest you use STRONG password and follow all security best practices, this ps1 code is NOT for adding security to your server, but reduce the nuisance from brute force attack, and make sys admin’s life easier: i.e. your FTP log won’t hold megabytes of nonsense, your Windows system log will not roll back and only can tell you what happened last month. 4, You are comfortable with setting up Windows Firewall rules, in my code, my rule has a name of “MY BLACKLIST”, you need to setup a similar one, and set it to BLOCK everything. 5, My rule is dangerous because it has the risk to block myself out as well. I do have a backup plan i.e. the DELL DRAC5 so that if that happens, I still can remote console to my server and reset the firewall. 6, By no means the code is perfect, the coding style, the use of PowerShell skills, the hard coded part, all can be improved, it’s just that it’s good enough for me already. It has been running on my server for more than 7 MONTHS. 7, Current code still has problem, I didn’t solve it yet, further on this point after the code. :)    #Dong Xie, March 2012  #my simple code to monitor attack and deal with it  #Windows Server 2008 Logon Type  #8: NetworkCleartext, i.e. FTP  #10: RemoteInteractive, i.e. RDP    $tick = 0;  "Start to run at: " + (get-date);    $regex1 = [regex] "192\.168\.100\.(?:101|102):3389\s+(\d+\.\d+\.\d+\.\d+)";  $regex2 = [regex] "Source Network Address:\t(\d+\.\d+\.\d+\.\d+)";    while($True) {   $blacklist = @();     "Running... (tick:" + $tick + ")"; $tick+=1;    #Port 3389  $a = @()  netstat -no | Select-String ":3389" | ? { $m = $regex1.Match($_); `    $ip = $m.Groups[1].Value; if ($m.Success -and $ip -ne "10.0.0.1") {$a = $a + $ip;} }  if ($a.count -gt 0) {    $ips = get-eventlog Security -Newest 1000 | Where-Object {$_.EventID -eq 4625 -and $_.Message -match "Logon Type:\s+10"} | foreach { `      $m = $regex2.Match($_.Message); $ip = $m.Groups[1].Value; $ip; } | Sort-Object | Tee-Object -Variable list | Get-Unique    foreach ($ip in $a) { if ($ips -contains $ip) {      if (-not ($blacklist -contains $ip)) {        $attack_count = ($list | Select-String $ip -SimpleMatch | Measure-Object).count;        "Found attacking IP on 3389: " + $ip + ", with count: " + $attack_count;        if ($attack_count -ge 20) {$blacklist = $blacklist + $ip;}      }      }    }  }      #FTP  $now = (Get-Date).AddMinutes(-5); #check only last 5 mins.     #Get-EventLog has built-in switch for EventID, Message, Time, etc. but using any of these it will be VERY slow.  
$count = (Get-EventLog Security -Newest 1000 | Where-Object {$_.EventID -eq 4625 -and $_.Message -match "Logon Type:\s+8" -and `              $_.TimeGenerated.CompareTo($now) -gt 0} | Measure-Object).count;  if ($count -gt 50) #threshold  {     $ips = @();     $ips1 = dir "C:\inetpub\logs\LogFiles\FPTSVC2" | Sort-Object -Property LastWriteTime -Descending `       | select -First 1 | gc | select -Last 200 | where {$_ -match "An\+error\+occured\+during\+the\+authentication\+process."} `        | Select-String -Pattern "(\d+\.\d+\.\d+\.\d+)" | select -ExpandProperty Matches | select -ExpandProperty value | Group-Object `        | where {$_.Count -ge 10} | select -ExpandProperty Name;       $ips2 = dir "C:\inetpub\logs\LogFiles\FTPSVC3" | Sort-Object -Property LastWriteTime -Descending `       | select -First 1 | gc | select -Last 200 | where {$_ -match "An\+error\+occured\+during\+the\+authentication\+process."} `        | Select-String -Pattern "(\d+\.\d+\.\d+\.\d+)" | select -ExpandProperty Matches | select -ExpandProperty value | Group-Object `        | where {$_.Count -ge 10} | select -ExpandProperty Name;     $ips += $ips1; $ips += $ips2; $ips = $ips | where {$_ -ne "10.0.0.1"} | Sort-Object | Get-Unique;         foreach ($ip in $ips) {       if (-not ($blacklist -contains $ip)) {        "Found attacking IP on FTP: " + $ip;        $blacklist = $blacklist + $ip;       }     }  }        #Firewall change <# $current = (netsh advfirewall firewall show rule name="MY BLACKLIST" | where {$_ -match "RemoteIP"}).replace("RemoteIP:", "").replace(" ","").replace("/255.255.255.255",""); #inside $current there is no \r or \n need remove. foreach ($ip in $blacklist) { if (-not ($current -match $ip) -and -not ($ip -like "10.0.0.*")) {"Adding this IP into firewall blocklist: " + $ip; $c= 'netsh advfirewall firewall set rule name="MY BLACKLIST" new RemoteIP="{0},{1}"' -f $ip, $current; Invoke-Expression $c; } } #>    foreach ($ip in $blacklist) {    $fw=New-object –comObject HNetCfg.FwPolicy2; # http://blogs.technet.com/b/jamesone/archive/2009/02/18/how-to-manage-the-windows-firewall-settings-with-powershell.aspx    $myrule = $fw.Rules | where {$_.Name -eq "MY BLACKLIST"} | select -First 1; # Potential bug here?    if (-not ($myrule.RemoteAddresses -match $ip) -and -not ($ip -like "10.0.0.*"))      {"Adding this IP into firewall blocklist: " + $ip;         $myrule.RemoteAddresses+=(","+$ip);      }  }    Wait-Event -Timeout 30 #pause 30 secs    } # end of top while loop.   Further points: 1, I suppose the server is listening on port 3389 on server IP: 192.168.100.101 and 192.168.100.102, you need to replace that with your real IP. 2, I suppose you are Remote Desktop to this server from a workstation with IP: 10.0.0.1. Please replace as well. 3, The threshold for 3389 attack is 20, you don’t want to block yourself just because you typed your password wrong 3 times, you can change this threshold by your own reasoning. 4, FTP is checking the log for attack only to the last 5 mins, you can change that as well. 5, I suppose the server is serving FTP on both IP address and their LOG path are C:\inetpub\logs\LogFiles\FPTSVC2 and C:\inetpub\logs\LogFiles\FPTSVC3. Change accordingly. 6, FTP checking code is only asking for the last 200 lines of log, and the threshold is 10, change as you wish. 7, the code runs in a loop, you can set the loop time at the last line. 
To run this code, copy and paste it into your editor, finish all the editing, get it onto your server, open a CMD window, and type powershell.exe -file your_powershell_file_name.ps1. It will start running, and you can Ctrl-C to break it. While running, the script prints its progress; when it detects an attack it reports the IP being added to the firewall rule. Regarding the design of the code: 1, There are many ways you can detect an attack, but adding an IP into a block rule is no small thing; you need to think hard before doing it. Reasons for that include: you don't want to block yourself, and you don't want to block your customer/user, i.e. the good guy. 2, Thus for each service/port I double-check. For 3389, the IP first needs to show up in netstat.exe, then in the Event log; for FTP, first check the Event log, then the FTP log files. 3, At three places I need to make sure I'm not adding myself into the block rule: -ne with a single IP, -like with a subnet.   Now the final bit: 1, The code will stop working after a while (depending on how heavily you are attacked, that could be weeks, months, or days?!). It will throw a red error message in CMD. Don't panic, it does no harm, but it is also no longer blocking new attacks. THE REASON is not confirmed with MS people: the COM object that manages the firewall will only accept a list of IP addresses up to a length of around 32KB, I think; once it reaches that limit, you get the error message. 2, This is in fact my second solution using the COM object; the first solution is still in the comment block for your reference, which uses netsh. That fails because, being run from CMD, you can only throw it a list of IPs up to around 8KB. 3, I haven't worked out the workaround yet; some ideas include: wrap that RemoteAddresses setting line with error checking and, once it reaches the limit, use the newly detected IPs as the list instead of appending to it. This basically resets your block rule to ground zero and loses the previous bad IPs. This does less harm than it sounds, because if, after a certain period has passed, any of these bad IPs still haven't repented and continue to attack you, they only get 30 seconds or 20 guesses at your password before you block them again. And there is the benefit that the bad IP may turn back to good hands again, and you are not blocking a potential customer or your CEO's home PC just because once upon a time it was a zombie. Thus the ZEN of blocking: never block any IP for too long. 4, But if you insist on blocking the ugly forever, my other ideas include: call MS support and ask them how to set an arbitrary length of IP addresses in a rule; at least from my experience on the Forum, they don't know and they don't care, because they think dynamic blocking should be done by some expensive hardware. Or, from a programming perspective, you can create a new rule once the old one is full, so you'll have MY BLACKLIST1, MY BLACKLIST2, MY BLACKLIST3, … etc. Once in a while you can compile them together and start a business selling your blacklist on the market! Enjoy the code! p.s. (PowerShell is REALLY REALLY GREAT!)
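A minimal sketch of the workaround described in point 3 above (an untested illustration using the same rule name and variables as the script, not code from the original post): catch the failure when the RemoteAddresses list is full, then rebuild the rule from only the newly detected IPs instead of appending.

    foreach ($ip in $blacklist) {
      $fw = New-Object -ComObject HNetCfg.FwPolicy2;
      $myrule = $fw.Rules | where {$_.Name -eq "MY BLACKLIST"} | select -First 1;
      if (-not ($myrule.RemoteAddresses -match $ip) -and -not ($ip -like "10.0.0.*")) {
        try {
          $myrule.RemoteAddresses += ("," + $ip);   # the line that eventually fails when the list is too long
        } catch {
          "RemoteAddresses list is full, resetting the rule to the newly detected IPs";
          $myrule.RemoteAddresses = ($blacklist -join ",");   # start over from the current batch only
        }
      }
    }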

    Read the article

  • C#/.NET Fundamentals: Choosing the Right Collection Class

    - by James Michael Hare
The .NET Base Class Library (BCL) has a wide array of collection classes at your disposal which make it easy to manage collections of objects. While it's great to have so many classes available, it can be daunting to choose the right collection to use for any given situation. As hard as it may be, choosing the right collection can be absolutely key to the performance and maintainability of your application! This post will look at breaking down any confusion between each collection and the situations in which they excel. We will be spending most of our time looking at the System.Collections.Generic namespace, which is the recommended set of collections. The Generic Collections: System.Collections.Generic namespace The generic collections were introduced in .NET 2.0 in the System.Collections.Generic namespace. This is the main body of collections you should tend to focus on first, as they will tend to suit 99% of your needs right up front. It is important to note that the generic collections are unsynchronized. This decision was made for performance reasons because depending on how you are using the collections it's completely possible that synchronization may not be required or may be needed on a higher level than simple method-level synchronization. Furthermore, concurrent read access (all writes done at beginning and never again) is always safe, but for concurrent mixed access you should either synchronize the collection or use one of the concurrent collections. So let's look at each of the collections in turn and its various pros and cons; at the end we'll summarize with a table to help make it easier to compare and contrast the different collections. The Associative Collection Classes Associative collections store a value in the collection by providing a key that is used to add/remove/lookup the item. Hence, the container associates the value with the key. These collections are most useful when you need to lookup/manipulate a collection using a key value. For example, if you wanted to look up an order in a collection of orders by an order id, you might have an associative collection where the key is the order id and the value is the order. The Dictionary<TKey,TValue> is probably the most used associative container class. The Dictionary<TKey,TValue> is the fastest class for associative lookups/inserts/deletes because it uses a hash table under the covers. Because the keys are hashed, the key type should correctly implement GetHashCode() and Equals() appropriately or you should provide an external IEqualityComparer to the dictionary on construction. The insert/delete/lookup time of items in the dictionary is amortized constant time - O(1) - which means no matter how big the dictionary gets, the time it takes to find something remains relatively constant. This is highly desirable for high-speed lookups. The only downside is that the dictionary, by nature of using a hash table, is unordered, so you cannot easily traverse the items in a Dictionary in order. The SortedDictionary<TKey,TValue> is similar to the Dictionary<TKey,TValue> in usage but very different in implementation. The SortedDictionary<TKey,TValue> uses a binary tree under the covers to maintain the items in order by the key. As a consequence of sorting, the type used for the key must correctly implement IComparable<TKey> so that the keys can be correctly sorted.
The sorted dictionary trades a little bit of lookup time for the ability to maintain the items in order, thus insert/delete/lookup times in a sorted dictionary are logarithmic - O(log n). Generally speaking, with logarithmic time, you can double the size of the collection and it only has to perform one extra comparison to find the item. Use the SortedDictionary<TKey,TValue> when you want fast lookups but also want to be able to maintain the collection in order by the key. The SortedList<TKey,TValue> is the other ordered associative container class in the generic containers. Once again SortedList<TKey,TValue>, like SortedDictionary<TKey,TValue>, uses a key to sort key-value pairs. Unlike SortedDictionary, however, items in a SortedList are stored as an ordered array of items. This means that insertions and deletions are linear - O(n) - because deleting or adding an item may involve shifting all items up or down in the list. Lookup time, however is O(log n) because the SortedList can use a binary search to find any item in the list by its key. So why would you ever want to do this? Well, the answer is that if you are going to load the SortedList up-front, the insertions will be slower, but because array indexing is faster than following object links, lookups are marginally faster than a SortedDictionary. Once again I'd use this in situations where you want fast lookups and want to maintain the collection in order by the key, and where insertions and deletions are rare. The Non-Associative Containers The other container classes are non-associative. They don't use keys to manipulate the collection but rely on the object itself being stored or some other means (such as index) to manipulate the collection. The List<T> is a basic contiguous storage container. Some people may call this a vector or dynamic array. Essentially it is an array of items that grow once its current capacity is exceeded. Because the items are stored contiguously as an array, you can access items in the List<T> by index very quickly. However inserting and removing in the beginning or middle of the List<T> are very costly because you must shift all the items up or down as you delete or insert respectively. However, adding and removing at the end of a List<T> is an amortized constant operation - O(1). Typically List<T> is the standard go-to collection when you don't have any other constraints, and typically we favor a List<T> even over arrays unless we are sure the size will remain absolutely fixed. The LinkedList<T> is a basic implementation of a doubly-linked list. This means that you can add or remove items in the middle of a linked list very quickly (because there's no items to move up or down in contiguous memory), but you also lose the ability to index items by position quickly. Most of the time we tend to favor List<T> over LinkedList<T> unless you are doing a lot of adding and removing from the collection, in which case a LinkedList<T> may make more sense. The HashSet<T> is an unordered collection of unique items. This means that the collection cannot have duplicates and no order is maintained. Logically, this is very similar to having a Dictionary<TKey,TValue> where the TKey and TValue both refer to the same object. This collection is very useful for maintaining a collection of items you wish to check membership against. For example, if you receive an order for a given vendor code, you may want to check to make sure the vendor code belongs to the set of vendor codes you handle. 
In these cases a HashSet<T> is useful for super-quick lookups where order is not important. Once again, like in Dictionary, the type T should have a valid implementation of GetHashCode() and Equals(), or you should provide an appropriate IEqualityComparer<T> to the HashSet<T> on construction. The SortedSet<T> is to HashSet<T> what the SortedDictionary<TKey,TValue> is to Dictionary<TKey,TValue>. That is, the SortedSet<T> is a binary tree where the key and value are the same object. This once again means that adding/removing/lookups are logarithmic - O(log n) - but you gain the ability to iterate over the items in order. For this collection to be effective, type T must implement IComparable<T> or you need to supply an external IComparer<T>. Finally, the Stack<T> and Queue<T> are two very specific collections that allow you to handle a sequential collection of objects in very specific ways. The Stack<T> is a last-in-first-out (LIFO) container where items are added and removed from the top of the stack. Typically this is useful in situations where you want to stack actions and then be able to undo those actions in reverse order as needed. The Queue<T> on the other hand is a first-in-first-out container which adds items at the end of the queue and removes items from the front. This is useful for situations where you need to process items in the order in which they came, such as a print spooler or waiting lines. So that's the basic collections. Let's summarize what we've learned in a quick reference table.  Collection Ordered? Contiguous Storage? Direct Access? Lookup Efficiency Manipulate Efficiency Notes Dictionary No Yes Via Key Key: O(1) O(1) Best for high performance lookups. SortedDictionary Yes No Via Key Key: O(log n) O(log n) Compromise of Dictionary speed and ordering, uses binary search tree. SortedList Yes Yes Via Key Key: O(log n) O(n) Very similar to SortedDictionary, except tree is implemented in an array, so has faster lookup on preloaded data, but slower loads. List No Yes Via Index Index: O(1) Value: O(n) O(n) Best for smaller lists where direct access required and no ordering. LinkedList No No No Value: O(n) O(1) Best for lists where inserting/deleting in middle is common and no direct access required. HashSet No Yes Via Key Key: O(1) O(1) Unique unordered collection, like a Dictionary except key and value are same object. SortedSet Yes No Via Key Key: O(log n) O(log n) Unique ordered collection, like SortedDictionary except key and value are same object. Stack No Yes Only Top Top: O(1) O(1)* Essentially same as List<T> except only process as LIFO Queue No Yes Only Front Front: O(1) O(1) Essentially same as List<T> except only process as FIFO   The Original Collections: System.Collections namespace The original collection classes are largely considered deprecated by developers and by Microsoft itself. In fact they indicate that for the most part you should always favor the generic or concurrent collections, and only use the original collections when you are dealing with legacy .NET code. Because these collections are out of vogue, let's just briefly mention the original collection and their generic equivalents: ArrayList A dynamic, contiguous collection of objects. Favor the generic collection List<T> instead. Hashtable Associative, unordered collection of key-value pairs of objects. Favor the generic collection Dictionary<TKey,TValue> instead. Queue First-in-first-out (FIFO) collection of objects. Favor the generic collection Queue<T> instead. 
SortedList Associative, ordered collection of key-value pairs of objects. Favor the generic collection SortedList<TKey,TValue> instead. Stack Last-in-first-out (LIFO) collection of objects. Favor the generic collection Stack<T> instead. In general, the older collections are non-type-safe and in some cases less performant than their generic counterparts. Once again, the only reason you should fall back on these older collections is backward compatibility with legacy code and libraries. The Concurrent Collections: System.Collections.Concurrent namespace The concurrent collections are new as of .NET 4.0 and are included in the System.Collections.Concurrent namespace. These collections are optimized for use in situations where multi-threaded read and write access of a collection is desired. The concurrent queue, stack, and dictionary work much as you'd expect. The bag and blocking collection are more unusual. Below is a summary of each with a link to a blog post I did on each of them. ConcurrentQueue Thread-safe version of a queue (FIFO). For more information see: C#/.NET Little Wonders: The ConcurrentStack and ConcurrentQueue ConcurrentStack Thread-safe version of a stack (LIFO). For more information see: C#/.NET Little Wonders: The ConcurrentStack and ConcurrentQueue ConcurrentBag Thread-safe unordered collection of objects. Optimized for situations where a thread may be both reader and writer. For more information see: C#/.NET Little Wonders: The ConcurrentBag and BlockingCollection ConcurrentDictionary Thread-safe version of a dictionary. Optimized for multiple readers (allows multiple readers under the same lock). For more information see C#/.NET Little Wonders: The ConcurrentDictionary BlockingCollection Wrapper collection that implements the producer/consumer paradigm. Readers can block until items are available to read. Writers can block until space is available to write (if bounded). For more information see C#/.NET Little Wonders: The ConcurrentBag and BlockingCollection Summary The .NET BCL has lots of collections built in to help you store and manipulate collections of data. Understanding how these collections work and knowing in which situations each container is best is one of the key skills necessary to build more performant code. Choosing the wrong collection for the job can make your code much slower or even harder to maintain if you choose one that doesn't perform as well or otherwise doesn't exactly fit the situation. Remember to avoid the original collections and stick with the generic collections.  If you need concurrent access, you can use the generic collections if the data is read-only, or consider the concurrent collections for mixed access if you are running on .NET 4.0 or higher.   Technorati Tags: C#,.NET,Collections,Generic,Concurrent,Dictionary,List,Stack,Queue,SortedList,SortedDictionary,HashSet,SortedSet
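To make the trade-offs above concrete, here is a short illustrative sketch (mine, not from the article) showing typical picks: a Dictionary for keyed lookups, a SortedDictionary when ordered iteration matters, a HashSet for membership tests, and a Queue for first-in-first-out processing.

    using System;
    using System.Collections.Generic;

    class CollectionChoiceDemo
    {
        static void Main()
        {
            // Dictionary<TKey,TValue>: O(1) lookups by key, no ordering - e.g. orders by order id.
            var orders = new Dictionary<int, string> { { 1001, "Widget" }, { 1002, "Gadget" } };
            Console.WriteLine(orders[1002]);                      // Gadget

            // SortedDictionary<TKey,TValue>: O(log n) operations, iteration returns keys in order.
            var byDate = new SortedDictionary<DateTime, string>();
            byDate[new DateTime(2012, 1, 2)] = "second";
            byDate[new DateTime(2012, 1, 1)] = "first";
            foreach (var entry in byDate) Console.WriteLine(entry.Value);   // first, then second

            // HashSet<T>: unique, unordered items for fast membership checks.
            var vendorCodes = new HashSet<string> { "ACME", "GLOBEX" };
            Console.WriteLine(vendorCodes.Contains("ACME"));      // True

            // Queue<T>: first-in-first-out processing, e.g. a print spooler.
            var spooler = new Queue<string>();
            spooler.Enqueue("job1");
            spooler.Enqueue("job2");
            Console.WriteLine(spooler.Dequeue());                 // job1
        }
    }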

    Read the article

  • Virtual host is not working in Ubuntu 14 VPS using XAMPP 1.8.3

    - by viral4ever
    I am using XAMPP as server in ubuntu 14.04 VPS of digitalocean. I tried to setup virtual hosts. But it is not working and I am getting 403 error of access denied. I changed files too. My files with changes are /opt/lampp/etc/httpd.conf # # This is the main Apache HTTP server configuration file. It contains the # configuration directives that give the server its instructions. # See <URL:http://httpd.apache.org/docs/trunk/> for detailed information. # In particular, see # <URL:http://httpd.apache.org/docs/trunk/mod/directives.html> # for a discussion of each configuration directive. # # Do NOT simply read the instructions in here without understanding # what they do. They're here only as hints or reminders. If you are unsure # consult the online docs. You have been warned. # # Configuration and logfile names: If the filenames you specify for many # of the server's control files begin with "/" (or "drive:/" for Win32), the # server will use that explicit path. If the filenames do *not* begin # with "/", the value of ServerRoot is prepended -- so 'log/access_log' # with ServerRoot set to '/www' will be interpreted by the # server as '/www/log/access_log', where as '/log/access_log' will be # interpreted as '/log/access_log'. # # ServerRoot: The top of the directory tree under which the server's # configuration, error, and log files are kept. # # Do not add a slash at the end of the directory path. If you point # ServerRoot at a non-local disk, be sure to specify a local disk on the # Mutex directive, if file-based mutexes are used. If you wish to share the # same ServerRoot for multiple httpd daemons, you will need to change at # least PidFile. # ServerRoot "/opt/lampp" # # Mutex: Allows you to set the mutex mechanism and mutex file directory # for individual mutexes, or change the global defaults # # Uncomment and change the directory if mutexes are file-based and the default # mutex file directory is not on a local disk or is not appropriate for some # other reason. # # Mutex default:logs # # Listen: Allows you to bind Apache to specific IP addresses and/or # ports, instead of the default. See also the <VirtualHost> # directive. # # Change this to Listen on specific IP addresses as shown below to # prevent Apache from glomming onto all bound IP addresses. # #Listen 12.34.56.78:80 Listen 80 # # Dynamic Shared Object (DSO) Support # # To be able to use the functionality of a module which was built as a DSO you # have to place corresponding `LoadModule' lines at this location so the # directives contained in it are actually available _before_ they are used. # Statically compiled modules (those listed by `httpd -l') do not need # to be loaded here. 
# # Example: # LoadModule foo_module modules/mod_foo.so # LoadModule authn_file_module modules/mod_authn_file.so LoadModule authn_dbm_module modules/mod_authn_dbm.so LoadModule authn_anon_module modules/mod_authn_anon.so LoadModule authn_dbd_module modules/mod_authn_dbd.so LoadModule authn_socache_module modules/mod_authn_socache.so LoadModule authn_core_module modules/mod_authn_core.so LoadModule authz_host_module modules/mod_authz_host.so LoadModule authz_groupfile_module modules/mod_authz_groupfile.so LoadModule authz_user_module modules/mod_authz_user.so LoadModule authz_dbm_module modules/mod_authz_dbm.so LoadModule authz_owner_module modules/mod_authz_owner.so LoadModule authz_dbd_module modules/mod_authz_dbd.so LoadModule authz_core_module modules/mod_authz_core.so LoadModule authnz_ldap_module modules/mod_authnz_ldap.so LoadModule access_compat_module modules/mod_access_compat.so LoadModule auth_basic_module modules/mod_auth_basic.so LoadModule auth_form_module modules/mod_auth_form.so LoadModule auth_digest_module modules/mod_auth_digest.so LoadModule allowmethods_module modules/mod_allowmethods.so LoadModule file_cache_module modules/mod_file_cache.so LoadModule cache_module modules/mod_cache.so LoadModule cache_disk_module modules/mod_cache_disk.so LoadModule socache_shmcb_module modules/mod_socache_shmcb.so LoadModule socache_dbm_module modules/mod_socache_dbm.so LoadModule socache_memcache_module modules/mod_socache_memcache.so LoadModule dbd_module modules/mod_dbd.so LoadModule bucketeer_module modules/mod_bucketeer.so LoadModule dumpio_module modules/mod_dumpio.so LoadModule echo_module modules/mod_echo.so LoadModule case_filter_module modules/mod_case_filter.so LoadModule case_filter_in_module modules/mod_case_filter_in.so LoadModule buffer_module modules/mod_buffer.so LoadModule ratelimit_module modules/mod_ratelimit.so LoadModule reqtimeout_module modules/mod_reqtimeout.so LoadModule ext_filter_module modules/mod_ext_filter.so LoadModule request_module modules/mod_request.so LoadModule include_module modules/mod_include.so LoadModule filter_module modules/mod_filter.so LoadModule substitute_module modules/mod_substitute.so LoadModule sed_module modules/mod_sed.so LoadModule charset_lite_module modules/mod_charset_lite.so LoadModule deflate_module modules/mod_deflate.so LoadModule mime_module modules/mod_mime.so LoadModule ldap_module modules/mod_ldap.so LoadModule log_config_module modules/mod_log_config.so LoadModule log_debug_module modules/mod_log_debug.so LoadModule logio_module modules/mod_logio.so LoadModule env_module modules/mod_env.so LoadModule mime_magic_module modules/mod_mime_magic.so LoadModule cern_meta_module modules/mod_cern_meta.so LoadModule expires_module modules/mod_expires.so LoadModule headers_module modules/mod_headers.so LoadModule usertrack_module modules/mod_usertrack.so LoadModule unique_id_module modules/mod_unique_id.so LoadModule setenvif_module modules/mod_setenvif.so LoadModule version_module modules/mod_version.so LoadModule remoteip_module modules/mod_remoteip.so LoadModule proxy_module modules/mod_proxy.so LoadModule proxy_connect_module modules/mod_proxy_connect.so LoadModule proxy_ftp_module modules/mod_proxy_ftp.so LoadModule proxy_http_module modules/mod_proxy_http.so LoadModule proxy_fcgi_module modules/mod_proxy_fcgi.so LoadModule proxy_scgi_module modules/mod_proxy_scgi.so LoadModule proxy_ajp_module modules/mod_proxy_ajp.so LoadModule proxy_balancer_module modules/mod_proxy_balancer.so LoadModule proxy_express_module 
modules/mod_proxy_express.so LoadModule session_module modules/mod_session.so LoadModule session_cookie_module modules/mod_session_cookie.so LoadModule session_dbd_module modules/mod_session_dbd.so LoadModule slotmem_shm_module modules/mod_slotmem_shm.so LoadModule ssl_module modules/mod_ssl.so LoadModule lbmethod_byrequests_module modules/mod_lbmethod_byrequests.so LoadModule lbmethod_bytraffic_module modules/mod_lbmethod_bytraffic.so LoadModule lbmethod_bybusyness_module modules/mod_lbmethod_bybusyness.so LoadModule lbmethod_heartbeat_module modules/mod_lbmethod_heartbeat.so LoadModule unixd_module modules/mod_unixd.so LoadModule dav_module modules/mod_dav.so LoadModule status_module modules/mod_status.so LoadModule autoindex_module modules/mod_autoindex.so LoadModule info_module modules/mod_info.so LoadModule suexec_module modules/mod_suexec.so LoadModule cgi_module modules/mod_cgi.so LoadModule cgid_module modules/mod_cgid.so LoadModule dav_fs_module modules/mod_dav_fs.so LoadModule vhost_alias_module modules/mod_vhost_alias.so LoadModule negotiation_module modules/mod_negotiation.so LoadModule dir_module modules/mod_dir.so LoadModule actions_module modules/mod_actions.so LoadModule speling_module modules/mod_speling.so LoadModule userdir_module modules/mod_userdir.so LoadModule alias_module modules/mod_alias.so LoadModule rewrite_module modules/mod_rewrite.so <IfDefine JUSTTOMAKEAPXSHAPPY> LoadModule php4_module modules/libphp4.so LoadModule php5_module modules/libphp5.so </IfDefine> <IfModule unixd_module> # # If you wish httpd to run as a different user or group, you must run # httpd as root initially and it will switch. # # User/Group: The name (or #number) of the user/group to run httpd as. # It is usually good practice to create a dedicated user and group for # running httpd, as with most system services. # User root Group www </IfModule> # 'Main' server configuration # # The directives in this section set up the values used by the 'main' # server, which responds to any requests that aren't handled by a # <VirtualHost> definition. These values also provide defaults for # any <VirtualHost> containers you may define later in the file. # # All of these directives may appear inside <VirtualHost> containers, # in which case these default settings will be overridden for the # virtual host being defined. # # # ServerAdmin: Your address, where problems with the server should be # e-mailed. This address appears on some server-generated pages, such # as error documents. e.g. [email protected] # ServerAdmin [email protected] # # ServerName gives the name and port that the server uses to identify itself. # This can often be determined automatically, but we recommend you specify # it explicitly to prevent problems during startup. # # If your host doesn't have a registered DNS name, enter its IP address here. # #ServerName www.example.com:@@Port@@ # XAMPP ServerName localhost # # Deny access to the entirety of your server's filesystem. You must # explicitly permit access to web content directories in other # <Directory> blocks below. # <Directory /> AllowOverride none Require all denied </Directory> # # Note that from this point forward you must specifically allow # particular features to be enabled - so if something's not working as # you might expect, make sure that you have specifically enabled it # below. # # # DocumentRoot: The directory out of which you will serve your # documents. 
By default, all requests are taken from this directory, but # symbolic links and aliases may be used to point to other locations. # DocumentRoot "/opt/lampp/htdocs" <Directory "/opt/lampp/htdocs"> # # Possible values for the Options directive are "None", "All", # or any combination of: # Indexes Includes FollowSymLinks SymLinksifOwnerMatch ExecCGI MultiViews # # Note that "MultiViews" must be named *explicitly* --- "Options All" # doesn't give it to you. # # The Options directive is both complicated and important. Please see # http://httpd.apache.org/docs/trunk/mod/core.html#options # for more information. # #Options Indexes FollowSymLinks # XAMPP Options Indexes FollowSymLinks ExecCGI Includes # # AllowOverride controls what directives may be placed in .htaccess files. # It can be "All", "None", or any combination of the keywords: # Options FileInfo AuthConfig Limit # #AllowOverride None # since XAMPP 1.4: AllowOverride All # # Controls who can get stuff from this server. # Require all granted </Directory> # # DirectoryIndex: sets the file that Apache will serve if a directory # is requested. # <IfModule dir_module> #DirectoryIndex index.html # XAMPP DirectoryIndex index.html index.html.var index.php index.php3 index.php4 </IfModule> # # The following lines prevent .htaccess and .htpasswd files from being # viewed by Web clients. # <Files ".ht*"> Require all denied </Files> # # ErrorLog: The location of the error log file. # If you do not specify an ErrorLog directive within a <VirtualHost> # container, error messages relating to that virtual host will be # logged here. If you *do* define an error logfile for a <VirtualHost> # container, that host's errors will be logged there and not here. # ErrorLog "logs/error_log" # # LogLevel: Control the number of messages logged to the error_log. # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. # LogLevel warn <IfModule log_config_module> # # The following directives define some format nicknames for use with # a CustomLog directive (see below). # LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %b" common <IfModule logio_module> # You need to enable mod_logio.c to use %I and %O LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio </IfModule> # # The location and format of the access logfile (Common Logfile Format). # If you do not define any access logfiles within a <VirtualHost> # container, they will be logged here. Contrariwise, if you *do* # define per-<VirtualHost> access logfiles, transactions will be # logged therein and *not* in this file. # CustomLog "logs/access_log" common # # If you prefer a logfile with access, agent, and referer information # (Combined Logfile Format) you can use the following directive. # #CustomLog "logs/access_log" combined </IfModule> <IfModule alias_module> # # Redirect: Allows you to tell clients about documents that used to # exist in your server's namespace, but do not anymore. The client # will make a new request for the document at its new location. # Example: # Redirect permanent /foo http://www.example.com/bar # # Alias: Maps web paths into filesystem paths and is used to # access content that does not live under the DocumentRoot. # Example: # Alias /webpath /full/filesystem/path # # If you include a trailing / on /webpath then the server will # require it to be present in the URL. 
You will also likely # need to provide a <Directory> section to allow access to # the filesystem path. # # ScriptAlias: This controls which directories contain server scripts. # ScriptAliases are essentially the same as Aliases, except that # documents in the target directory are treated as applications and # run by the server when requested rather than as documents sent to the # client. The same rules about trailing "/" apply to ScriptAlias # directives as to Alias. # ScriptAlias /cgi-bin/ "/opt/lampp/cgi-bin/" </IfModule> <IfModule cgid_module> # # ScriptSock: On threaded servers, designate the path to the UNIX # socket used to communicate with the CGI daemon of mod_cgid. # #Scriptsock logs/cgisock </IfModule> # # "/opt/lampp/cgi-bin" should be changed to whatever your ScriptAliased # CGI directory exists, if you have that configured. # <Directory "/opt/lampp/cgi-bin"> AllowOverride None Options None Require all granted </Directory> <IfModule mime_module> # # TypesConfig points to the file containing the list of mappings from # filename extension to MIME-type. # TypesConfig etc/mime.types # # AddType allows you to add to or override the MIME configuration # file specified in TypesConfig for specific file types. # #AddType application/x-gzip .tgz # # AddEncoding allows you to have certain browsers uncompress # information on the fly. Note: Not all browsers support this. # #AddEncoding x-compress .Z #AddEncoding x-gzip .gz .tgz # # If the AddEncoding directives above are commented-out, then you # probably should define those extensions to indicate media types: # AddType application/x-compress .Z AddType application/x-gzip .gz .tgz # # AddHandler allows you to map certain file extensions to "handlers": # actions unrelated to filetype. These can be either built into the server # or added with the Action directive (see below) # # To use CGI scripts outside of ScriptAliased directories: # (You will also need to add "ExecCGI" to the "Options" directive.) # #AddHandler cgi-script .cgi # XAMPP, since LAMPP 0.9.8: AddHandler cgi-script .cgi .pl # For type maps (negotiated resources): #AddHandler type-map var # # Filters allow you to process content before it is sent to the client. # # To parse .shtml files for server-side includes (SSI): # (You will also need to add "Includes" to the "Options" directive.) # # XAMPP AddType text/html .shtml AddOutputFilter INCLUDES .shtml </IfModule> # # The mod_mime_magic module allows the server to use various hints from the # contents of the file itself to determine its type. The MIMEMagicFile # directive tells the module where the hint definitions are located. # #MIMEMagicFile etc/magic # # Customizable error responses come in three flavors: # 1) plain text 2) local redirects 3) external redirects # # Some examples: #ErrorDocument 500 "The server made a boo boo." #ErrorDocument 404 /missing.html #ErrorDocument 404 "/cgi-bin/missing_handler.pl" #ErrorDocument 402 http://www.example.com/subscription_info.html # # # MaxRanges: Maximum number of Ranges in a request before # returning the entire resource, or one of the special # values 'default', 'none' or 'unlimited'. # Default setting is to accept 200 Ranges. #MaxRanges unlimited # # EnableMMAP and EnableSendfile: On systems that support it, # memory-mapping or the sendfile syscall may be used to deliver # files. This usually improves server performance, but must # be turned off when serving from networked-mounted # filesystems or if support for these functions is otherwise # broken on your system. 
# Defaults: EnableMMAP On, EnableSendfile Off # EnableMMAP off EnableSendfile off # Supplemental configuration # # The configuration files in the etc/extra/ directory can be # included to add extra features or to modify the default configuration of # the server, or you may simply copy their contents here and change as # necessary. # Server-pool management (MPM specific) #Include etc/extra/httpd-mpm.conf # Multi-language error messages Include etc/extra/httpd-multilang-errordoc.conf # Fancy directory listings Include etc/extra/httpd-autoindex.conf # Language settings #Include etc/extra/httpd-languages.conf # User home directories #Include etc/extra/httpd-userdir.conf # Real-time info on requests and configuration #Include etc/extra/httpd-info.conf # Virtual hosts Include etc/extra/httpd-vhosts.conf # Local access to the Apache HTTP Server Manual #Include etc/extra/httpd-manual.conf # Distributed authoring and versioning (WebDAV) #Include etc/extra/httpd-dav.conf # Various default settings Include etc/extra/httpd-default.conf # Configure mod_proxy_html to understand HTML4/XHTML1 <IfModule proxy_html_module> Include etc/extra/proxy-html.conf </IfModule> # Secure (SSL/TLS) connections <IfModule ssl_module> # XAMPP <IfDefine SSL> Include etc/extra/httpd-ssl.conf </IfDefine> </IfModule> # # Note: The following must must be present to support # starting without SSL on platforms with no /dev/random equivalent # but a statically compiled-in mod_ssl. # <IfModule ssl_module> SSLRandomSeed startup builtin SSLRandomSeed connect builtin </IfModule> # XAMPP Include etc/extra/httpd-xampp.conf Include "/opt/lampp/apache2/conf/httpd.conf" I used command shown in this example. I used below lines to change and add group Add group "groupadd www" Add user to group "usermod -aG www root" Change htdocs group "chgrp -R www /opt/lampp/htdocs" Change sitedir group "chgrp -R www /opt/lampp/htdocs/mysite" Change htdocs chmod "chmod 2775 /opt/lampp/htdocs" Change sitedir chmod "chmod 2775 /opt/lampp/htdocs/mysite" And then I changed my vhosts.conf file # Virtual Hosts # # Required modules: mod_log_config # If you want to maintain multiple domains/hostnames on your # machine you can setup VirtualHost containers for them. Most configurations # use only name-based virtual hosts so the server doesn't need to worry about # IP addresses. This is indicated by the asterisks in the directives below. # # Please see the documentation at # <URL:http://httpd.apache.org/docs/2.4/vhosts/> # for further details before you try to setup virtual hosts. # # You may use the command line option '-S' to verify your virtual host # configuration. # # VirtualHost example: # Almost any Apache directive may go into a VirtualHost container. # The first VirtualHost section is used for all requests that do not # match a ServerName or ServerAlias in any <VirtualHost> block. 
# <VirtualHost *:80> ServerAdmin [email protected] DocumentRoot "/opt/lampp/docs/dummy-host.example.com" ServerName dummy-host.example.com ServerAlias www.dummy-host.example.com ErrorLog "logs/dummy-host.example.com-error_log" CustomLog "logs/dummy-host.example.com-access_log" common </VirtualHost> <VirtualHost *:80> ServerAdmin [email protected] DocumentRoot "/opt/lampp/docs/dummy-host2.example.com" ServerName dummy-host2.example.com ErrorLog "logs/dummy-host2.example.com-error_log" CustomLog "logs/dummy-host2.example.com-access_log" common </VirtualHost> NameVirtualHost * <VirtualHost *> ServerAdmin [email protected] DocumentRoot "/opt/lampp/htdocs/mysite" ServerName mysite.com ServerAlias mysite.com ErrorLog "/opt/lampp/htdocs/mysite/errorlogs" CustomLog "/opt/lampp/htdocs/mysite/customlog" common <Directory "/opt/lampp/htdocs/mysite"> Options Indexes FollowSymLinks Includes ExecCGI AllowOverride All Order Allow,Deny Allow from all Require all granted </Directory> </VirtualHost> but it is still not working and I am getting a 403 error on my IP and domain; however, I can access phpMyAdmin. If anyone can help me, please help me.
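Not a confirmed fix, but for comparison here is a minimal Apache 2.4-style name-based vhost sketch for this setup (my illustration, not from the question): XAMPP 1.8.3 ships Apache 2.4, where NameVirtualHost and the old Order/Allow directives are no longer needed, and a 403 usually points at a DocumentRoot that is missing or not readable by the server user, or at access not being granted with Require. The www.mysite.com alias below is a placeholder.

    <VirtualHost *:80>
        ServerName mysite.com
        ServerAlias www.mysite.com
        DocumentRoot "/opt/lampp/htdocs/mysite"
        ErrorLog "logs/mysite-error_log"
        CustomLog "logs/mysite-access_log" common
        <Directory "/opt/lampp/htdocs/mysite">
            Options Indexes FollowSymLinks Includes ExecCGI
            AllowOverride All
            # Apache 2.4 access control; do not mix with Order/Allow from 2.2
            Require all granted
        </Directory>
    </VirtualHost>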

    Read the article

  • CBO????????

    - by Liu Maclean(???)
    ???Itpub????????CBO??????????, ????????: SQL> create table maclean1 as select * from dba_objects; Table created. SQL> update maclean1 set status='INVALID' where owner='MACLEAN'; 2 rows updated. SQL> commit; Commit complete. SQL> create index ind_maclean1 on maclean1(status); Index created. SQL> exec dbms_stats.gather_table_stats('SYS','MACLEAN1',cascade=>true); PL/SQL procedure successfully completed. SQL> explain plan for select * from maclean1 where status='INVALID'; Explained. SQL> set linesize 140 pagesize 1400 SQL> select * from table(dbms_xplan.display()); PLAN_TABLE_OUTPUT --------------------------------------------------------------------------- Plan hash value: 987568083 ------------------------------------------------------------------------------ | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ------------------------------------------------------------------------------ | 0 | SELECT STATEMENT | | 11320 | 1028K| 85 (0)| 00:00:02 | |* 1 | TABLE ACCESS FULL| MACLEAN1 | 11320 | 1028K| 85 (0)| 00:00:02 | ------------------------------------------------------------------------------ Predicate Information (identified by operation id): --------------------------------------------------- 1 - filter("STATUS"='INVALID') 13 rows selected. 10053 trace Access path analysis for MACLEAN1 *************************************** SINGLE TABLE ACCESS PATH   Single Table Cardinality Estimation for MACLEAN1[MACLEAN1]   Column (#10): STATUS(     AvgLen: 7 NDV: 2 Nulls: 0 Density: 0.500000   Table: MACLEAN1  Alias: MACLEAN1     Card: Original: 22639.000000  Rounded: 11320  Computed: 11319.50  Non Adjusted: 11319.50   Access Path: TableScan     Cost:  85.33  Resp: 85.33  Degree: 0       Cost_io: 85.00  Cost_cpu: 11935345       Resp_io: 85.00  Resp_cpu: 11935345   Access Path: index (AllEqRange)     Index: IND_MACLEAN1     resc_io: 185.00  resc_cpu: 8449916     ix_sel: 0.500000  ix_sel_with_filters: 0.500000     Cost: 185.24  Resp: 185.24  Degree: 1   Best:: AccessPath: TableScan          Cost: 85.33  Degree: 1  Resp: 85.33  Card: 11319.50  Bytes: 0 ?????10053????????????,?????Density = 0.5 ?? 1/ NDV ??? ??????????????STATUS='INVALID"???????????, ????????????????? ????”STATUS”=’INVALID’ condition???2?,?status??????,??????dbms_stats?????????????,???CBO????INDEX Range ind_maclean1,???????,??????opitimizer?????? ?????????????????????????,????????,??????????status=’INVALID’???????card??,????????: [oracle@vrh4 ~]$ sqlplus / as sysdba SQL*Plus: Release 11.2.0.2.0 Production on Mon Oct 17 19:15:45 2011 Copyright (c) 1982, 2010, Oracle. All rights reserved. Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production With the Partitioning, OLAP, Data Mining and Real Application Testing options SQL> select * from v$version; BANNER -------------------------------------------------------------------------------- Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production PL/SQL Release 11.2.0.2.0 - Production CORE 11.2.0.2.0 Production TNS for Linux: Version 11.2.0.2.0 - Production NLSRTL Version 11.2.0.2.0 - Production SQL> show parameter optimizer_fea NAME TYPE VALUE ------------------------------------ ----------- ------------------------------ optimizer_features_enable string 11.2.0.2 SQL> select * from global_name; GLOBAL_NAME -------------------------------------------------------------------------------- www.oracledatabase12g.com & www.askmaclean.com SQL> drop table maclean; Table dropped. 
SQL> create table maclean as select * from dba_objects; Table created. SQL> update maclean set status='INVALID' where owner='MACLEAN'; 2 rows updated. SQL> commit; Commit complete. SQL> create index ind_maclean on maclean(status); Index created. SQL> exec dbms_stats.gather_table_stats('SYS','MACLEAN',cascade=>true, method_opt=>'FOR ALL COLUMNS SIZE 2'); PL/SQL procedure successfully completed. ???????2?bucket????, ??????????????? ???Quest???Guy Harrison???????FREQUENCY????????,??????: rem rem Generate a histogram of data distribution in a column as recorded rem in dba_tab_histograms rem rem Guy Harrison Jan 2010 : www.guyharrison.net rem rem hexstr function is from From http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:707586567563 set pagesize 10000 set lines 120 set verify off col char_value format a10 heading "Endpoint|value" col bucket_count format 99,999,999 heading "bucket|count" col pct format 999.99 heading "Pct" col pct_of_max format a62 heading "Pct of|Max value" rem col endpoint_value format 9999999999999 heading "endpoint|value" CREATE OR REPLACE FUNCTION hexstr (p_number IN NUMBER) RETURN VARCHAR2 AS l_str LONG := TO_CHAR (p_number, 'fm' || RPAD ('x', 50, 'x')); l_return VARCHAR2 (4000); BEGIN WHILE (l_str IS NOT NULL) LOOP l_return := l_return || CHR (TO_NUMBER (SUBSTR (l_str, 1, 2), 'xx')); l_str := SUBSTR (l_str, 3); END LOOP; RETURN (SUBSTR (l_return, 1, 6)); END; / WITH hist_data AS ( SELECT endpoint_value,endpoint_actual_value, NVL(LAG (endpoint_value) OVER (ORDER BY endpoint_value),' ') prev_value, endpoint_number, endpoint_number, endpoint_number - NVL (LAG (endpoint_number) OVER (ORDER BY endpoint_value), 0) bucket_count FROM dba_tab_histograms JOIN dba_tab_col_statistics USING (owner, table_name,column_name) WHERE owner = '&owner' AND table_name = '&table' AND column_name = '&column' AND histogram='FREQUENCY') SELECT nvl(endpoint_actual_value,endpoint_value) endpoint_value , bucket_count, ROUND(bucket_count*100/SUM(bucket_count) OVER(),2) PCT, RPAD(' ',ROUND(bucket_count*50/MAX(bucket_count) OVER()),'*') pct_of_max FROM hist_data; WITH hist_data AS ( SELECT endpoint_value,endpoint_actual_value, NVL(LAG (endpoint_value) OVER (ORDER BY endpoint_value),' ') prev_value, endpoint_number, endpoint_number, endpoint_number - NVL (LAG (endpoint_number) OVER (ORDER BY endpoint_value), 0) bucket_count FROM dba_tab_histograms JOIN dba_tab_col_statistics USING (owner, table_name,column_name) WHERE owner = '&owner' AND table_name = '&table' AND column_name = '&column' AND histogram='FREQUENCY') SELECT hexstr(endpoint_value) char_value, bucket_count, ROUND(bucket_count*100/SUM(bucket_count) OVER(),2) PCT, RPAD(' ',ROUND(bucket_count*50/MAX(bucket_count) OVER()),'*') pct_of_max FROM hist_data ORDER BY endpoint_value; ?????,??????????FREQUENCY?????: ??dbms_stats ?????STATUS=’INVALID’ bucket count=9 percent = 0.04 ,??????10053 trace????????: SQL> explain plan for select * from maclean where status='INVALID'; Explained. 
SQL>  select * from table(dbms_xplan.display()); PLAN_TABLE_OUTPUT ------------------------------------- Plan hash value: 3087014066 ------------------------------------------------------------------------------------------- | Id  | Operation                   | Name        | Rows  | Bytes | Cost (%CPU)| Time     | ------------------------------------------------------------------------------------------- |   0 | SELECT STATEMENT            |             |     9 |   837 |     2   (0)| 00:00:01 | |   1 |  TABLE ACCESS BY INDEX ROWID| MACLEAN     |     9 |   837 |     2   (0)| 00:00:01 | |*  2 |   INDEX RANGE SCAN          | IND_MACLEAN |     9 |       |     1   (0)| 00:00:01 | ------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): ---------------------------------------------------    2 - access("STATUS"='INVALID') ??????????????CBO???????STATUS=’INVALID’?cardnality?? , ??????????? ,??index range scan??Full table scan? ????????????????10053 trace: SQL> alter system flush shared_pool; System altered. SQL> oradebug setmypid; Statement processed. SQL> oradebug event 10053 trace name context forever ,level 1; Statement processed. SQL> explain plan for select * from maclean where status='INVALID'; Explained. SINGLE TABLE ACCESS PATH Single Table Cardinality Estimation for MACLEAN[MACLEAN] Column (#10): NewDensity:0.000199, OldDensity:0.000022 BktCnt:22640, PopBktCnt:22640, PopValCnt:2, NDV:2 ???NewDensity= bucket_count / SUM(bucket_count) /2 Column (#10): STATUS( AvgLen: 7 NDV: 2 Nulls: 0 Density: 0.000199 Histogram: Freq #Bkts: 2 UncompBkts: 22640 EndPtVals: 2 Table: MACLEAN Alias: MACLEAN Card: Original: 22640.000000 Rounded: 9 Computed: 9.00 Non Adjusted: 9.00 Access Path: TableScan Cost: 85.30 Resp: 85.30 Degree: 0 Cost_io: 85.00 Cost_cpu: 10804625 Resp_io: 85.00 Resp_cpu: 10804625 Access Path: index (AllEqRange) Index: IND_MACLEAN resc_io: 2.00 resc_cpu: 20763 ix_sel: 0.000398 ix_sel_with_filters: 0.000398 Cost: 2.00 Resp: 2.00 Degree: 1 Best:: AccessPath: IndexRange Index: IND_MACLEAN Cost: 2.00 Degree: 1 Resp: 2.00 Card: 9.00 Bytes: 0 ???????????2 bucket?????CBO????????????,???????????????????,???dbms_stats.DEFAULT_METHOD_OPT????????????????????? ???dbms_stats?????????????????????col_usage$??????predicate???????,??col_usage$??<????????SMON??(?):??col_usage$????>? ??????????dbms_stats????????,col_usage$????????????predicate???,??dbms_stats??????????????????, ?: SQL> drop table maclean; Table dropped. SQL> create table maclean as select * from dba_objects; Table created. SQL> update maclean set status='INVALID' where owner='MACLEAN'; 2 rows updated. SQL> commit; Commit complete. SQL> create index ind_maclean on maclean(status); Index created. ??dbms_stats??method_opt??maclean? SQL> exec dbms_stats.gather_table_stats('SYS','MACLEAN'); PL/SQL procedure successfully completed. @histogram.sql Enter value for owner: SYS old  12:    WHERE owner = '&owner' new  12:    WHERE owner = 'SYS' Enter value for table: MACLEAN old  13:      AND table_name = '&table' new  13:      AND table_name = 'MACLEAN' Enter value for column: STATUS old  14:      AND column_name = '&column' new  14:      AND column_name = 'STATUS' no rows selected ????col_usage$?????,????????status????? 
declare begin for i in 1..500 loop execute immediate ' alter system flush shared_pool'; DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO; execute immediate 'select count(*) from maclean where status=''INVALID'' ' ; end loop; end; / PL/SQL procedure successfully completed. SQL> select obj# from obj$ where name='MACLEAN';       OBJ# ----------      97215 SQL> select * from  col_usage$ where  OBJ#=97215;       OBJ#    INTCOL# EQUALITY_PREDS EQUIJOIN_PREDS NONEQUIJOIN_PREDS RANGE_PREDS LIKE_PREDS NULL_PREDS TIMESTAMP ---------- ---------- -------------- -------------- ----------------- ----------- ---------- ---------- ---------      97215          1              1              0                 0           0          0          0 17-OCT-11      97215         10            499              0                 0           0          0          0 17-OCT-11 SQL> exec dbms_stats.gather_table_stats('SYS','MACLEAN'); PL/SQL procedure successfully completed. @histogram.sql Enter value for owner: SYS Enter value for table: MACLEAN Enter value for column: STATUS Endpoint        bucket         Pct of value            count     Pct Max value ---------- ----------- ------- -------------------------------------------------------------- INVALI               2     .04 VALIC3           5,453   99.96  *************************************************
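    To double-check what the various gather_table_stats calls actually produced, a quick look at the optimizer statistics dictionary view is enough. This is a minimal sketch using the owner, table and column from the test case above:
      SQL> select column_name, num_distinct, density, num_buckets, histogram
             from dba_tab_col_statistics
            where owner = 'SYS' and table_name = 'MACLEAN' and column_name = 'STATUS';
    A FREQUENCY value in the HISTOGRAM column (with NUM_BUCKETS = 2 here) confirms the histogram exists; NONE means the CBO is back to the plain 1/NDV density of 0.5 shown in the first 10053 trace.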

    Read the article

  • Juniper SSG-5 subinterface vlan routing to the internet

    - by catfish
    I'm unable to get a brand new Juniper SSG-5 with latest 6.3.0r05 firmware routing to the internet from a subinterface I created on bgroup0 setup as vlan2 (bgroup0.1 on "wifi" zone). When connected on the default vlan it gets on the internet just fine. When I switch to vlan2 I'm unable to get to the internet. I am able to get the correct ip address (10.150.0.0/24) from dhcp, able to get to the juniper management page, etc but nothing past the firewall, can't ping 4.2.2.2 or the internet gateway. Even setting up logging on the wifi-to-untrust policy and it does shows the attempts (it's it's timeouts). 172.31.16.0/24 is the untrusted lan, it's already nat'ed but works fine for testing. Can ping this ip from the default vlan but not from vlan2 192.168.1.0/24 is the trusted main lan 10.150.0.0/24 is the wifi isolated lan on vlan2 The idea is to setup an AP with lan and guest access (AP supports multiple ssid's on different vlans). I know I can setup the juniper to use different ports for the wifi lan and use their procurve switch to do the vlan separation, but I never used vlan'ing on a Juniper firewall and I would like to try it out this way. Here is the complete config file: unset key protection enable set clock timezone -5 set vrouter trust-vr sharable set vrouter "untrust-vr" exit set vrouter "trust-vr" unset auto-route-export exit set alg appleichat enable unset alg appleichat re-assembly enable set alg sctp enable set auth-server "Local" id 0 set auth-server "Local" server-name "Local" set auth default auth server "Local" set auth radius accounting port 1646 set admin name "netscreen" set admin password "xxxxxxxxxxxxxxxx" set admin auth web timeout 10 set admin auth dial-in timeout 3 set admin auth server "Local" set admin format dos set zone "Trust" vrouter "trust-vr" set zone "Untrust" vrouter "trust-vr" set zone "DMZ" vrouter "trust-vr" set zone "VLAN" vrouter "trust-vr" set zone id 100 "Wifi" set zone "Untrust-Tun" vrouter "trust-vr" set zone "Trust" tcp-rst set zone "Untrust" block unset zone "Untrust" tcp-rst set zone "MGT" block unset zone "V1-Trust" tcp-rst unset zone "V1-Untrust" tcp-rst set zone "DMZ" tcp-rst unset zone "V1-DMZ" tcp-rst unset zone "VLAN" tcp-rst unset zone "Wifi" tcp-rst set zone "Untrust" screen tear-drop set zone "Untrust" screen syn-flood set zone "Untrust" screen ping-death set zone "Untrust" screen ip-filter-src set zone "Untrust" screen land set zone "V1-Untrust" screen tear-drop set zone "V1-Untrust" screen syn-flood set zone "V1-Untrust" screen ping-death set zone "V1-Untrust" screen ip-filter-src set zone "V1-Untrust" screen land set interface "ethernet0/0" zone "Untrust" set interface "ethernet0/1" zone "Untrust" set interface "bgroup0" zone "Trust" set interface "bgroup0.1" tag 2 zone "Wifi" set interface "bgroup1" zone "DMZ" set interface bgroup0 port ethernet0/2 set interface bgroup0 port ethernet0/3 set interface bgroup0 port ethernet0/4 set interface bgroup0 port ethernet0/5 set interface bgroup0 port ethernet0/6 unset interface vlan1 ip set interface ethernet0/0 ip 172.31.16.243/24 set interface ethernet0/0 route set interface bgroup0 ip 192.168.1.1/24 set interface bgroup0 nat set interface bgroup0.1 ip 10.150.0.1/24 set interface bgroup0.1 nat set interface bgroup0.1 mtu 1500 unset interface vlan1 bypass-others-ipsec unset interface vlan1 bypass-non-ip set interface ethernet0/0 ip manageable set interface bgroup0 ip manageable set interface bgroup0.1 ip manageable set interface ethernet0/0 manage ping set interface ethernet0/1 manage ping 
set interface bgroup0.1 manage ping set interface bgroup0.1 manage telnet set interface bgroup0.1 manage web unset interface bgroup1 manage ping set interface bgroup0 dhcp server service set interface bgroup0.1 dhcp server service set interface bgroup0 dhcp server auto set interface bgroup0.1 dhcp server enable set interface bgroup0 dhcp server option gateway 192.168.1.1 set interface bgroup0 dhcp server option netmask 255.255.255.0 set interface bgroup0 dhcp server option dns1 8.8.8.8 set interface bgroup0.1 dhcp server option lease 1440 set interface bgroup0.1 dhcp server option gateway 10.150.0.1 set interface bgroup0.1 dhcp server option netmask 255.255.255.0 set interface bgroup0.1 dhcp server option dns1 8.8.8.8 set interface bgroup0 dhcp server ip 192.168.1.33 to 192.168.1.126 set interface bgroup0.1 dhcp server ip 10.150.0.50 to 10.150.0.100 unset interface bgroup0 dhcp server config next-server-ip unset interface bgroup0.1 dhcp server config next-server-ip set interface "serial0/0" modem settings "USR" init "AT&F" set interface "serial0/0" modem settings "USR" active set interface "serial0/0" modem speed 115200 set interface "serial0/0" modem retry 3 set interface "serial0/0" modem interval 10 set interface "serial0/0" modem idle-time 10 set flow tcp-mss unset flow no-tcp-seq-check set flow tcp-syn-check unset flow tcp-syn-bit-check set flow reverse-route clear-text prefer set flow reverse-route tunnel always set pki authority default scep mode "auto" set pki x509 default cert-path partial set crypto-policy exit set ike respond-bad-spi 1 set ike ikev2 ike-sa-soft-lifetime 60 unset ike ikeid-enumeration unset ike dos-protection unset ipsec access-session enable set ipsec access-session maximum 5000 set ipsec access-session upper-threshold 0 set ipsec access-session lower-threshold 0 set ipsec access-session dead-p2-sa-timeout 0 unset ipsec access-session log-error unset ipsec access-session info-exch-connected unset ipsec access-session use-error-log set url protocol websense exit set policy id 1 from "Trust" to "Untrust" "Any" "Any" "ANY" permit set policy id 1 exit set policy id 2 from "Wifi" to "Untrust" "Any" "Any" "ANY" permit log set policy id 2 exit set nsmgmt bulkcli reboot-timeout 60 set ssh version v2 set config lock timeout 5 unset license-key auto-update set telnet client enable set snmp port listen 161 set snmp port trap 162 set snmpv3 local-engine id "0162122009006149" set vrouter "untrust-vr" exit set vrouter "trust-vr" unset add-default-route set route 0.0.0.0/0 interface ethernet0/0 gateway 172.31.16.1 exit set vrouter "untrust-vr" exit set vrouter "trust-vr" exit
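    With the config above in place, the quickest way to see why vlan2 traffic dies at the firewall is a flow trace on the SSG itself. This is only a debugging sketch, assuming the standard ScreenOS flow-filter commands and a wifi client that picked up 10.150.0.50 from the DHCP scope:
      get interface bgroup0.1            (confirm zone Wifi, tag 2 and NAT mode)
      get policy from Wifi to Untrust    (confirm policy id 2 is present and enabled)
      set ffilter src-ip 10.150.0.50
      debug flow basic
      clear db
      (generate traffic from the wifi client, e.g. ping 4.2.2.2)
      undebug all
      get db stream                      (shows the route lookup, policy match and NAT decision for each packet)
    The dbuf output should show whether the packets from 10.150.0.0/24 are being dropped by policy, failing the route lookup, or leaving untranslated.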

    Read the article

  • Error when reloading supervisord: unix:///tmp/supervisor.sock no such file

    - by Yarin
    I'm running supervisord on my CentOS 6 box like so, /usr/bin/supervisord -c /etc/supervisord.conf and when I launch supervisorctl all process status are fine, but if I try to reload using supervisorctl I get unix:///tmp/supervisor.sock no such file I'm using the same config file I've used successfully on other boxes, and im running everything as root. I can't undesrtand what the problem is... Config file: ; Sample supervisor config file. [unix_http_server] file=/tmp/supervisor.sock ; (the path to the socket file) ;chmod=0700 ; socket file mode (default 0700) ;chown=nobody:nogroup ; socket file uid:gid owner ;username=user ; (default is no username (open server)) ;password=123 ; (default is no password (open server)) ;[inet_http_server] ; inet (TCP) server disabled by default ;port=127.0.0.1:9001 ; (ip_address:port specifier, *:port for all iface) ;username=user ; (default is no username (open server)) ;password=123 ; (default is no password (open server)) [supervisord] logfile=/tmp/supervisord.log ; (main log file;default $CWD/supervisord.log) logfile_maxbytes=50MB ; (max main logfile bytes b4 rotation;default 50MB) logfile_backups=10 ; (num of main logfile rotation backups;default 10) loglevel=info ; (log level;default info; others: debug,warn,trace) pidfile=/tmp/supervisord.pid ; (supervisord pidfile;default supervisord.pid) nodaemon=false ; (start in foreground if true;default false) minfds=1024 ; (min. avail startup file descriptors;default 1024) minprocs=200 ; (min. avail process descriptors;default 200) ;umask=022 ; (process file creation umask;default 022) ;user=chrism ; (default is current user, required if root) ;identifier=supervisor ; (supervisord identifier, default is 'supervisor') ;directory=/tmp ; (default is not to cd during start) ;nocleanup=true ; (don't clean up tempfiles at start;default false) ;childlogdir=/tmp ; ('AUTO' child log dir, default $TEMP) ;environment=KEY=value ; (key value pairs to add to environment) ;strip_ansi=false ; (strip ansi escape codes in logs; def. false) ; the below section must remain in the config file for RPC ; (supervisorctl/web interface) to work, additional interfaces may be ; added by defining them in separate rpcinterface: sections [rpcinterface:supervisor] supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface [supervisorctl] serverurl=unix:///tmp/supervisor.sock ; use a unix:// URL for a unix socket ;serverurl=http://127.0.0.1:9001 ; use an http:// url to specify an inet socket ;username=chris ; should be same as http_username if set ;password=123 ; should be same as http_password if set ;prompt=mysupervisor ; cmd line prompt (default "supervisor") ;history_file=~/.sc_history ; use readline history if available ; The below sample program section shows all possible program subsection values, ; create one or more 'real' program: sections to be able to control them under ; supervisor. 
;[program:foo] ;command=/bin/cat [program:embed_scheduler] command=/opt/web-apps/mywebsite/custom_process.py process_name=%(program_name)s_%(process_num)d numprocs=3 ;[program:theprogramname] ;command=/bin/cat ; the program (relative uses PATH, can take args) ;process_name=%(program_name)s ; process_name expr (default %(program_name)s) ;numprocs=1 ; number of processes copies to start (def 1) ;directory=/tmp ; directory to cwd to before exec (def no cwd) ;umask=022 ; umask for process (default None) ;priority=999 ; the relative start priority (default 999) ;autostart=true ; start at supervisord start (default: true) ;autorestart=unexpected ; whether/when to restart (default: unexpected) ;startsecs=1 ; number of secs prog must stay running (def. 1) ;startretries=3 ; max # of serial start failures (default 3) ;exitcodes=0,2 ; 'expected' exit codes for process (default 0,2) ;stopsignal=QUIT ; signal used to kill process (default TERM) ;stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10) ;killasgroup=false ; SIGKILL the UNIX process group (def false) ;user=chrism ; setuid to this UNIX account to run the program ;redirect_stderr=true ; redirect proc stderr to stdout (default false) ;stdout_logfile=/a/path ; stdout log path, NONE for none; default AUTO ;stdout_logfile_maxbytes=1MB ; max # logfile bytes b4 rotation (default 50MB) ;stdout_logfile_backups=10 ; # of stdout logfile backups (default 10) ;stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0) ;stdout_events_enabled=false ; emit events on stdout writes (default false) ;stderr_logfile=/a/path ; stderr log path, NONE for none; default AUTO ;stderr_logfile_maxbytes=1MB ; max # logfile bytes b4 rotation (default 50MB) ;stderr_logfile_backups=10 ; # of stderr logfile backups (default 10) ;stderr_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0) ;stderr_events_enabled=false ; emit events on stderr writes (default false) ;environment=A=1,B=2 ; process environment additions (def no adds) ;serverurl=AUTO ; override serverurl computation (childutils) ; The below sample eventlistener section shows all possible ; eventlistener subsection values, create one or more 'real' ; eventlistener: sections to be able to handle event notifications ; sent by supervisor. ;[eventlistener:theeventlistenername] ;command=/bin/eventlistener ; the program (relative uses PATH, can take args) ;process_name=%(program_name)s ; process_name expr (default %(program_name)s) ;numprocs=1 ; number of processes copies to start (def 1) ;events=EVENT ; event notif. types to subscribe to (req'd) ;buffer_size=10 ; event buffer queue size (default 10) ;directory=/tmp ; directory to cwd to before exec (def no cwd) ;umask=022 ; umask for process (default None) ;priority=-1 ; the relative start priority (default -1) ;autostart=true ; start at supervisord start (default: true) ;autorestart=unexpected ; whether/when to restart (default: unexpected) ;startsecs=1 ; number of secs prog must stay running (def. 
1) ;startretries=3 ; max # of serial start failures (default 3) ;exitcodes=0,2 ; 'expected' exit codes for process (default 0,2) ;stopsignal=QUIT ; signal used to kill process (default TERM) ;stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10) ;killasgroup=false ; SIGKILL the UNIX process group (def false) ;user=chrism ; setuid to this UNIX account to run the program ;redirect_stderr=true ; redirect proc stderr to stdout (default false) ;stdout_logfile=/a/path ; stdout log path, NONE for none; default AUTO ;stdout_logfile_maxbytes=1MB ; max # logfile bytes b4 rotation (default 50MB) ;stdout_logfile_backups=10 ; # of stdout logfile backups (default 10) ;stdout_events_enabled=false ; emit events on stdout writes (default false) ;stderr_logfile=/a/path ; stderr log path, NONE for none; default AUTO ;stderr_logfile_maxbytes=1MB ; max # logfile bytes b4 rotation (default 50MB) ;stderr_logfile_backups ; # of stderr logfile backups (default 10) ;stderr_events_enabled=false ; emit events on stderr writes (default false) ;environment=A=1,B=2 ; process environment additions ;serverurl=AUTO ; override serverurl computation (childutils) ; The below sample group section shows all possible group values, ; create one or more 'real' group: sections to create "heterogeneous" ; process groups. ;[group:thegroupname] ;programs=progname1,progname2 ; each refers to 'x' in [program:x] definitions ;priority=999 ; the relative start priority (default 999) ; The [include] section can just contain the "files" setting. This ; setting can list multiple files (separated by whitespace or ; newlines). It can also contain wildcards. The filenames are ; interpreted as relative to this file. Included files *cannot* ; include files themselves. ;[include] ;files = relative/directory/*.ini
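    One common culprit on CentOS is that /tmp gets cleaned periodically (tmpwatch and similar), which deletes /tmp/supervisor.sock while supervisord itself keeps running; supervisorctl then reports exactly this "no such file" error. A minimal sketch of a workaround, assuming you are free to move the socket (and ideally the pidfile and logfile) out of /tmp - only the settings that change are shown, and the paths are illustrative:
      [unix_http_server]
      file=/var/run/supervisor.sock               ; socket outside /tmp so nothing cleans it up

      [supervisord]
      logfile=/var/log/supervisord.log
      pidfile=/var/run/supervisord.pid

      [supervisorctl]
      serverurl=unix:///var/run/supervisor.sock   ; must match the file= setting above
    It is also worth making sure supervisorctl is reading the same file as the daemon, e.g. supervisorctl -c /etc/supervisord.conf status, since by default it searches several locations for a config.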

    Read the article

  • Need help in setting lighttpd on Ubuntu 9.10

    - by hap497
    Hi, I am trying to run lighttpd on Ubuntu 9.10. I get the conf file from the doc directory of lighttpd source. $ sudo ./lighttpd -f lighttpd.conf $ ps -ef | grep lighttpd root 2094 1 0 19:40 ? 00:00:00 ./lighttpd -f lighttpd.conf This is my lighttpd.conf: $ more lighttpd.conf # lighttpd configuration file # # use it as a base for lighttpd 1.0.0 and above # # $Id: lighttpd.conf,v 1.7 2004/11/03 22:26:05 weigon Exp $ ############ Options you really have to take care of #################### ## modules to load # at least mod_access and mod_accesslog should be loaded # all other module should only be loaded if really neccesary # - saves some time # - saves memory server.modules = ( # "mod_rewrite", # "mod_redirect", # "mod_alias", "mod_access", # "mod_trigger_b4_dl", # "mod_auth", # "mod_status", # "mod_setenv", # "mod_fastcgi", # "mod_proxy", # "mod_simple_vhost", # "mod_evhost", # "mod_userdir", # "mod_cgi", # "mod_compress", # "mod_ssi", # "mod_usertrack", # "mod_expire", # "mod_secdownload", # "mod_rrdtool", "mod_accesslog" ) ## A static document-root. For virtual hosting take a look at the ## mod_simple_vhost module. server.document-root = "/srv/www/htdocs/" ## where to send error-messages to server.errorlog = "/var/log/lighttpd/error.log" # files to check for if .../ is requested index-file.names = ( "index.php", "index.html", "index.htm", "default.htm" ) ## set the event-handler (read the performance section in the manual) # server.event-handler = "freebsd-kqueue" # needed on OS X # mimetype mapping mimetype.assign = ( ".pdf" => "application/pdf", ".sig" => "application/pgp-signature", ".spl" => "application/futuresplash", ".class" => "application/octet-stream", ".ps" => "application/postscript", ".torrent" => "application/x-bittorrent", ".dvi" => "application/x-dvi", ".gz" => "application/x-gzip", ".pac" => "application/x-ns-proxy-autoconfig", ".swf" => "application/x-shockwave-flash", ".tar.gz" => "application/x-tgz", ".tgz" => "application/x-tgz", ".tar" => "application/x-tar", ".zip" => "application/zip", ".mp3" => "audio/mpeg", ".m3u" => "audio/x-mpegurl", ".wma" => "audio/x-ms-wma", ".wax" => "audio/x-ms-wax", ".ogg" => "application/ogg", ".wav" => "audio/x-wav", ".gif" => "image/gif", ".jar" => "application/x-java-archive", ".jpg" => "image/jpeg", ".jpeg" => "image/jpeg", ".png" => "image/png", ".xbm" => "image/x-xbitmap", ".xpm" => "image/x-xpixmap", ".xwd" => "image/x-xwindowdump", ".css" => "text/css", ".html" => "text/html", ".htm" => "text/html", ".js" => "text/javascript", ".asc" => "text/plain", ".c" => "text/plain", ".cpp" => "text/plain", ".log" => "text/plain", ".conf" => "text/plain", ".text" => "text/plain", ".txt" => "text/plain", ".dtd" => "text/xml", ".xml" => "text/xml", ".mpeg" => "video/mpeg", ".mpg" => "video/mpeg", ".mov" => "video/quicktime", ".qt" => "video/quicktime", ".avi" => "video/x-msvideo", ".asf" => "video/x-ms-asf", ".asx" => "video/x-ms-asf", ".wmv" => "video/x-ms-wmv", ".bz2" => "application/x-bzip", ".tbz" => "application/x-bzip-compressed-tar", ".tar.bz2" => "application/x-bzip-compressed-tar", # default mime type "" => "application/octet-stream", ) # Use the "Content-Type" extended attribute to obtain mime type if possible #mimetype.use-xattr = "enable" ## send a different Server: header ## be nice and keep it at lighttpd # server.tag = "lighttpd" #### accesslog module accesslog.filename = "/var/log/lighttpd/access.log" ## deny access the file-extensions # # ~ is for backupfiles from vi, emacs, joe, ... 
# .inc is often used for code includes which should in general not be part # of the document-root url.access-deny = ( "~", ".inc" ) $HTTP["url"] =~ "\.pdf$" { server.range-requests = "disable" } ## # which extensions should not be handle via static-file transfer # # .php, .pl, .fcgi are most often handled by mod_fastcgi or mod_cgi static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" ) ######### Options that are good to be but not neccesary to be changed ####### ## bind to port (default: 80) #server.port = 81 ## bind to localhost (default: all interfaces) #server.bind = "127.0.0.1" ## error-handler for status 404 #server.error-handler-404 = "/error-handler.html" #server.error-handler-404 = "/error-handler.php" ## to help the rc.scripts #server.pid-file = "/var/run/lighttpd.pid" ###### virtual hosts ## ## If you want name-based virtual hosting add the next three settings and load ## mod_simple_vhost ## ## document-root = ## virtual-server-root + virtual-server-default-host + virtual-server-docroot ## or ## virtual-server-root + http-host + virtual-server-docroot ## #simple-vhost.server-root = "/srv/www/vhosts/" #simple-vhost.default-host = "www.example.org" #simple-vhost.document-root = "/htdocs/" ## ## Format: <errorfile-prefix><status-code>.html ## -> ..../status-404.html for 'File not found' #server.errorfile-prefix = "/usr/share/lighttpd/errors/status-" #server.errorfile-prefix = "/srv/www/errors/status-" ## virtual directory listings #dir-listing.activate = "enable" ## select encoding for directory listings #dir-listing.encoding = "utf-8" ## enable debugging #debug.log-request-header = "enable" #debug.log-response-header = "enable" #debug.log-request-handling = "enable" #debug.log-file-not-found = "enable" ### only root can use these options # # chroot() to directory (default: no chroot() ) #server.chroot = "/" ## change uid to <uid> (default: don't care) #server.username = "wwwrun" ## change uid to <uid> (default: don't care) #server.groupname = "wwwrun" #### compress module #compress.cache-dir = "/var/cache/lighttpd/compress/" #compress.filetype = ("text/plain", "text/html") #### proxy module ## read proxy.txt for more info #proxy.server = ( ".php" => # ( "localhost" => # ( # "host" => "192.168.0.101", # "port" => 80 # ) # ) # ) #### fastcgi module ## read fastcgi.txt for more info ## for PHP don't forget to set cgi.fix_pathinfo = 1 in the php.ini #fastcgi.server = ( ".php" => # ( "localhost" => # ( # "socket" => "/var/run/lighttpd/php-fastcgi.s ocket", # "bin-path" => "/usr/local/bin/php-cgi" # ) # ) # ) #### CGI module #cgi.assign = ( ".pl" => "/usr/bin/perl", # ".cgi" => "/usr/bin/perl" ) # #### SSL engine #ssl.engine = "enable" #ssl.pemfile = "/etc/ssl/private/lighttpd.pem" #### status module #status.status-url = "/server-status" #status.config-url = "/server-config" #### auth module ## read authentication.txt for more info #auth.backend = "plain" #auth.backend.plain.userfile = "lighttpd.user" #auth.backend.plain.groupfile = "lighttpd.group" #auth.backend.ldap.hostname = "localhost" #auth.backend.ldap.base-dn = "dc=my-domain,dc=com" #auth.backend.ldap.filter = "(uid=$)" #auth.require = ( "/server-status" => # ( # "method" => "digest", # "realm" => "download archiv", # "require" => "user=jan" # ), # "/server-config" => # ( # "method" => "digest", # "realm" => "download archiv", # "require" => "valid-user" # ) # ) #### url handling modules (rewrite, redirect, access) #url.rewrite = ( "^/$" => "/server-status" ) #url.redirect = ( "^/wishlist/(.+)" => "http://www.123.org/$1" ) 
#### both rewrite/redirect support back reference to regex conditional using %n #$HTTP["host"] =~ "^www\.(.*)" { # url.redirect = ( "^/(.*)" => "http://%1/$1" ) #} # # define a pattern for the host url finding # %% => % sign # %0 => domain name + tld # %1 => tld # %2 => domain name without tld # %3 => subdomain 1 name # %4 => subdomain 2 name # #evhost.path-pattern = "/srv/www/vhosts/%3/htdocs/" #### expire module #expire.url = ( "/buggy/" => "access 2 hours", "/asdhas/" => "ac cess plus 1 seconds 2 minutes") #### ssi #ssi.extension = ( ".shtml" ) #### rrdtool #rrdtool.binary = "/usr/bin/rrdtool" #rrdtool.db-name = "/var/lib/lighttpd/lighttpd.rrd" #### setenv #setenv.add-request-header = ( "TRAV_ENV" => "mysql://user@host/db" ) #setenv.add-response-header = ( "X-Secret-Message" => "42" ) ## for mod_trigger_b4_dl # trigger-before-download.gdbm-filename = "/var/lib/lighttpd/trigger.db" # trigger-before-download.memcache-hosts = ( "127.0.0.1:11211" ) # trigger-before-download.trigger-url = "^/trigger/" # trigger-before-download.download-url = "^/download/" # trigger-before-download.deny-url = "http://127.0.0.1/index.html" # trigger-before-download.trigger-timeout = 10 #### variable usage: ## variable name without "." is auto prefixed by "var." and becomes "var.bar" #bar = 1 #var.mystring = "foo" ## integer add #bar += 1 ## string concat, with integer cast as string, result: "www.foo1.com" #server.name = "www." + mystring + var.bar + ".com" ## array merge #index-file.names = (foo + ".php") + index-file.names #index-file.names += (foo + ".php") #### include #include /etc/lighttpd/lighttpd-inc.conf ## same as above if you run: "lighttpd -f /etc/lighttpd/lighttpd.conf" #include "lighttpd-inc.conf" #### include_shell #include_shell "echo var.a=1" ## the above is same as: #var.a=1 When I go to browser and hit 'http://127.0.0.1', I get link not found. Any idea?
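    A likely explanation for the "link not found" page is that the sample config serves /srv/www/htdocs/ and logs to /var/log/lighttpd/, and neither directory exists on a stock Ubuntu 9.10 install, so there is no index file to serve. A minimal sketch to rule that out, keeping the paths from the config above:
      sudo mkdir -p /srv/www/htdocs /var/log/lighttpd
      echo '<html><body>it works</body></html>' | sudo tee /srv/www/htdocs/index.html
      sudo pkill lighttpd                      # stop the instance started earlier
      sudo ./lighttpd -f lighttpd.conf
      tail /var/log/lighttpd/error.log         # startup errors and missing-docroot messages land here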

    Read the article

  • ServerRoot in my lighttpd.conf

    - by michael
    Hi, I have use the following example lighttpd.conf to launch my lighttpd. Can you please tell me where is my 'ServerRoot'? # lighttpd configuration file # # use it as a base for lighttpd 1.0.0 and above # # $Id: lighttpd.conf,v 1.7 2004/11/03 22:26:05 weigon Exp $ ############ Options you really have to take care of #################### ## modules to load # at least mod_access and mod_accesslog should be loaded # all other module should only be loaded if really neccesary # - saves some time # - saves memory server.modules = ( # "mod_rewrite", # "mod_redirect", # "mod_alias", "mod_access", # "mod_trigger_b4_dl", # "mod_auth", # "mod_status", # "mod_setenv", "mod_fastcgi", # "mod_proxy", # "mod_simple_vhost", # "mod_evhost", # "mod_userdir", # "mod_cgi", # "mod_compress", # "mod_ssi", # "mod_usertrack", # "mod_expire", # "mod_secdownload", # "mod_rrdtool", "mod_accesslog" ) ## A static document-root. For virtual hosting take a look at the ## mod_simple_vhost module. server.document-root = "/srv/www/htdocs/" ## where to send error-messages to server.errorlog = "/var/log/lighttpd/error.log" # files to check for if .../ is requested index-file.names = ( "index.php", "index.html", "index.htm", "default.htm" ) ## set the event-handler (read the performance section in the manual) # server.event-handler = "freebsd-kqueue" # needed on OS X # mimetype mapping mimetype.assign = ( ".pdf" => "application/pdf", ".sig" => "application/pgp-signature", ".spl" => "application/futuresplash", ".class" => "application/octet-stream", ".ps" => "application/postscript", ".torrent" => "application/x-bittorrent", ".dvi" => "application/x-dvi", ".gz" => "application/x-gzip", ".pac" => "application/x-ns-proxy-autoconfig", ".swf" => "application/x-shockwave-flash", ".tar.gz" => "application/x-tgz", ".tgz" => "application/x-tgz", ".tar" => "application/x-tar", ".zip" => "application/zip", ".mp3" => "audio/mpeg", ".m3u" => "audio/x-mpegurl", ".wma" => "audio/x-ms-wma", ".wax" => "audio/x-ms-wax", ".ogg" => "application/ogg", ".wav" => "audio/x-wav", ".gif" => "image/gif", ".jar" => "application/x-java-archive", ".jpg" => "image/jpeg", ".jpeg" => "image/jpeg", ".png" => "image/png", ".xbm" => "image/x-xbitmap", ".xpm" => "image/x-xpixmap", ".xwd" => "image/x-xwindowdump", ".css" => "text/css", ".html" => "text/html", ".htm" => "text/html", ".js" => "text/javascript", ".asc" => "text/plain", ".c" => "text/plain", ".cpp" => "text/plain", ".log" => "text/plain", ".conf" => "text/plain", ".text" => "text/plain", ".txt" => "text/plain", ".dtd" => "text/xml", ".xml" => "text/xml", ".mpeg" => "video/mpeg", ".mpg" => "video/mpeg", ".mov" => "video/quicktime", ".qt" => "video/quicktime", ".avi" => "video/x-msvideo", ".asf" => "video/x-ms-asf", ".asx" => "video/x-ms-asf", ".wmv" => "video/x-ms-wmv", ".bz2" => "application/x-bzip", ".tbz" => "application/x-bzip-compressed-tar", ".tar.bz2" => "application/x-bzip-compressed-tar", # default mime type "" => "application/octet-stream", ) # Use the "Content-Type" extended attribute to obtain mime type if possible #mimetype.use-xattr = "enable" ## send a different Server: header ## be nice and keep it at lighttpd # server.tag = "lighttpd" #### accesslog module accesslog.filename = "/var/log/lighttpd/access.log" ## deny access the file-extensions # # ~ is for backupfiles from vi, emacs, joe, ... 
# .inc is often used for code includes which should in general not be part # of the document-root url.access-deny = ( "~", ".inc" ) $HTTP["url"] =~ "\.pdf$" { server.range-requests = "disable" } ## # which extensions should not be handle via static-file transfer # # .php, .pl, .fcgi are most often handled by mod_fastcgi or mod_cgi static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" ) ######### Options that are good to be but not neccesary to be changed ####### ## bind to port (default: 80) server.port = 9090 ## bind to localhost (default: all interfaces) server.bind = "127.0.0.1" ## error-handler for status 404 #server.error-handler-404 = "/error-handler.html" #server.error-handler-404 = "/error-handler.php" ## to help the rc.scripts #server.pid-file = "/var/run/lighttpd.pid" ###### virtual hosts ## ## If you want name-based virtual hosting add the next three settings and load ## mod_simple_vhost ## ## document-root = ## virtual-server-root + virtual-server-default-host + virtual-server-docroot ## or ## virtual-server-root + http-host + virtual-server-docroot ## #simple-vhost.server-root = "/srv/www/vhosts/" #simple-vhost.default-host = "www.example.org" #simple-vhost.document-root = "/htdocs/" ## ## Format: <errorfile-prefix><status-code>.html ## -> ..../status-404.html for 'File not found' #server.errorfile-prefix = "/usr/share/lighttpd/errors/status-" #server.errorfile-prefix = "/srv/www/errors/status-" ## virtual directory listings #dir-listing.activate = "enable" ## select encoding for directory listings #dir-listing.encoding = "utf-8" ## enable debugging #debug.log-request-header = "enable" #debug.log-response-header = "enable" #debug.log-request-handling = "enable" #debug.log-file-not-found = "enable" ### only root can use these options # # chroot() to directory (default: no chroot() ) #server.chroot = "/" ## change uid to <uid> (default: don't care) #server.username = "wwwrun" ## change uid to <uid> (default: don't care) #server.groupname = "wwwrun" #### compress module #compress.cache-dir = "/var/cache/lighttpd/compress/" #compress.filetype = ("text/plain", "text/html") #### proxy module ## read proxy.txt for more info #proxy.server = ( ".php" => # ( "localhost" => # ( # "host" => "192.168.0.101", # "port" => 80 # ) # ) # ) #### fastcgi module fastcgi.server = ( "/fastcgi_scripts/" => (( "host" => "127.0.0.1", "port" => 1026, "check-local" => "disable", "bin-path" => "/usr/local/bin/cgi-fcgi", #"docroot" => "/" # remote server may use # it's own docroot )) ) ## read fastcgi.txt for more info ## for PHP don't forget to set cgi.fix_pathinfo = 1 in the php.ini #fastcgi.server = ( ".php" => # ( "localhost" => # ( # "socket" => "/var/run/lighttpd/php-fastcgi.socket", # "bin-path" => "/usr/local/bin/php-cgi" # ) # ) # ) #### CGI module #cgi.assign = ( ".pl" => "/usr/bin/perl", # ".cgi" => "/usr/bin/perl" ) # #### SSL engine #ssl.engine = "enable" #ssl.pemfile = "/etc/ssl/private/lighttpd.pem" #### status module #status.status-url = "/server-status" #status.config-url = "/server-config" #### auth module ## read authentication.txt for more info #auth.backend = "plain" #auth.backend.plain.userfile = "lighttpd.user" #auth.backend.plain.groupfile = "lighttpd.group" #auth.backend.ldap.hostname = "localhost" #auth.backend.ldap.base-dn = "dc=my-domain,dc=com" #auth.backend.ldap.filter = "(uid=$)" #auth.require = ( "/server-status" => # ( # "method" => "digest", # "realm" => "download archiv", # "require" => "user=jan" # ), # "/server-config" => # ( # "method" => "digest", # "realm" => 
"download archiv", # "require" => "valid-user" # ) # ) #### url handling modules (rewrite, redirect, access) #url.rewrite = ( "^/$" => "/server-status" ) #url.redirect = ( "^/wishlist/(.+)" => "http://www.123.org/$1" ) #### both rewrite/redirect support back reference to regex conditional using %n #$HTTP["host"] =~ "^www\.(.*)" { # url.redirect = ( "^/(.*)" => "http://%1/$1" ) #} # # define a pattern for the host url finding # %% => % sign # %0 => domain name + tld # %1 => tld # %2 => domain name without tld # %3 => subdomain 1 name # %4 => subdomain 2 name # #evhost.path-pattern = "/srv/www/vhosts/%3/htdocs/" #### expire module #expire.url = ( "/buggy/" => "access 2 hours", "/asdhas/" => "access plus 1 seconds 2 minutes") #### ssi #ssi.extension = ( ".shtml" ) #### rrdtool #rrdtool.binary = "/usr/bin/rrdtool" #rrdtool.db-name = "/var/lib/lighttpd/lighttpd.rrd" #### setenv #setenv.add-request-header = ( "TRAV_ENV" => "mysql://user@host/db" ) #setenv.add-response-header = ( "X-Secret-Message" => "42" ) ## for mod_trigger_b4_dl # trigger-before-download.gdbm-filename = "/var/lib/lighttpd/trigger.db" # trigger-before-download.memcache-hosts = ( "127.0.0.1:11211" ) # trigger-before-download.trigger-url = "^/trigger/" # trigger-before-download.download-url = "^/download/" # trigger-before-download.deny-url = "http://127.0.0.1/index.html" # trigger-before-download.trigger-timeout = 10 #### variable usage: ## variable name without "." is auto prefixed by "var." and becomes "var.bar" #bar = 1 #var.mystring = "foo" ## integer add #bar += 1 ## string concat, with integer cast as string, result: "www.foo1.com" #server.name = "www." + mystring + var.bar + ".com" ## array merge #index-file.names = (foo + ".php") + index-file.names #index-file.names += (foo + ".php") #### include #include /etc/lighttpd/lighttpd-inc.conf ## same as above if you run: "lighttpd -f /etc/lighttpd/lighttpd.conf" #include "lighttpd-inc.conf" #### include_shell #include_shell "echo var.a=1" ## the above is same as: #var.a=1 Thank you.

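    lighttpd has no Apache-style ServerRoot directive, so there is nothing in this file to point to: content is served from server.document-root, and the other locations (logs, pid file, optional chroot) are each set explicitly rather than being resolved against a single server root. The relevant lines from the config above are:
      server.document-root = "/srv/www/htdocs/"               # where pages are served from
      server.errorlog      = "/var/log/lighttpd/error.log"
      accesslog.filename   = "/var/log/lighttpd/access.log"
      #server.chroot       = "/"                              # optional; the closest thing to a "root" for the daemon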
    Read the article

  • CentOS - Add additional hard drive raid arrays on Dell Perc 5/i card

    - by Quanano
    We have a Dell Poweredge 2900 system with Dell Perc 5/i card and 4 SAS hard drives attached, with NTFS partitions on them. We installed CentOS on one raid array on this controller with a different controller and it is working fine. We are now trying to access the drives shown above and they are not being shown in /dev as sdb, etc. sda is the drive that we installed centos on and it has sda1, sda2, sda3, etc. The CDROM has been picked up as well. If I scan for scsi devices then the perc and adaptec controllers are both found. sg0 is the CDROM and sg2 is the centos installed, however I think sg1 is the other drive but I cannot see anyway to mount the partitions, as only the drive is listed in /dev. Thanks. EXTRA INFO fdisk -l: Disk /dev/sda: 72.7 GB, 72746008576 bytes 255 heads, 63 sectors/track, 8844 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x11e3119f Device Boot Start End Blocks Id System /dev/sda1 * 1 64 512000 83 Linux Partition 1 does not end on cylinder boundary. /dev/sda2 64 8845 70528000 8e Linux LVM Disk /dev/mapper/vg_lal2server-lv_root: 34.4 GB, 34431041536 bytes 255 heads, 63 sectors/track, 4186 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/mapper/vg_lal2server-lv_root doesn't contain a valid partition table Disk /dev/mapper/vg_lal2server-lv_swap: 21.1 GB, 21139292160 bytes 255 heads, 63 sectors/track, 2570 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/mapper/vg_lal2server-lv_swap doesn't contain a valid partition table Disk /dev/mapper/vg_lal2server-lv_home: 16.6 GB, 16647192576 bytes 255 heads, 63 sectors/track, 2023 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/mapper/vg_lal2server-lv_home doesn't contain a valid partition table These are all from the install hdd not the additional hard drives modprobe a320raid FATAL: Module a320raid not found. lsscsi -v: [0:0:0:0] cd/dvd TSSTcorp CDRWDVD TS-H492C DE02 /dev/sr0 dir: /sys/bus/scsi/devices/0:0:0:0 [/sys/devices/pci0000:00/0000:00:1f.1/host0/target0:0:0/0:0:0:0] [4:0:10:0] enclosu DP BACKPLANE 1.05 - dir: /sys/bus/scsi/devices/4:0:10:0 [/sys/devices/pci0000:00/0000:00:05.0/0000:01:00.0/0000:02:0e.0/host4/target4:0:10/4:0:10:0] [4:2:0:0] disk DELL PERC 5/i 1.03 /dev/sda dir: /sys/bus/scsi/devices/4:2:0:0 [/sys/devices/pci0000:00/0000:00:05.0/0000:01:00.0/0000:02:0e.0/host4/target4:2:0/4:2:0:0] . 
lsmod: Module Size Used by fuse 66285 0 des_generic 16604 0 ecb 2209 0 md4 3461 0 nls_utf8 1455 0 cifs 278370 0 autofs4 26888 4 ipt_REJECT 2383 0 ip6t_REJECT 4628 2 nf_conntrack_ipv6 8748 2 nf_defrag_ipv6 12182 1 nf_conntrack_ipv6 xt_state 1492 2 nf_conntrack 79453 2 nf_conntrack_ipv6,xt_state ip6table_filter 2889 1 ip6_tables 19458 1 ip6table_filter ipv6 322029 31 ip6t_REJECT,nf_conntrack_ipv6,nf_defrag_ipv6 bnx2 79618 0 ses 6859 0 enclosure 8395 1 ses dcdbas 9219 0 serio_raw 4818 0 sg 30124 0 iTCO_wdt 13662 0 iTCO_vendor_support 3088 1 iTCO_wdt i5000_edac 8867 0 edac_core 46773 3 i5000_edac i5k_amb 5105 0 shpchp 33482 0 ext4 364410 3 mbcache 8144 1 ext4 jbd2 88738 1 ext4 sd_mod 39488 3 crc_t10dif 1541 1 sd_mod sr_mod 16228 0 cdrom 39771 1 sr_mod megaraid_sas 77090 2 aic79xx 129492 0 scsi_transport_spi 26151 1 aic79xx pata_acpi 3701 0 ata_generic 3837 0 ata_piix 22846 0 radeon 1023359 1 ttm 70328 1 radeon drm_kms_helper 33236 1 radeon drm 230675 3 radeon,ttm,drm_kms_helper i2c_algo_bit 5762 1 radeon i2c_core 31276 4 radeon,drm_kms_helper,drm,i2c_algo_bit dm_mirror 14101 0 dm_region_hash 12170 1 dm_mirror dm_log 10122 2 dm_mirror,dm_region_hash dm_mod 81500 11 dm_mirror,dm_log blkid: /dev/sda1: UUID="bc4777d9-ae2c-4c58-96ea-cedb342b8338" TYPE="ext4" /dev/sda2: UUID="j2wRZr-Mlko-QWBR-BndC-V2uN-vdhO-iKCuYu" TYPE="LVM2_member" /dev/mapper/vg_lal2server-lv_root: UUID="9238208a-1daf-4c3c-aa9b-469f0387ebee" TYPE="ext4" /dev/mapper/vg_lal2server-lv_swap: UUID="dbefb39c-5871-4bc9-b767-1ef18f12bd3d" TYPE="swap" /dev/mapper/vg_lal2server-lv_home: UUID="ec698993-08b7-443e-84f0-9f9cb31c5da8" TYPE="ext4" dmesg shows: megaraid_sas: fw state:c0000000 megasas: fwstate:c0000000, dis_OCR=0 scsi2 : LSI SAS based MegaRAID driver scsi 2:0:0:0: Direct-Access SEAGATE ST3146855SS S527 PQ: 0 ANSI: 5 scsi 2:0:1:0: Direct-Access SEAGATE ST3146855SS S527 PQ: 0 ANSI: 5 scsi 2:0:2:0: Direct-Access SEAGATE ST3146855SS S527 PQ: 0 ANSI: 5 scsi 2:0:3:0: Direct-Access SEAGATE ST3146855SS S527 PQ: 0 ANSI: 5 scsi 2:0:4:0: Direct-Access HITACHI HUS154545VLS300 D590 PQ: 0 ANSI: 5 scsi 2:0:5:0: Direct-Access HITACHI HUS154545VLS300 D590 PQ: 0 ANSI: 5 scsi 2:0:8:0: Direct-Access FUJITSU MBA3073RC D305 PQ: 0 ANSI: 5 scsi 2:0:9:0: Direct-Access FUJITSU MBA3073RC D305 PQ: 0 ANSI: 5 i.e. the 3 RAID Arrays Seagate Hitatchi and Fujitsu hard drives respectively. FURTHER UPDATE I have installed the megaraid storage manager console and connected to the server. It appears that the two CentOS installation hard drives are OK. The other 6 drives, one raid array of 4 and one raid array of 2 disks. The other drives are listed as (Foreign) Unconfigured Good.
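    Since the storage manager reports the remaining disks as (Foreign) Unconfigured Good, the PERC is not presenting those arrays as virtual disks at all, so there is nothing for Linux to enumerate in /dev yet; the foreign configuration has to be imported first (from the MegaRAID storage manager or the controller BIOS). Once that is done, a rescan sketch along these lines should surface the new virtual disks without a reboot - host numbers and device names will vary:
      # ask every SCSI host to rescan its bus for newly presented devices
      for h in /sys/class/scsi_host/host*/scan; do echo "- - -" | sudo tee "$h"; done
      # re-read partition tables and confirm what appeared
      sudo partprobe
      cat /proc/partitions
      sudo fdisk -l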

    Read the article

  • Can't obtain reference to EKReminder array retrieved from fetchRemindersMatchingPredicate

    - by Scionwest
    When I create an NSPredicate via EKEventStore predicateForRemindersInCalendars; and pass it to EKEventStore fetchRemindersMatchingPredicate:completion:^ I can loop through the reminders array provided by the completion code block, but when I try to store a reference to the reminders array, or create a copy of the array into a local variable or instance variable, both array's remain empty. The reminders array is never copied to them. This is the method I am using, in the method, I create a predicate, pass it to the event store and then loop through all of the reminders logging their title via NSLog. I can see the reminder titles during runtime thanks to NSLog, but the local arrayOfReminders object is empty. I also try to add each reminder into an instance variable of NSMutableArray, but once I leave the completion code block, the instance variable remains empty. Am I missing something here? Can someone please tell me why I can't grab a reference to all of the reminders for use through-out the app? I am not having any issues at all accessing and storing EKEvents, but for some reason I can't do it with EKReminders. - (void)findAllReminders { NSPredicate *predicate = [self.eventStore predicateForRemindersInCalendars:nil]; __block NSArray *arrayOfReminders = [[NSArray alloc] init]; [self.eventStore fetchRemindersMatchingPredicate:predicate completion:^(NSArray *reminders) { arrayOfReminders = [reminders copy]; //Does not work. for (EKReminder *reminder in reminders) { [self.remindersForTheDay addObject:reminder]; NSLog(@"%@", reminder.title); } }]; //Always = 0; if ([self.remindersForTheDay count]) { NSLog(@"Instance Variable has reminders!"); } //Always = 0; if ([arrayOfReminders count]) { NSLog(@"Local Variable has reminders!"); } } The eventStore getter is where I perform my instantiation and get access to the event store. - (EKEventStore *)eventStore { if (!_eventStore) { _eventStore = [[EKEventStore alloc] init]; //respondsToSelector indicates iOS 6 support. if ([_eventStore respondsToSelector:@selector(requestAccessToEntityType:completion:)]) { //Request access to user calendar [_eventStore requestAccessToEntityType:EKEntityTypeEvent completion:^(BOOL granted, NSError *error) { if (granted) { NSLog(@"iOS 6+ Access to EventStore calendar granted."); } else { NSLog(@"Access to EventStore calendar denied."); } }]; //Request access to user Reminders [_eventStore requestAccessToEntityType:EKEntityTypeReminder completion:^(BOOL granted, NSError *error) { if (granted) { NSLog(@"iOS 6+ Access to EventStore Reminders granted."); } else { NSLog(@"Access to EventStore Reminders denied."); } }]; } else { //iOS 5.x and lower support if Selector is not supported NSLog(@"iOS 5.x < Access to EventStore calendar granted."); } for (EKCalendar *cal in self.calendars) { NSLog(@"Calendar found: %@", cal.title); } [_eventStore reset]; } return _eventStore; } Lastly, just to show that I am initializing my remindersForTheDay instance variable using lazy instantiation. - (NSMutableArray *)remindersForTheDay { if (!_remindersForTheDay) _remindersForTheDay = [[NSMutableArray alloc] init]; return _remindersForTheDay; } I've read through the Apple documentation and it doesn't provide any explanation that I can find to answer this. I read through the Blocks Programming docs and it states that you can access local and instance variables without issues from within a block, but for some reason, the above code does not work. 
Any help would be greatly appreciated, I've scoured Google for answers but have yet to get this figured out. Thanks everyone! Johnathon.
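    The arrays stay empty because fetchRemindersMatchingPredicate:completion: is asynchronous: both count checks at the bottom of findAllReminders run before the completion block has fired, so copying the array inside the block is not the problem - reading it too early is. A minimal sketch (not the original code) that consumes the results where they are actually available:
      - (void)findAllReminders
      {
          NSPredicate *predicate = [self.eventStore predicateForRemindersInCalendars:nil];
          [self.eventStore fetchRemindersMatchingPredicate:predicate
                                                completion:^(NSArray *reminders) {
              // This block runs later, on a background queue, after findAllReminders has returned.
              dispatch_async(dispatch_get_main_queue(), ^{
                  [self.remindersForTheDay setArray:reminders];
                  NSLog(@"Fetched %lu reminders", (unsigned long)[reminders count]);
                  // Update the UI or call a completion handler of your own here.
              });
          }];
          // Anything checked down here still sees empty arrays - the fetch has not finished yet.
      }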

    Read the article

  • MSMQ messages using HTTP just won't get delivered

    - by John Breakwell
    I'm starting off the blog with a discussion of an unusual problem that has hit a couple of my customers this month. It's not a problem you'd expect to bump into and the solution is potentially painful. Scenario You want to make use of the HTTP protocol to send MSMQ messages from one machine to another. You have installed HTTP support for MSMQ and have addressed your messages correctly but they will not leave the outgoing queue. There is no configuration for HTTP support - setup has already done all that for you (although you may want to check the most recent "Installation of the MSMQ HTTP Support Subcomponent" section of MSMQINST.LOG to see if anything DID go wrong) - so you can't tweak anything. Restarting services and servers makes no difference - the messages just will not get delivered. The problem is documented and resolved by Knowledgebase article 916699 "The message may not be delivered when you use the HTTP protocol to send a message to a server that is running Message Queuing 3.0". It is unlikely that you would be able to resolve the problem without the assistance of PSS because there are no messages that can be seen to assist you and only access to the source code exposes the root cause. As this communication is over HTTP, the IIS logs would be a good place to start. POST entries are logged which show that connectivity is working and message delivery is being attempted: #Software: Microsoft Internet Information Services 6.0 #Version: 1.0 #Date: 2006-09-12 12:11:29 #Fields: date time s-sitename s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs(User-Agent) sc-status sc-substatus sc-win32-status 2006-09-12 12:12:12 W3SVC1 10.1.17.219 POST /msmq/private$/test - 80 - 10.2.200.3 - 200 0 0 If you capture the traffic with Network Monitor you can see the POST being sent to the server but you also see a response being returned to the client: HTTP: Response to Client; HTTP/1.1; Status Code = 500 - Internal Server Error "Internal Server Error" means we can probably stop looking at IIS and instead focus on the Message Queuing ISAPI extension (Mqise.dll). MSMQ 3.0 (Windows XP and Windows Server 2003) comes with error logging enabled by default but the log files are in binary format - MSMQ 2.0 generated logging in plain text. The symbolic information needed for formatting the files is not currently publicly available so log files have to be sent in to Microsoft PSS.  Although this does mean raising a support case, formatting the log files to text and returning them to the customer shouldn't take long. Obviously the engineer analyses them for you - I just want to point out that you can see the logging output in text format if you want it. The important entries in the log for this problem are: [7]b48.928 09/12/2006-13:20:44.552 [mqise GetNetBiosNameFromIPAddr] ERROR:Failed to get the NetBios name from the DNS name, error = 0xea [7]b48.928 09/12/2006-13:20:44.552 [mqise RPCToServer] ERROR:RPC call R_ProcessHTTPRequest failed, error code = 1702(RPC_S_INVALID_BINDING) which allow a Microsoft escalation engineer to check the MQISE source code to see what is going wrong. This problem according to the article occurs when the extension tries to bind to the local MSMQ service after the extension receives a POST request that contains an MSMQ message. 
MSMQ resolves the server name by using the DNS host name but the extension cannot bind to the service because the buffer that MSMQ uses to resolve the server name is too small - server names that are exactly 15 characters long will not fit. RPC exception 0x6a6 (RPC_S_INVALID_BINDING) occurs in the W3wp.exe process but the exception is handled and so you do not receive an error message. The workaround is to rename the MSMQ server to something less than 15 characters. If the problem has only just been noticed in a production environment - an application may have been modified to get through a newly-implemented firewall, for example - then renaming is going to be an issue. Other applications may need to be reinstalled or modified if server names are hard-coded or stored in the registry. The renaming may also break a company naming convention where the name is built up from something like location+department+number. If you want to learn more about MSMQ logging then check out Chapter 15 of the MSMQ FAQ. In fact, even if you DON'T want to learn anything about MSMQ logging you should read the FAQ anyway as there is a huge amount of useful information on known issues and the like.
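    If you want to check whether a machine is exposed before going down the renaming route, the test is simply whether its NetBIOS computer name is exactly 15 characters - the NetBIOS maximum, and the length that overflows the buffer described above. From a command prompt (the machine name shown is made up for illustration):
      C:\> echo %COMPUTERNAME%
      LONDONMSMQWEB01
    That name is exactly 15 characters, so that box would hit the KB916699 problem; a name of 14 characters or fewer is unaffected.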

    Read the article

  • Add Social Elements to Your Gmail Contacts with Rapportive

    - by Matthew Guay
    Would you like to discover more about your contacts?  Xobni is a great tool for this in Outlook, and thanks to a small plugin for Gmail, you can get similar functionality right from your favorite webmail app. Setup Rapportive on Your Gmail Browse to the Rapportive site (link below), and click install to add it to your browser.  Rapportive currently only supports Firefox and Google Chrome.  In this test, we installed it on Google Chrome.  Notice that Chrome warns Rapportive may access your private data from Gmail, though Rapportive says that they only use this data securely on your computer or their servers. Next time you log into Gmail, open a message to see the new Rapportive sidebar.  Click Log in to get started. Choose if you want to let Rapportive access your data. Finally, choose whether to stay logged into Rapportive or to log out when you log out of Gmail.   Using Rapportive Now, when you open an email, you should see more information about your contact on the right side of the message where you usually see Google AdSense ads. You may see an avatar, short bio, and links to their social networks.  You can also add notes about a contact, which lets you use Rapportive as a CRM. You may see more information on some contacts.  Here we see a contact that shows recent Tweets and links to several social networks. Take Rapportive Further You can add more features to Rapportive with Raplets, which are small extensions that add more information or CRM functionality.  To add these, click the Rapportive button on the top of Gmail, and select Add Raplets to Rapportive. Find a Raplet you want, and click Add This. A popup will open to give you more information about the Raplet; click the Add button at the bottom if you still want it. And, if you wish to close Rapportive without logging out of Gmail, click the Rapportive link in Gmail and select Log out. Conclusion Whether you want to find out more about your contacts or keep track of notes about them, Rapportive is a great way to do this from Gmail.  With tools like this, Gmail gets a bit more powerful and feels more like a desktop application. If you would like this type of functionality in Outlook, check out our article on how to power up Outlook’s search and contacts with Xobni. Add Rapportive to Gmail

    Read the article

  • BIP BIServer Query Debug

    - by Tim Dexter
    With some help from Bryan, I have uncovered a way of being able to debug, or at least log, what BIServer is doing when BIP sends it a query request. This is not for those of you querying the database directly, but for those using the BIServer and its datamodel to fetch data for a BIP report. If you have written or used the query builder against BIServer and, when you run the report, it chokes with a cryptic message that you have no clue about, read on. When BIP runs a piece of BIServer logical SQL to fetch data, it does not appear to validate it; it just passes it through, so what is BIServer doing on its end? As you may know, you are not writing regular physical SQL - it's actually logical SQL, e.g. select Jobs."Job Title" as "Job Title", Employees."Last Name" as "Last Name", Employees.Salary as Salary, Locations."Department Name" as "Department Name", Locations."Country Name" as "Country Name", Locations."Region Name" as "Region Name" from HR.Locations Locations, HR.Employees Employees, HR.Jobs Jobs The tables might not even be physical tables; we don't care, that's what the BIServer and its model are for. You have put all the effort into building the model - just go get me the data from wherever it might be. The BIServer takes the logical SQL and uses its vast brain to work out what the physical SQL is, executes it and passes the result back to BIP. select distinct T32556.JOB_TITLE as c1, T32543.LAST_NAME as c2, T32543.SALARY as c3, T32537.DEPARTMENT_NAME as c4, T32532.COUNTRY_NAME as c5, T32577.REGION_NAME as c6 from JOBS T32556, REGIONS T32577, COUNTRIES T32532, LOCATIONS T32569, DEPARTMENTS T32537, EMPLOYEES T32543 where ( T32532.COUNTRY_ID = T32569.COUNTRY_ID and T32532.REGION_ID = T32577.REGION_ID and T32537.DEPARTMENT_ID = T32543.DEPARTMENT_ID and T32537.LOCATION_ID = T32569.LOCATION_ID and T32543.JOB_ID = T32556.JOB_ID ) Not a very tough example I know, but you get the idea. How do I know what the BIServer is up to? How can I find out what the issue might be if BIServer chokes on my query? There are a couple of steps: In the Administrator tool you need to set the logging level for the Administrator user to something greater than the default '0'. '7' is going to give you the max. Just remember to take it back down after you have finished the debug. I needed to bounce my BIServer service. Now here's the secret sauce. Prefix the following to your BIP query: set variable LOGLEVEL = 7; Set the log level to the value you have in the admin tool. Now run your BIP report. With the prefix in place, BIServer will write to the NQQuery.log file. This is located in the ./OracleBI/server/Log directory. In there you are going to find the complete process the BIServer has gone through to try and get the data back for you. A quick note: if the BIServer can, it's going to hit that great BIEE cache to get your data and you may not see the full log. If this is the case, get into the Administration page (via the browser login) and clear out your BIP report cursor. Then re-run. This will hopefully help out if you are trying to debug that annoying BIP report that will not run or is getting some strange data. Don't forget to turn that logging level back down once you are done. This will avoid the DBA screaming at you for sucking up all the disk space on the system.
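    To make the secret-sauce step concrete, here is roughly what the prefixed request looks like, reusing the sample HR logical SQL from the post (swap in your own logical SQL; the prefix line is the only addition, and it should be removed once debugging is done):

        set variable LOGLEVEL = 7;
        select Jobs."Job Title" as "Job Title",
               Employees."Last Name" as "Last Name",
               Employees.Salary as Salary,
               Locations."Department Name" as "Department Name",
               Locations."Country Name" as "Country Name",
               Locations."Region Name" as "Region Name"
        from HR.Locations Locations, HR.Employees Employees, HR.Jobs Jobs

    After the report runs, NQQuery.log under ./OracleBI/server/Log will show the logical request, the physical SQL the BIServer generated, and the navigation steps it took along the way.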

    Read the article

  • Integration Patterns with Azure Service Bus Relay, Part 3.5: Node.js relay

    - by Elton Stoneman
    This is an extension to Part 3 in the IPASBR series, see also: Integration Patterns with Azure Service Bus Relay, Part 1: Exposing the on-premise service Integration Patterns with Azure Service Bus Relay, Part 2: Anonymous full-trust .NET consumer Integration Patterns with Azure Service Bus Relay, Part 3: Anonymous partial-trust consumer In Part 3 I said “there isn't actually a .NET requirement here”, and this post just follows up on that statement. In Part 3 we had an ASP.NET MVC Website making a REST call to an Azure Service Bus service; to show that the REST stuff is really interoperable, in this version we use Node.js to make the secure service call. The code is on GitHub here: IPASBR Part 3.5. The sample code is simpler than Part 3 - rather than code up a UI in Node.js, the sample just relays the REST service call out to Azure. The steps are the same as Part 3: REST call to ACS with the service identity credentials, which returns an SWT; REST call to Azure Service Bus Relay, presenting the SWT; request gets relayed to the on-premise service. In Node.js the authentication step looks like this: var options = { host: acs.namespace() + '-sb.accesscontrol.windows.net', path: '/WRAPv0.9/', method: 'POST' }; var values = { wrap_name: acs.issuerName(), wrap_password: acs.issuerSecret(), wrap_scope: 'http://' + acs.namespace() + '.servicebus.windows.net/' }; var req = https.request(options, function (res) { console.log("statusCode: ", res.statusCode); console.log("headers: ", res.headers); res.on('data', function (d) { var token = qs.parse(d.toString('utf8')); callback(token.wrap_access_token); }); }); req.write(qs.stringify(values)); req.end(); Once we have the token, we can wrap it up into an Authorization header and pass it to the Service Bus call: token = 'WRAP access_token=\"' + swt + '\"'; //... var reqHeaders = { Authorization: token }; var options = { host: acs.namespace() + '.servicebus.windows.net', path: '/rest/reverse?string=' + requestUrl.query.string, headers: reqHeaders }; var req = https.request(options, function (res) { console.log("statusCode: ", res.statusCode); console.log("headers: ", res.headers); response.writeHead(res.statusCode, res.headers); res.on('data', function (d) { var reversed = d.toString('utf8') console.log('svc returned: ' + d.toString('utf8')); response.end(reversed); }); }); req.end(); Running the sample Usual routine to add your own Azure details into Solution Items\AzureConnectionDetails.xml and “Run Custom Tool” on the .tt files. Build and you should be able to navigate to the on-premise service at http://localhost/Sixeyed.Ipasbr.Services/FormatService.svc/rest/reverse?string=abc123 and get a string response, going to the service direct. Install Node.js (v0.8.14 at time of writing), run FormatServiceRelay.cmd, navigate to http://localhost:8013/reverse?string=abc123, and you should get exactly the same response but through Node.js, via Azure Service Bus Relay to your on-premise service. The console logs the WRAP token returned from ACS and the response from Azure Service Bus Relay which it forwards:

    Read the article

  • 5.1 surround sound on Acer Aspire 5738ZG with Ubuntu 11.10

    - by kbargais_LV
    I got a problem with sound. I tried everything but no results. :( I got 3 sound ports. my daemon: # This file is part of PulseAudio. # # PulseAudio is free software; you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # PulseAudio is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with PulseAudio; if not, write to the Free Software # Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 # USA. ## Configuration file for the PulseAudio daemon. See pulse-daemon.conf(5) for ## more information. Default values are commented out. Use either ; or # for ## commenting. ; daemonize = no ; fail = yes ; allow-module-loading = yes ; allow-exit = yes ; use-pid-file = yes ; system-instance = no ; local-server-type = user ; enable-shm = yes ; shm-size-bytes = 0 # setting this 0 will use the system-default, usually 64 MiB ; lock-memory = no ; cpu-limit = no ; high-priority = yes ; nice-level = -11 ; realtime-scheduling = yes ; realtime-priority = 5 ; exit-idle-time = 20 ; scache-idle-time = 20 ; dl-search-path = (depends on architecture) ; load-default-script-file = yes ; default-script-file = /etc/pulse/default.pa ; log-target = auto ; log-level = notice ; log-meta = no ; log-time = no ; log-backtrace = 0 resample-method = speex-float-1 ; enable-remixing = yes ; enable-lfe-remixing = no flat-volumes = no ; rlimit-fsize = -1 ; rlimit-data = -1 ; rlimit-stack = -1 ; rlimit-core = -1 ; rlimit-as = -1 ; rlimit-rss = -1 ; rlimit-nproc = -1 ; rlimit-nofile = 256 ; rlimit-memlock = -1 ; rlimit-locks = -1 ; rlimit-sigpending = -1 ; rlimit-msgqueue = -1 ; rlimit-nice = 31 ; rlimit-rtprio = 9 ; rlimit-rttime = 1000000 ; default-sample-format = s16le ; default-sample-rate = 44100 ; default-sample-channels = 6 ; default-channel-map = front-left,front-right default-fragments = 8 default-fragment-size-msec = 10 ; enable-deferred-volume = yes ; deferred-volume-safety-margin-usec = 8000 ; deferred-volume-extra-delay-usec = 0
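    If the goal is 5.1 output, note that in the daemon.conf above both default-sample-channels and default-channel-map are still commented out, so PulseAudio is never actually asked for six channels. Purely as a sketch of something to try (whether the Aspire's codec can drive 5.1 through its three jacks is a separate question, and the card profile also has to be set to an Analog Surround 5.1 profile in the sound settings), the relevant lines would look roughly like this:

        ; in /etc/pulse/daemon.conf - uncomment and adjust, then restart PulseAudio (pulseaudio -k)
        default-sample-channels = 6
        default-channel-map = front-left,front-right,rear-left,rear-right,front-center,lfe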

    Read the article

  • Move a sphere along the swipe?

    - by gameOne
    I am trying to get a sphere to curl based on the swipe. I know this has been asked many times, but it is still yearning to be answered. I have managed to add force in the direction of the swipe and it works almost perfectly. I also have all the swipe positions stored in a list. Now I would like to know how the curl can be achieved. I believe the curve in the swipe can be calculated with the vector dot product. If theta is 0, then there is no need to add the curl. If it is not, then add the curl. Maybe this condition is redundant if I manage to find out how to curl the sphere along the swipe positions. The code that adds the force to the sphere based on the swipe direction is below: using UnityEngine; using System.Collections; using System.Collections.Generic; public class SwipeControl : MonoBehaviour { //First establish some variables private Vector3 fp; //First finger position private Vector3 lp; //Last finger position private Vector3 ip; //some intermediate finger position private float dragDistance; //Distance needed for a swipe to register public float power; private Vector3 footballPos; private bool canShoot = true; private bool isKicked = false; private float factor = 40f; private List<Vector3> touchPositions = new List<Vector3>(); void Start(){ dragDistance = Screen.height*20/100; Physics.gravity = new Vector3(0, -20, 0); footballPos = transform.position; } // Update is called once per frame void Update() { //Examine the touch inputs foreach (Touch touch in Input.touches) { /*if (touch.phase == TouchPhase.Began) { fp = touch.position; lp = touch.position; }*/ if (touch.phase == TouchPhase.Moved) { touchPositions.Add(touch.position); } if (touch.phase == TouchPhase.Ended) { fp = touchPositions[0]; lp = touchPositions[touchPositions.Count-1]; ip = touchPositions[touchPositions.Count/2]; //First check if it's actually a drag if (Mathf.Abs(lp.x - fp.x) > dragDistance || Mathf.Abs(lp.y - fp.y) > dragDistance) { //It's a drag //Now check what direction the drag was //First check which axis if (Mathf.Abs(lp.x - fp.x) > Mathf.Abs(lp.y - fp.y)) { //If the horizontal movement is greater than the vertical movement... if ((lp.x>fp.x) && canShoot) //If the movement was to the right) { //Right move float x = (lp.x - fp.x) / Screen.height * factor; rigidbody.AddForce((new Vector3(x,10,16))*power); Debug.Log("right "+(lp.x-fp.x));//MOVE RIGHT CODE HERE canShoot = false; //rigidbody.AddForce((new Vector3((lp.x-fp.x)/30,10,16))*power); StartCoroutine(ReturnBall()); } else { //Left move float x = (lp.x - fp.x) / Screen.height * factor; rigidbody.AddForce((new Vector3(x,10,16))*power); Debug.Log("left "+(lp.x-fp.x));//MOVE LEFT CODE HERE canShoot = false; //rigidbody.AddForce(new Vector3((lp.x-fp.x)/30,10,16)*power); StartCoroutine(ReturnBall()); } } else { //the vertical movement is greater than the horizontal movement if (lp.y>fp.y) //If the movement was up { //Up move float y = (lp.y-fp.y)/Screen.height*factor; float x = (lp.x - fp.x) / Screen.height * factor; rigidbody.AddForce((new Vector3(x,y,16))*power); Debug.Log("up "+(lp.x-fp.x));//MOVE UP CODE HERE canShoot = false; //rigidbody.AddForce(new Vector3((lp.x-fp.x)/30,10,16)*power); StartCoroutine(ReturnBall()); } else { //Down move Debug.Log("down "+lp+" "+fp);//MOVE DOWN CODE HERE } } } else { //It's a tap Debug.Log("none");//TAP CODE HERE } } } } IEnumerator ReturnBall() { yield return new WaitForSeconds(5.0f); rigidbody.velocity = Vector3.zero; rigidbody.angularVelocity = Vector3.zero; transform.position = footballPos; canShoot = true; isKicked = false; } }
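    The script above stops at the straight kick, so the curl part below is only a sketch of one common approach, not code from the post: estimate how much the swipe bent from the first, middle and last stored points (fp, ip, lp), remember that amount when the ball is kicked, and keep applying a small sideways force while the ball is in flight. The curlAmount/curlStrength fields and the FixedUpdate body are hypothetical additions to the same MonoBehaviour, and the sign and scale will need hand tuning:

        private float curlAmount;          // set at kick time, e.g. curlAmount = ComputeCurlAmount(fp, ip, lp);
        public float curlStrength = 5f;    // hand-tuned multiplier

        // Signed 2D cross product of the two halves of the swipe:
        // 0 for a straight swipe, positive/negative depending on which way it curved.
        float ComputeCurlAmount(Vector3 first, Vector3 mid, Vector3 last)
        {
            Vector2 a = ((Vector2)(mid - first)).normalized;
            Vector2 b = ((Vector2)(last - mid)).normalized;
            return a.x * b.y - a.y * b.x;
        }

        void FixedUpdate()
        {
            if (!canShoot)   // ball has been kicked and is still in flight
            {
                // Push at right angles to the current velocity, in the ground plane, so the
                // trajectory bends the way the swipe did (flip the sign if it bends the wrong way).
                Vector3 side = Vector3.Cross(rigidbody.velocity, Vector3.up).normalized;
                rigidbody.AddForce(side * curlAmount * curlStrength, ForceMode.Force);
            }
        }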

    Read the article

  • vmware player xorg driver broke after ubuntu 12.04 LTS automatic system update on 06/07/2014

    - by user291828
    I am running Ubuntu 12.04 LTS as a virtual machine on a Windows 7 host using vmware player 6.0.2. On Sat. 06/07/2014, Ubuntu performed an automatic system update as it does regularly; however, after reboot, the display driver seems to be broken, which causes the screen to split into many identical panels. This pretty much rendered the VM unusable. Below is the log entry for that update from /var/log/apt/history.log. It appears that at least one of these updates has caused this problem. The exact same problem has been reported on the vmware forum as well, see https://communities.vmware.com/message/2388776#2388776 So far I have not been able to find a solution. Content in /var/log/apt/history.log Start-Date: 2014-06-07 15:04:46 Commandline: aptdaemon role='role-commit-packages' sender=':1.65' Install: linux-headers-3.2.0-64:amd64 (3.2.0-64.97), linux-image-3.2.0-64-generic:amd64 (3.2.0-64.97), linux-headers-3.2.0-64-generic:amd64 (3.2.0-64.97) Upgrade: iproute:amd64 (20111117-1ubuntu2.1, 20111117-1ubuntu2.3), libknewstuff3-4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libkdeclarative5:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libnepomukquery4a:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libgnutls-openssl27:amd64 (2.12.14-5ubuntu3.7, 2.12.14-5ubuntu3.8), libthreadweaver4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libkdecore5:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libnepomukutils4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libktexteditor4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), linux-generic:amd64 (3.2.0.63.75, 3.2.0.64.76), libkmediaplayer4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libkrosscore4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libgnutls26:amd64 (2.12.14-5ubuntu3.7, 2.12.14-5ubuntu3.8), libgnutls26:i386 (2.12.14-5ubuntu3.7, 2.12.14-5ubuntu3.8), libsolid4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libnepomuk4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libkdnssd4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libkparts4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), kdoctools:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libssl-dev:amd64 (1.0.1-4ubuntu5.13, 1.0.1-4ubuntu5.14), libssl-doc:amd64 (1.0.1-4ubuntu5.13, 1.0.1-4ubuntu5.14), libkidletime4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), linux-headers-generic:amd64 (3.2.0.63.75, 3.2.0.64.76), linux-image-generic:amd64 (3.2.0.63.75, 3.2.0.64.76), libkcmutils4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libkfile4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libkpty4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libkntlm4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libplasma3:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libkemoticons4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), kdelibs-bin:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libkdewebkit5:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libkjsembed4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libkio5:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libkjsapi4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), openssl:amd64 (1.0.1-4ubuntu5.13, 1.0.1-4ubuntu5.14), kdelibs5-data:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), linux-libc-dev:amd64 (3.2.0-63.95, 3.2.0-64.97), libkde3support4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libknotifyconfig4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), kdelibs5-plugins:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libkhtml5:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libkdeui5:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libkdesu5:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libssl1.0.0:amd64 (1.0.1-4ubuntu5.13, 1.0.1-4ubuntu5.14), libssl1.0.0:i386
(1.0.1-4ubuntu5.13, 1.0.1-4ubuntu5.14) End-Date: 2014-06-07 15:09:08
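    Not part of the original report, but a cheap way to confirm the suspicion that the 3.2.0-64 kernel and header packages are what broke the vmware Xorg driver is to boot the previous kernel, which the update will have left installed:

        # list the kernels still installed; 3.2.0-63 should be among them
        dpkg -l 'linux-image-3.2.0-*'

        # then reboot, hold Shift to bring up the GRUB menu, and pick the
        # 3.2.0-63 generic kernel from the submenu of older kernels

    If the display is fine on the old kernel, the regression is in the new kernel/driver combination; and if the guest uses the classic VMware Tools rather than open-vm-tools, re-running the Tools installer after a kernel update is usually needed anyway so that its kernel modules get rebuilt.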

    Read the article

  • Logging errors caused by exceptions deep in the application

    - by Kaleb Pederson
    What are best practices for logging deep within an application's source? Is it bad practice to have multiple event log entries for a single error? For example, let's say that I have an ETL system whose transform step involves: a transformer, pipeline, processing algorithm, and processing engine. In brief, the transformer takes in an input file, parses out records, and sends the records through the pipeline. The pipeline aggregates the results of the processing algorithm (which could do serial or parallel processing). The processing algorithm sends each record through one or more processing engines. So, I have at least four levels: Transformer - Pipeline - Algorithm - Engine. My code might then look something like the following: class Transformer { void Process(InputSource input) { try { var inRecords = _parser.Parse(input.Stream); var outRecords = _pipeline.Transform(inRecords); } catch (Exception ex) { var inner = new ProcessException(input, ex); _logger.Error("Unable to parse source " + input.Name, inner); throw inner; } } } class Pipeline { IEnumerable<Result> Transform(IEnumerable<Record> records) { // NOTE: no try/catch as I have no useful information to provide // at this point in the process var results = _algorithm.Process(records); // examine and do useful things with results return results; } } class Algorithm { IEnumerable<Result> Process(IEnumerable<Record> records) { var results = new List<Result>(); foreach (var engine in Engines) { foreach (var record in records) { try { engine.Process(record); } catch (Exception ex) { var inner = new EngineProcessingException(engine, record, ex); _logger.Error("Engine {0} unable to parse record {1}", engine, record); throw inner; } } } } } class Engine { Result Process(Record record) { for (int i=0; i<record.SubRecords.Count; ++i) { try { Validate(record.subRecords[i]); } catch (Exception ex) { var inner = new RecordValidationException(record, i, ex); _logger.Error( "Validation of subrecord {0} failed for record {1}", i, record ); } } } } There's a few important things to notice: A single error at the deepest level causes three log entries (ugly? DOS?) Thrown exceptions contain all important and useful information Logging only happens when failure to do so would cause loss of useful information at a lower level. Thoughts and concerns: I don't like having so many log entries for each error I don't want to lose important, useful data; the exceptions contain all the important information, but the stack trace is typically the only thing displayed besides the message. I can log at different levels (e.g., warning, informational) The higher level classes should be completely unaware of the structure of the lower-level exceptions (which may change as the different implementations are replaced). The information available at higher levels should not be passed to the lower levels. So, to restate the main questions: What are best practices for logging deep within an application's source? Is it bad practice to have multiple event log entries for a single error?
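    For what it's worth, one common pattern that addresses the "three log entries for one error" concern - offered here only as a sketch of an alternative, reusing the classes from the question - is to let the inner layers enrich and rethrow without logging, and log exactly once at the outermost boundary, walking the InnerException chain so none of the context carried by EngineProcessingException or RecordValidationException is lost:

        class Transformer
        {
            void Process(InputSource input)
            {
                try
                {
                    var inRecords = _parser.Parse(input.Stream);
                    var outRecords = _pipeline.Transform(inRecords);
                }
                catch (Exception ex)
                {
                    // Single log entry at the boundary; the nested exceptions thrown by
                    // Algorithm/Engine still arrive here with their context intact.
                    _logger.Error(Describe(ex), ex);
                    throw new ProcessException(input, ex);
                }
            }

            static string Describe(Exception ex)
            {
                // Flatten the inner-exception chain into one message so nothing is lost.
                var sb = new System.Text.StringBuilder();
                for (var e = ex; e != null; e = e.InnerException)
                {
                    sb.AppendLine(e.GetType().Name + ": " + e.Message);
                }
                return sb.ToString();
            }
        }

    The trade-off, as the question itself notes, is that the lower layers must put everything useful into the exceptions they throw, since nothing is written to the log until the boundary is reached.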

    Read the article

  • How do I get 5.1 surround sound working on an Acer Aspire 5738ZG?

    - by kbargais_LV
    I got a problem with sound. I tried everything but no results. :( I got 3 sound ports. my daemon: # This file is part of PulseAudio. # # PulseAudio is free software; you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # PulseAudio is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with PulseAudio; if not, write to the Free Software # Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 # USA. ## Configuration file for the PulseAudio daemon. See pulse-daemon.conf(5) for ## more information. Default values are commented out. Use either ; or # for ## commenting. ; daemonize = no ; fail = yes ; allow-module-loading = yes ; allow-exit = yes ; use-pid-file = yes ; system-instance = no ; local-server-type = user ; enable-shm = yes ; shm-size-bytes = 0 # setting this 0 will use the system-default, usually 64 MiB ; lock-memory = no ; cpu-limit = no ; high-priority = yes ; nice-level = -11 ; realtime-scheduling = yes ; realtime-priority = 5 ; exit-idle-time = 20 ; scache-idle-time = 20 ; dl-search-path = (depends on architecture) ; load-default-script-file = yes ; default-script-file = /etc/pulse/default.pa ; log-target = auto ; log-level = notice ; log-meta = no ; log-time = no ; log-backtrace = 0 resample-method = speex-float-1 ; enable-remixing = yes ; enable-lfe-remixing = no flat-volumes = no ; rlimit-fsize = -1 ; rlimit-data = -1 ; rlimit-stack = -1 ; rlimit-core = -1 ; rlimit-as = -1 ; rlimit-rss = -1 ; rlimit-nproc = -1 ; rlimit-nofile = 256 ; rlimit-memlock = -1 ; rlimit-locks = -1 ; rlimit-sigpending = -1 ; rlimit-msgqueue = -1 ; rlimit-nice = 31 ; rlimit-rtprio = 9 ; rlimit-rttime = 1000000 ; default-sample-format = s16le ; default-sample-rate = 44100 ; default-sample-channels = 6 ; default-channel-map = front-left,front-right default-fragments = 8 default-fragment-size-msec = 10 ; enable-deferred-volume = yes ; deferred-volume-safety-margin-usec = 8000 ; deferred-volume-extra-delay-usec = 0

    Read the article

  • Error 0x800f0922 installing .NET 3.5 on Windows 8

    - by Benjamin Nolan
    I'm trying to install .NET 3.5 on my Windows 8 box and it keeps throwing Error 0x800f0922 at me. From what I've read on answers.microsoft.com and StackOverflow I gather the easiest way to fix this is to perform a system refresh, however this will remove all software I've installed from discs. I've just moved house, so I'd rather not do that as I don't know where all the installation media actually are for a lot of my software, so if possible I'd prefer to track down where the problem is actually occurring. (Also, I have a LOT of software installed. It'd take me a long time to reinstall it all, and I unfortunately haven't got that time.) The on-demand error screen sends me to KB2734782 (can't link it as I'm <10 rep), which doesn't help much. When I run this DISM line from the StackOverflow post: Dism.exe /online /enable-feature /featurename:NetFX3 /All /Source:C:\Windows\WinSxS /LimitAccess I get the following output on the terminal: Microsoft Windows [Version 6.2.9200] (c) 2012 Microsoft Corporation. All rights reserved. C:\Windows\system32>Dism.exe /online /enable-feature /featurename:NetFX3 /All /Source:C:\Windows\WinSxS /LimitAccess Deployment Image Servicing and Management tool Version: 6.2.9200.16384 Image Version: 6.2.9200.16384 Enabling feature(s) [==========================100.0%==========================] Error: 0x800f0922 DISM failed. No operation was performed. For more information, review the log file. The DISM log file can be found at C:\Windows\Logs\DISM\dism.log C:\Windows\system32> Incidentally, it jumps straight from 0 to 100% and then sits on that line for about 5 minutes before the error line occurs. dism.log contains the following lines around that time: (Link to full logs is at bottom of post) 2013-07-02 00:56:58, Info DISM DISM.EXE: Succesfully registered commands for the provider: Edition Manager. 2013-07-02 00:56:58, Info DISM DISM Provider Store: PID=5768 TID=5780 Getting Provider DISM Package Manager - CDISMProviderStore::GetProvider 2013-07-02 00:56:58, Info DISM DISM Provider Store: PID=5768 TID=5780 Provider has previously been initialized. Returning the existing instance. - CDISMProviderStore::Internal_GetProvider 2013-07-02 00:56:58, Info DISM DISM Package Manager: PID=5768 TID=5780 Processing the top level command token(enable-feature). - CPackageManagerCLIHandler::Private_ValidateCmdLine 2013-07-02 00:56:58, Info DISM DISM Package Manager: PID=5768 TID=5780 Attempting to route to appropriate command handler. - CPackageManagerCLIHandler::ExecuteCmdLine 2013-07-02 00:56:58, Info DISM DISM Package Manager: PID=5768 TID=5780 Routing the command... 
- CPackageManagerCLIHandler::ExecuteCmdLine 2013-07-02 00:56:58, Info DISM DISM Package Manager: PID=5768 TID=5780 Encountered the option "featurename" with value "NetFX3" - CPackageManagerCLIHandler::Private_GetPackagesFromCommandLine 2013-07-02 00:56:58, Info DISM DISM Package Manager: PID=5768 TID=5780 Encountered an unknown option "featurename" with value "NetFX3" - CPackageManagerCLIHandler::Private_GetPackagesFromCommandLine 2013-07-02 00:56:58, Info DISM DISM Package Manager: PID=5768 TID=5780 Encountered the option "source" with value "C:\Windows\WinSxS" - CPackageManagerCLIHandler::Private_GetPackagesFromCommandLine 2013-07-02 00:56:58, Info DISM DISM Package Manager: PID=5768 TID=5780 Encountered an unknown option "source" with value "C:\Windows\WinSxS" - CPackageManagerCLIHandler::Private_GetPackagesFromCommandLine 2013-07-02 00:56:59, Info DISM DISM Package Manager: PID=5768 TID=5780 Initiating Changes on Package with values: 5, 7 - CDISMPackage::Internal_ChangePackageState 2013-07-02 00:56:59, Info DISM DISM Package Manager: PID=5768 TID=5780 CBS session options=0x20100! - CDISMPackageManager::Internal_Finalize 2013-07-02 01:00:27, Info DISM DISM Package Manager: PID=5768 TID=2420 Error in operation: (null) (CBS HRESULT=0x800f0922) - CCbsConUIHandler::Error 2013-07-02 01:00:27, Error DISM DISM Package Manager: PID=5768 TID=5780 Failed finalizing changes. - CDISMPackageManager::Internal_Finalize(hr:0x800f0922) 2013-07-02 01:00:27, Error DISM DISM Package Manager: PID=5768 TID=5780 Failed processing package changes with session options - CDISMPackageManager::ProcessChangesWithOptions(hr:0x800f0922) 2013-07-02 01:00:27, Error DISM DISM Package Manager: PID=5768 TID=5780 Failed ProcessChanges. - CPackageManagerCLIHandler::Private_ProcessFeatureChange(hr:0x800f0922) 2013-07-02 01:00:27, Error DISM DISM Package Manager: PID=5768 TID=5780 Failed while processing command enable-feature. - CPackageManagerCLIHandler::ExecuteCmdLine(hr:0x800f0922) 2013-07-02 01:00:27, Info DISM DISM Package Manager: PID=5768 TID=5780 Further logs for online package and feature related operations can be found at %WINDIR%\logs\CBS\cbs.log - CPackageManagerCLIHandler::ExecuteCmdLine 2013-07-02 01:00:27, Error DISM DISM.EXE: DISM Package Manager processed the command line but failed. HRESULT=800F0922 cbs.log has the following chunks around then which could be relevant: 2013-07-02 00:55:06, Info CBS Exec: This is a PSF Package. Job has been saved and we are returning to client. 2013-07-02 00:55:06, Info CSI 0000042d@2013/7/1:23:55:06.203 CSI Transaction @0xe2f5e59500 destroyed 2013-07-02 00:55:06, Info CBS Exec: DPX job state saved for one or more packages, aborting the staging and install of execution. 2013-07-02 00:55:06, Info CSI 0000042e@2013/7/1:23:55:06.207 CSI Transaction @0xe2f5e58480 destroyed 2013-07-02 00:55:06, Info CBS Perf: Stage chain complete. 2013-07-02 00:55:06, Info CBS Failed to stage execution chain. [HRESULT = 0x800f0816 - CBS_E_DPX_JOB_STATE_SAVED] 2013-07-02 00:55:06, Info CBS Failed to process single phase execution. 
[HRESULT = 0x800f0816 - CBS_E_DPX_JOB_STATE_SAVED] 2013-07-02 00:55:06, Info CBS WER: Failure is not worth reporting [HRESULT = 0x800f0816 - CBS_E_DPX_JOB_STATE_SAVED] 2013-07-02 00:55:06, Info CBS Reboot mark cleared and further down: 2013-07-02 00:59:19, Info CSI 000004e6 Begin executing advanced installer phase 38 (0x00000026) index 253 (0x00000000000000fd) (sequence 289) Old component: [l:0]"" New component: [ml:306{153},l:304{152}]"NetFx35CDF-CDF_GenericCommands, Culture=neutral, Version=6.2.9200.16384, PublicKeyToken=31bf3856ad364e35, ProcessorArchitecture=x86, versionScope=NonSxS" Install mode: install Installer ID: {81a34a10-4256-436a-89d6-794b97ca407c} Installer name: [15]"Generic Command" 2013-07-02 00:59:19, Info CSI 000004e7 Performing 1 operations; 1 are not lock/unlock and follow: (0) LockComponentPath (10): flags: 0 comp: {l:16 b:19fc6600b776ce01c91f0000fc07a816} pathid: {l:16 b:19fc6600b776ce01ca1f0000fc07a816} path: [l:214{107}]"\SystemRoot\WinSxS\x86_netfx35cdf-cdf_genericcommands_31bf3856ad364e35_6.2.9200.16384_none_0cec490be12fb858" pid: 7fc starttime: 130171962799582915 (0x01ce76b5e2626ec3) 2013-07-02 00:59:19, Info CSI 000004e8 Performing 1 operations; 1 are not lock/unlock and follow: (0) LockComponentPath (10): flags: 0 comp: {l:16 b:27236700b776ce01cb1f0000fc07a816} pathid: {l:16 b:27236700b776ce01cc1f0000fc07a816} path: [l:210{105}]"\SystemRoot\WinSxS\x86_netfx35cdf-csd_cdf_installer_31bf3856ad364e35_6.2.9200.16384_none_55072425fd5c3716" pid: 7fc starttime: 130171962799582915 (0x01ce76b5e2626ec3) 2013-07-02 00:59:19, Info CSI 000004e9 Calling generic command executable (sequence 1): [122]"C:\Windows\WinSxS\x86_netfx35cdf-csd_cdf_installer_31bf3856ad364e35_6.2.9200.16384_none_55072425fd5c3716\WFServicesReg.exe" CmdLine: [139]""C:\Windows\WinSxS\x86_netfx35cdf-csd_cdf_installer_31bf3856ad364e35_6.2.9200.16384_none_55072425fd5c3716\WFServicesReg.exe" /c /b /v /m /i" 2013-07-02 00:59:20, Info CSI 000004ea Performing 1 operations; 1 are not lock/unlock and follow: (0) LockComponentPath (10): flags: 0 comp: {l:16 b:bd790401b776ce01cd1f0000fc07a816} pathid: {l:16 b:bd790401b776ce01ce1f0000fc07a816} path: [l:234{117}]"\SystemRoot\WinSxS\x86_microsoft.windows.s..ation.badcomponents_31bf3856ad364e35_6.2.9200.16384_none_353ccb4c94858655" pid: 7fc starttime: 130171962799582915 (0x01ce76b5e2626ec3) 2013-07-02 00:59:20, Info CSI 000004eb Creating NT transaction (seq 27), objectname [6]"(null)" 2013-07-02 00:59:20, Info CSI 000004ec Created NT transaction (seq 27) result 0x00000000, handle @0x24b8 2013-07-02 00:59:20, Info CSI 000004ed@2013/7/1:23:59:20.933 Beginning NT transaction commit... 2013-07-02 00:59:22, Info CSI 000004ee@2013/7/1:23:59:22.065 CSI perf trace: CSIPERF:TXCOMMIT;1387723 2013-07-02 00:59:22, Error CSI 000004ef (F) Done with generic command 1; CreateProcess returned 0, CPAW returned S_OK Process exit code 255 (0x000000ff) resulted in success? FALSE Process output: [l:28479 [4096]"DDSet_Entry: WFServicesReg.exe DDSet_Status: CFxInstaller::CopyConfigFilesToTemp is64bit=0 DDSet_Status: CFileHelper::CopyConfigFilesToTempLocation DDSet_Status: CFxInstaller::SetupBaseComponents isInstall=1 DDSet_Status: CFxInstaller::SetupBaseComponents Calling SetupExtensions. isInstall=1 (0x000000FF -- The extended attributes are inconsistent. ??) And a bit further down: 2013-07-02 00:59:22, Error [0x018007] CSI 000004f0 (F) Failed execution of queue item Installer: Generic Command ({81a34a10-4256-436a-89d6-794b97ca407c}) with HRESULT HRESULT_FROM_WIN32(14109). 
Failure will not be ignored: A rollback will be initiated after all the operations in the installer queue are completed; installer is reliable (2)[gle=0x80004005] [...snip...] 2013-07-02 00:59:22, Info CBS Not able to add pending.xml.bad to Windows Error Report. [HRESULT = 0x80070002 - ERROR_FILE_NOT_FOUND] 2013-07-02 00:59:28, Info CSI 000004f1@2013/7/1:23:59:28.467 CSI Advanced installer perf trace: CSIPERF:AIDONE;{81a34a10-4256-436a-89d6-794b97ca407c};NetFx35CDF-CDF_GenericCommands, Version = 6.2.9200.16384, pA = PROCESSOR_ARCHITECTURE_INTEL (0), Culture neutral, VersionScope = 1 nonSxS, PublicKeyToken = {l:8 b:31bf3856ad364e35}, Type neutral, TypeName neutral, PublicKey neutral;10609242us 2013-07-02 00:59:28, Info CSI 000004f2 End executing advanced installer (sequence 289) Completion status: HRESULT_FROM_WIN32(ERROR_ADVANCED_INSTALLER_FAILED) [...snip...] 2013-07-02 01:00:26, Info CBS Exec: Cancelled pending transactions after rollback. [HRESULT = 0x00000000 - S_OK] 2013-07-02 01:00:26, Error CBS Exec: An error occurred while committing the transaction, the transaction could not be rolled back. [HRESULT = 0x800f0922 - CBS_E_INSTALLERS_FAILED] The full DISM and CBS logs are at http://ben.mu/files/dotnet35_dism_cbs.zip as the CBS log is nearly 167MB uncompressed. o.o dism.log gives the timeframe of where its errors occur--00:56:20ish to 01:00:22. Does anyone have any ideas what's actually causing the installation to fail, and if so how I can fix it? Please don't just say "Refresh the OS". :)
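    Not a guaranteed fix for 0x800f0922 (the classic "source files could not be found" error is 0x800f081f), but worth ruling out before anything drastic: on Windows 8 the .NET 3.5 payload ships in the sources\sxs folder of the installation media rather than in C:\Windows\WinSxS, so the commonly suggested variant of the command points the /Source switch at the media. The D: drive letter below is just an assumption for wherever the install DVD or mounted ISO sits:

        Dism.exe /online /enable-feature /featurename:NetFX3 /All /Source:D:\sources\sxs /LimitAccess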

    Read the article

  • ERR_INCOMPLETE_CHUNKED_ENCODING apache 2.4

    - by Bujanca Mihai
    I upgraded my Ubuntu server to 14.04 and Apache 2.4.7. Now my images don't load and console yields net::ERR_INCOMPLETE_CHUNKED_ENCODING. Also, I can sometimes see some of the images load for a little while (1 sec max) and then they disappear. .htaccess RewriteEngine On # Serve the favicon file from img folder RewriteCond %{REQUEST_URI} ^/favicon.ico$ RewriteRule ^(.*)$ /img/$1 [NC,L] # Redirect HTTP traffic to WWW subdomain RewriteCond %{HTTPS} off [NC] RewriteCond %{HTTP_HOST} !^www\. [NC] RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L] # Redirect HTTPS traffic to WWW subdomain RewriteCond %{HTTPS} on [NC] RewriteCond %{HTTP_HOST} !^www\. [NC] RewriteRule ^(.*)$ https://www.%{HTTP_HOST}/$1 [R=301,L] # Auto Versioning rules RewriteCond %{REQUEST_FILENAME} !-s RewriteRule ^(.*)\.[\d]+\.(css|js)$ $1.$2 [L] # Default Zend rewrite rules RewriteCond %{REQUEST_FILENAME} -s [OR] RewriteCond %{REQUEST_FILENAME} -l [OR] RewriteCond %{REQUEST_FILENAME} -d RewriteRule ^.*$ - [NC,L] RewriteRule ^.*$ index.php [NC,L] VHost <VirtualHost *:80> ServerAdmin admin@localhost ServerName localhost DocumentRoot /home/mihai/ARTD/www/public/website # Omit this in production environment SetEnv APPLICATION_ENV local <Directory /home/mihai/ARTD/www/public/website > Options Indexes FollowSymLinks MultiViews AllowOverride All #Order deny,allow #Allow from all Require all granted </Directory> <IfModule mod_php5.c> php_value memory_limit 128M php_value upload_max_filesize 20M php_value post_max_size 20M </IfModule> ErrorLog /var/log/apache2/ARTD-error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/ARTD-access.log combined </VirtualHost> <IfModule mod_ssl.c> <VirtualHost *:443> ServerAdmin admin@localhost ServerName localhost DocumentRoot /home/mihai/ARTD/www/public/website # Omit this in production environment SetEnv APPLICATION_ENV local <Directory /home/mihai/ARTD/www/public/website > Options Indexes FollowSymLinks MultiViews AllowOverride All #Order deny,allow #Allow from all Require all granted </Directory> <IfModule mod_php5.c> php_value memory_limit 128M php_value upload_max_filesize 20M php_value post_max_size 20M </IfModule> ErrorLog /var/log/apache2/ARTD-ssl-error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/ARTD.log combined # SSL Engine Switch: # Enable/Disable SSL for this virtual host. SSLEngine on # A self-signed (snakeoil) certificate can be created by installing # the ssl-cert package. See # /usr/share/doc/apache2.2-common/README.Debian.gz for more info. # If both key and certificate are stored in the same file, only the # SSLCertificateFile directive is needed. SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key # Server Certificate Chain: # Point SSLCertificateChainFile at a file containing the # concatenation of PEM encoded CA certificates which form the # certificate chain for the server certificate. Alternatively # the referenced file can be the same as SSLCertificateFile # when the CA certificates are directly appended to the server # certificate for convinience. 
#SSLCertificateChainFile /etc/apache2/ssl.crt/server-ca.crt # Certificate Authority (CA): # Set the CA certificate verification path where to find CA # certificates for client authentication or alternatively one # huge file containing all of them (file must be PEM encoded) # Note: Inside SSLCACertificatePath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCACertificatePath /etc/ssl/certs/ #SSLCACertificateFile /etc/apache2/ssl.crt/ca-bundle.crt # Certificate Revocation Lists (CRL): # Set the CA revocation path where to find CA CRLs for client # authentication or alternatively one huge file containing all # of them (file must be PEM encoded) # Note: Inside SSLCARevocationPath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCARevocationPath /etc/apache2/ssl.crl/ #SSLCARevocationFile /etc/apache2/ssl.crl/ca-bundle.crl # Client Authentication (Type): # Client certificate verification type and depth. Types are # none, optional, require and optional_no_ca. Depth is a # number which specifies how deeply to verify the certificate # issuer chain before deciding the certificate is not valid. #SSLVerifyClient require #SSLVerifyDepth 10 # Access Control: # With SSLRequire you can do per-directory access control based # on arbitrary complex boolean expressions containing server # variable checks and other lookup directives. The syntax is a # mixture between C and Perl. See the mod_ssl documentation # for more details. #<Location /> #SSLRequire ( %{SSL_CIPHER} !~ m/^(EXP|NULL)/ \ # and %{SSL_CLIENT_S_DN_O} eq "Snake Oil, Ltd." \ # and %{SSL_CLIENT_S_DN_OU} in {"Staff", "CA", "Dev"} \ # and %{TIME_WDAY} >= 1 and %{TIME_WDAY} <= 5 \ # and %{TIME_HOUR} >= 8 and %{TIME_HOUR} <= 20 ) \ # or %{REMOTE_ADDR} =~ m/^192\.76\.162\.[0-9]+$/ #</Location> # SSL Engine Options: # Set various options for the SSL engine. # o FakeBasicAuth: # Translate the client X.509 into a Basic Authorisation. This means that # the standard Auth/DBMAuth methods can be used for access control. The # user name is the `one line' version of the client's X.509 certificate. # Note that no password is obtained from the user. Every entry in the user # file needs this password: `xxj31ZMTZzkVA'. # o ExportCertData: # This exports two additional environment variables: SSL_CLIENT_CERT and # SSL_SERVER_CERT. These contain the PEM-encoded certificates of the # server (always existing) and the client (only existing when client # authentication is used). This can be used to import the certificates # into CGI scripts. # o StdEnvVars: # This exports the standard SSL/TLS related `SSL_*' environment variables. # Per default this exportation is switched off for performance reasons, # because the extraction step is an expensive operation and is usually # useless for serving static content. So one usually enables the # exportation for CGI and SSI requests only. # o StrictRequire: # This denies access when "SSLRequireSSL" or "SSLRequire" applied even # under a "Satisfy any" situation, i.e. when it applies access is denied # and no other module can change it. # o OptRenegotiate: # This enables optimized SSL connection renegotiation handling when SSL # directives are used in per-directory context. 
#SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire #<FilesMatch "\.(cgi|shtml|phtml|php)$"> # SSLOptions +StdEnvVars #</FilesMatch> # SSL Protocol Adjustments: # The safe and default but still SSL/TLS standard compliant shutdown # approach is that mod_ssl sends the close notify alert but doesn't wait for # the close notify alert from client. When you need a different shutdown # approach you can use one of the following variables: # o ssl-unclean-shutdown: # This forces an unclean shutdown when the connection is closed, i.e. no # SSL close notify alert is send or allowed to received. This violates # the SSL/TLS standard but is needed for some brain-dead browsers. Use # this when you receive I/O errors because of the standard approach where # mod_ssl sends the close notify alert. # o ssl-accurate-shutdown: # This forces an accurate shutdown when the connection is closed, i.e. a # SSL close notify alert is send and mod_ssl waits for the close notify # alert of the client. This is 100% SSL/TLS standard compliant, but in # practice often causes hanging connections with brain-dead browsers. Use # this only for browsers where you know that their SSL implementation # works correctly. # Notice: Most problems of broken clients are also related to the HTTP # keep-alive facility, so you usually additionally want to disable # keep-alive for those clients, too. Use variable "nokeepalive" for this. # Similarly, one has to force some clients to use HTTP/1.0 to workaround # their broken HTTP/1.1 implementation. Use variables "downgrade-1.0" and # "force-response-1.0" for this. #BrowserMatch ".*MSIE.*" \ # nokeepalive ssl-unclean-shutdown \ # downgrade-1.0 force-response-1.0 </VirtualHost> </IfModule> logs Apache/2.4.7 (Ubuntu) PHP/5.5.9-1ubuntu4.3 OpenSSL/1.0.1f (internal dummy connection) 127.0.0.1 - - [25/Aug/2014:13:09:53 +0300] "GET /img/header/top-nav-separator.png HTTP/1.1" 200 462 "https://localhost/art" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.132 Safari/537.36"
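    Not from the original post, but since ERR_INCOMPLETE_CHUNKED_ENCODING means the browser was promised more response data than it ever received, two cheap checks are worth doing before touching the rewrite rules: watch the error log this vhost already defines while reloading one of the broken images, and rule out a full filesystem, which can cause Apache to abort responses part-way through:

        # follow the vhost's own error log while reproducing the problem
        tail -f /var/log/apache2/ARTD-error.log

        # rule out a full filesystem
        df -h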

    Read the article

< Previous Page | 375 376 377 378 379 380 381 382 383 384 385 386  | Next Page >