Search Results

Search found 5682 results on 228 pages for 'lord of scripts'.

Page 5/228

  • xfce4 - 'Run Program' dialogue box does not appear when double clicking script

    - by Ron Paulfan
    Ubuntu 12.04, Xfce4. On my previous Ubuntu release, when I double-clicked a script I got a small dialogue window asking whether I wanted to run the script as a program or run it in a terminal. Since upgrading, I have never seen that window. I have made sure the 'Allow executing as a program' option is enabled, and the script works if I run it from a terminal; I just never get the prompt. Any ideas?

    Read the article

  • perl scripts stdin/pipe reading problem [closed]

    - by user4541
    I have 2 scripts for a task. The 1st outputs lines of data (terminated with CR/LF) to STDOUT every now and then. The 2nd keeps reading data from STDIN for further processing, roughly like this: use strict; my $dataline; while(1) { $dataline = ""; $dataline = <STDIN>; until( $dataline ne "") { sleep(1); $dataline = <STDIN>; } # further processing with a non-empty data line follows } print "quitting...\n"; I pipe the output of the 1st script into the 2nd as follows: perl scrt1 | perl scpt2. The problem is that the 2nd script only seems to receive the initial load of lines from the 1st script and then nothing more once new data stops arriving. Can anybody who has run into similar issues help? Thanks.
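
    For illustration only, here is a minimal sketch of the same reading pattern written in Python (not the original Perl; it assumes the producer flushes each line): reading STDIN line by line simply blocks until the next line arrives, so no sleep/poll loop is needed.

        import sys

        # Each iteration blocks until a complete line arrives from the pipe,
        # and the loop ends cleanly when the producer closes its end.
        for line in sys.stdin:
            line = line.rstrip("\r\n")
            if not line:
                continue  # skip empty lines
            # further processing with a non-empty data line follows
            print("processing:", line)

        print("quitting...")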

    Read the article

  • Kernel Panic: line 61: can't open /scripts/functions

    - by Pavlos G.
    I'm facing a problem with all the kernels installed on my system (Ubuntu 10.10 64-bit). Installed kernel versions: 2.6.32-21 up to 2.6.35.23. Booting halts with the following error: init: .: line 61: can't open '/scripts/functions' Kernel panic - not syncing: Attempted to kill init! Pid: 1, comm: init not tainted Only the first one (2.6.32-21) was working up until now. I asked for help at ubuntuforums.org and I was told to check whether there was any problem with my graphics card (ATI Radeon). I uninstalled all the ATI-related packages as well as all the unnecessary xserver-xorg-video-* drivers that were installed. I then rebooted, and from then on ALL of the kernels halt with the same error (i.e. it didn't fix the problematic kernels, it just broke the only one that was working...). Any ideas on what I should try next? Thanks in advance. Pavlos.

    Read the article

  • Unity3d: Box collider attached to animated FBX models through scripts at run-time have wrong dimension

    - by Heisenbug
    I have several scripts attached to static and non-static models in my scene. All models are instantiated at run-time (and must be instantiated at run-time because I'm procedurally building the scene). I'd like to add a BoxCollider or SphereCollider to my FBX models at runtime. With non-animated models it works by simply requiring the BoxCollider component from the script attached to my GameObject; the BoxCollider is created with the right dimensions. Something like: [RequireComponent(typeof(BoxCollider))] public class AScript : MonoBehaviour { } If I do the same thing with animated models, the BoxCollider is created with the wrong dimensions. For example, if I attach the script above to the penelopeFBX model from the standard assets, the BoxCollider is created smaller than the mesh itself. How can I solve this?

    Read the article

  • Power View Infrastructure Configuration and Installation: Step-by-Step and Scripts

    This document contains step-by-step instructions for installing and testing the Microsoft Business Intelligence infrastructure based on SQL Server 2012 and SharePoint 2010, focused on SQL Server 2012 Reporting Services with Power View. This document describes how to completely install the following scenarios: a standalone instance of SharePoint and Power View with all required components; a new SharePoint farm with the Power View infrastructure; a server with the Power View infrastructure joined to an existing SharePoint farm; installation of client tools on a separate computer; installation of a tabular instance of Analysis Services on a separate instance; and configuration of single sign-on access for double-hop scenarios with and without Kerberos. Scripts are provided for most of these scenarios.

    Read the article

  • Python simulation-scripts architecture

    - by Beastcraft
    Situation: I have some scripts that simulate user activity on the desktop. I've defined a few cases (workflows) and implemented them in Python. I've also written some classes for interacting with the user's software (e.g. web browser etc.). Problem: I'm a total beginner in software design/architecture (coding isn't a problem). How could I structure what I described above? Should I provide a library which contains all the workflows as functions, or a separate class/module etc. for each workflow? I want to keep the workflows simple; the complexity should be hidden in the classes for interacting with the user's software. Are there any papers/books I could read about this, or could you provide some tips? Kind regards, B
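
    One possible structure, as a minimal sketch (all class, module, and URL names below are made up for illustration): keep one small class per workflow with a common run() entry point, and keep the messy interaction details inside the wrapper classes.

        # browser.py (hypothetical) - wrappers around the user's software
        class Browser:
            """Hides the details of driving the web browser."""
            def open(self, url):
                print("opening", url)

            def click(self, selector):
                print("clicking", selector)

        # workflows.py (hypothetical) - one small class per simulated activity
        class Workflow:
            """Base class: a workflow only orchestrates calls to the wrappers."""
            def run(self):
                raise NotImplementedError

        class CheckWebmail(Workflow):
            def __init__(self, browser):
                self.browser = browser

            def run(self):
                self.browser.open("https://mail.example.com")
                self.browser.click("#inbox")

        # runner.py (hypothetical) - pick workflows from a schedule or at random
        if __name__ == "__main__":
            browser = Browser()
            for workflow in (CheckWebmail(browser),):
                workflow.run()

    Each workflow then reads as a short sequence of high-level calls, and new workflows can be added without touching the wrapper classes.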

    Read the article

  • Increase Servers folder depth limit for running scripts?

    - by MeltingDog
    I have a CMS on my site that utilises TinyMCE (the WYSIWYG text editor). The issue is that TinyMCE cannot browse for files (example: images) on the web server. I get the error: Error 324 (net::ERR_EMPTY_RESPONSE): The server closed the connection without sending any data. I have been told this may be occurring because the server is configured to limit the folder depth for running scripts. Unfortunately I am primarily a front end developer so I am not really sure how to go about changing/viewing this. I have access to WHM and cPanel. Does anyone know how to adjust this?

    Read the article

  • Writing scripts for Visual Studio project

    - by oillio
    What is the best way to write and run small scripts and tasks that are specific to a particular .Net project? Such things as configuring a database or confirming proper connections to servers. In Ruby, I would build rake tasks for this sort of thing. I am currently using unit tests for these tasks as they are easy to run within VS and they have access to all the necessary libraries and project code. However, this is not really their intended purpose and, with the dropping of Test Lists in VS 2012, it does not work nearly as well as it used to. Is there a better solution than writing a console project to handle these little code snippets I need to run periodically?

    Read the article

  • CLI Shared Hosting Management (scripts to manage web users and hosts) [on hold]

    - by aularon
    I currently administer two servers. The first has no control panel: I create the directory structure, set permissions, and configure the different pieces (users, php-fpm pools, nginx hosts, ...) for each site by hand. As more clients came in, I set up ISPConfig on my second server, where everything is easily handled through the ISPConfig web interface. However, I am searching for a CLI-based solution, i.e. a set of scripts to create and manage hosts. Basically, a way to control ISPConfig from the command line (so I can use it over SSH) would be a good start. Does anybody know of such an effort? I searched, but all I found were web-based solutions. Thanks.

    Read the article

  • Licensing my own dh_* scripts

    - by avnik
    I wrote a little helper script for my own build system. This script uses debhelper's Dh_Lib to inject pre/post install fragments. use strict; use Debian::Debhelper::Dh_Lib; foreach my $package (@{$dh{DOPACKAGES}}) { autoscript($package, "postinst", "postinst-rock2deb"); autoscript($package, "postrm", "postrm-rock2deb"); } Should it be GPL'ed because it uses the GPL'ed Dh_Lib, or is it uncopyrightable because there is no other way to do it? The other scripts in my build system are MIT/X licensed, and I prefer to stay with MIT/X when possible. (Question moved from Stack Overflow as suggested by a few SG members.)

    Read the article

  • PHP scripts owned by www-data

    - by matnagel
    I always run PHP scripts on a dedicated server as the user "webroot". It would be easier for coding and administration if the scripts were owned by www-data, the Apache2 user. It also feels simpler and cleaner. There is no FTP on this box and there are no other users or sites. Why not have the PHP scripts owned by www-data? If there is anything against it, what is the worst that can happen?

    Read the article

  • PHP crashing during oAuth scripts

    - by FunkyChicken
    I just installed Nginx 1.2.4 and PHP 5.4.0 (from svn) (php-fpm) on CentOS 5.8 (64-bit). The problem I have is that PHP crashes the moment I run any social oAuth scripts. I have tried to log into Facebook, Twitter and Google with various scripts that I know work on my other servers. When I load the scripts I get a 502 error from Nginx, and I find these errors in the logs: in the php-fpm log: WARNING: [pool www] child 23821 exited on signal 11 (SIGSEGV) after 1132.862984 seconds from start in the nginx log: ERROR: recv() failed (104: Connection reset by peer) while reading response header from upstream From what I can see, it goes wrong when PHP tries to make a request to any of the oAuth servers. https://github.com/mahmudahsan/PHP-SDK-3.0---Graph-API-base-Facebook-Connect-Tutorial-Source for example is one of the scripts that works perfectly on my other machines, but causes PHP to crash. I found http://stackoverflow.com/questions/3616191/nginx-php-fpm-502-bad-gateway which seems to be a similar problem, but I cannot find a way to solve it. +++ UPDATE +++ I have now done some debugging in one of the scripts that is playing up. If you go to line 808 http://pastebin.com/gSnzRtXb it runs the curl_exec() command. When that is run, it crashes. If I put echo 'test'; exit; just above that line, it echoes correctly; if I put it below that line, PHP crashes. That means it is line 808 which causes the crash. So I started to dig deeper into that request from the Facebook script to see what values the $opts array contains at line 806. Output of that array is: http://pastebin.com/Cq9ffd3R What the problem is, I still have no clue :(

    Read the article

  • Where should custom vmware-tools scripts go

    - by Cylindric
    I have installed the VMware Tools into a test Ubuntu guest, and it has created the standard scripts as expected: poweroff-vm-default poweron-vm-default resume-vm-default suspend-vm-default I add some custom actions to the scripts, but it says at the top of each file: ########################################################################## # DO NOT modify this file directly as it will be overwritten the next # time the VMware Tools are installed. ########################################################################## So where should the custom scripts go, if I'm not supposed to modify these ones? scriptsdir="`dirname $0`/scripts/`basename $0`.d" if [ -d "$scriptsdir" ]; then for scriptfile in "$scriptsdir"/*; do [ -x "$scriptfile" ] && "$scriptfile" poweron-vm done fi

    Read the article

  • Create and Track Your Own License Keys with PowerShell

    - by BuckWoody
    SQL Server used to have a cool little tool that would let you track your licenses. Microsoft didn’t use it to limit your system or anything, it was just a place on the server where you could record that this system used this license key. I miss those days – we don’t track that any more, and I want to make sure I’m up to date on my licensing, so I made my own. Now, there are a LOT of ways you could do this. You could add an extended property in SQL Server, add a table to a tracking database, use a text file, track it somewhere else, whatever. This is just the route I chose; if you want to use some other method, feel free. Just sharing here. Warning: Serious problems might occur if you modify the registry incorrectly by using Registry Editor or by using another method. These problems might require that you reinstall the operating system. Microsoft cannot guarantee that these problems can be solved. Modify the registry at your own risk. And this is REALLY important. I include a disclaimer at the end of my scripts, but in this case you’re modifying your registry, and that could be EXTREMELY dangerous – only do this on a test server – and I’m just showing you how I did mine. It isn’t an endorsement or anything like that, and this is a “Buck Woody” thing, NOT a Microsoft thing. See this link first, and then you can read on. OK, here’s my script: # Track your own licenses # Write a New Key to be the License Location mkdir HKCU:\SOFTWARE\Buck   # Write the variables - one sets the type, the other sets the number, and the last one holds the key New-ItemProperty HKCU:\SOFTWARE\Buck -name "SQLServerLicenseType" -value "Processor" # Notice the Dword value here - this one is a number so it needs that. Keep this on one line! New-ItemProperty HKCU:\SOFTWARE\Buck -name "SQLServerLicenseNumber" -propertytype DWord -value 4 New-ItemProperty HKCU:\SOFTWARE\Buck -name "SQLServerLicenseKey" -value "ABCD1234"   # Read them all $LicenseKey = Get-Item HKCU:\Software\Buck $Licenses = Get-ItemProperty $LicenseKey.PSPath foreach ($License in $LicenseKey.Property) { $License + "=" + $Licenses.$License }   Script Disclaimer, for people who need to be told this sort of thing: Never trust any script, including those that you find here, until you understand exactly what it does and how it will act on your systems. Always check the script on a test system or Virtual Machine, not a production system. Yes, there are always multiple ways to do things, and this script may not work in every situation, for everything. It’s just a script, people. All scripts on this site are performed by a professional stunt driver on a closed course. Your mileage may vary. Void where prohibited. Offer good for a limited time only. Keep out of reach of small children. Do not operate heavy machinery while using this script. If you experience blurry vision, indigestion or diarrhea during the operation of this script, see a physician immediately.

    Read the article

  • Creating New Scripts Dynamically in Lua

    - by bazola
    Right now this is just a crazy idea that I had, but I was able to implement the code and get it working properly. I am not entirely sure of what the use cases would be just yet. What this code does is create a new Lua script file in the project directory. The ScriptWriter takes as arguments the file name, a table containing any arguments that the script should take when created, and a table containing any instance variables to create by default. My plan is to extend this code to create new functions based on inputs sent in during its creation as well. What makes this cool is that the new file is both generated and loaded dynamically on the fly. Theoretically you could get this code to generate and load any script imaginable. One use case I can think of is an AI that creates scripts to map out it's functions, and creates new scripts for new situations or environments. At this point, this is all theoretical, though. Here is the test code that is creating the new script and then immediately loading it and calling functions from it: function Card:doScriptWriterThing() local scriptName = "ScriptIAmMaking" local scripter = scriptWriter:new(scriptName, {"argumentName"}, {name = "'test'", one = 1}) scripter:makeFileForLoadedSettings() local loadedScript = require (scriptName) local scriptInstance = loadedScript:new("sayThis") print(scriptInstance:get_name()) --will print test print(scriptInstance:get_one()) -- will print 1 scriptInstance:set_one(10000) print(scriptInstance:get_one()) -- will print 10000 print(scriptInstance:get_argumentName()) -- will print sayThis scriptInstance:set_argumentName("saySomethingElse") print(scriptInstance:get_argumentName()) --will print saySomethingElse end Here is ScriptWriter.lua local ScriptWriter = {} local twoSpaceIndent = " " local equalsWithSpaces = " = " local newLine = "\n" --scriptNameToCreate must be a string --argumentsForNew and instanceVariablesToCreate must be tables and not nil function ScriptWriter:new(scriptNameToCreate, argumentsForNew, instanceVariablesToCreate) local instance = setmetatable({}, { __index = self }) instance.name = scriptNameToCreate instance.newArguments = argumentsForNew instance.instanceVariables = instanceVariablesToCreate instance.stringList = {} return instance end function ScriptWriter:makeFileForLoadedSettings() self:buildInstanceMetatable() self:buildInstanceCreationMethod() self:buildSettersAndGetters() self:buildReturn() self:writeStringsToFile() end --very first line of any script that will have instances function ScriptWriter:buildInstanceMetatable() table.insert(self.stringList, "local " .. self.name .. " = {}" .. newLine) table.insert(self.stringList, newLine) end --every script made this way needs a new method to create its instances function ScriptWriter:buildInstanceCreationMethod() --new() function declaration table.insert(self.stringList, ("function " .. self.name .. ":new(")) self:buildNewArguments() table.insert(self.stringList, ")" .. newLine) --first line inside :new() function table.insert(self.stringList, twoSpaceIndent .. "local instance = setmetatable({}, { __index = self })" .. newLine) --add designated arguments inside :new() self:buildNewArgumentVariables() --create the instance variables with the loaded values for key,value in pairs(self.instanceVariables) do table.insert(self.stringList, twoSpaceIndent .. "instance." .. key .. equalsWithSpaces .. value .. newLine) end --close the :new() function table.insert(self.stringList, twoSpaceIndent .. "return instance" .. 
newLine) table.insert(self.stringList, "end" .. newLine) table.insert(self.stringList, newLine) end function ScriptWriter:buildNewArguments() --if there are arguments for :new(), add them for key,value in ipairs(self.newArguments) do table.insert(self.stringList, value) table.insert(self.stringList, ", ") end if next(self.newArguments) ~= nil then --makes sure the table is not empty first table.remove(self.stringList) --remove the very last element, which will be the extra ", " end end function ScriptWriter:buildNewArgumentVariables() --add the designated arguments to :new() for key, value in ipairs(self.newArguments) do table.insert(self.stringList, twoSpaceIndent .. "instance." .. value .. equalsWithSpaces .. value .. newLine) end end --the instance variables need separate code because their names have to be the key and not the argument name function ScriptWriter:buildSettersAndGetters() for key,value in ipairs(self.newArguments) do self:buildArgumentSetter(value) self:buildArgumentGetter(value) table.insert(self.stringList, newLine) end for key,value in pairs(self.instanceVariables) do self:buildInstanceVariableSetter(key, value) self:buildInstanceVariableGetter(key, value) table.insert(self.stringList, newLine) end end --code for arguments passed in function ScriptWriter:buildArgumentSetter(variable) table.insert(self.stringList, "function " .. self.name .. ":set_" .. variable .. "(newValue)" .. newLine) table.insert(self.stringList, twoSpaceIndent .. "self." .. variable .. equalsWithSpaces .. "newValue" .. newLine) table.insert(self.stringList, "end" .. newLine) end function ScriptWriter:buildArgumentGetter(variable) table.insert(self.stringList, "function " .. self.name .. ":get_" .. variable .. "()" .. newLine) table.insert(self.stringList, twoSpaceIndent .. "return " .. "self." .. variable .. newLine) table.insert(self.stringList, "end" .. newLine) end --code for instance variable values passed in function ScriptWriter:buildInstanceVariableSetter(key, variable) table.insert(self.stringList, "function " .. self.name .. ":set_" .. key .. "(newValue)" .. newLine) table.insert(self.stringList, twoSpaceIndent .. "self." .. key .. equalsWithSpaces .. "newValue" .. newLine) table.insert(self.stringList, "end" .. newLine) end function ScriptWriter:buildInstanceVariableGetter(key, variable) table.insert(self.stringList, "function " .. self.name .. ":get_" .. key .. "()" .. newLine) table.insert(self.stringList, twoSpaceIndent .. "return " .. "self." .. key .. newLine) table.insert(self.stringList, "end" .. newLine) end --last line of any script that will have instances function ScriptWriter:buildReturn() table.insert(self.stringList, "return " .. self.name) end function ScriptWriter:writeStringsToFile() local fileName = (self.name .. 
".lua") file = io.open(fileName, 'w') for key,value in ipairs(self.stringList) do file:write(value) end file:close() end return ScriptWriter And here is what the code provided will generate: local ScriptIAmMaking = {} function ScriptIAmMaking:new(argumentName) local instance = setmetatable({}, { __index = self }) instance.argumentName = argumentName instance.name = 'test' instance.one = 1 return instance end function ScriptIAmMaking:set_argumentName(newValue) self.argumentName = newValue end function ScriptIAmMaking:get_argumentName() return self.argumentName end function ScriptIAmMaking:set_name(newValue) self.name = newValue end function ScriptIAmMaking:get_name() return self.name end function ScriptIAmMaking:set_one(newValue) self.one = newValue end function ScriptIAmMaking:get_one() return self.one end return ScriptIAmMaking All of this is generated with these calls: local scripter = scriptWriter:new(scriptName, {"argumentName"}, {name = "'test'", one = 1}) scripter:makeFileForLoadedSettings() I am not sure if I am correct that this could be useful in certain situations. What I am looking for is feedback on the readability of the code, and following Lua best practices. I would also love to hear whether this approach is a valid one, and whether the way that I have done things will be extensible.

    Read the article

  • How should I set up protection for the database against sql injection when all the php scripts are flawed?

    - by Tchalvak
    I've inherited a PHP web app that is very insecure, with a history of SQL injection. I can't fix the scripts immediately; I need them running to keep the website up, and there are too many PHP scripts to deal with from the PHP end first. I do, however, have full control over the server and the software on the server, including full control over the MySQL database and its users. Let's estimate it at something like 300 scripts overall, 40 semi-private scripts, and 20 private/secure scripts. So my question is: how best to go about securing the data, with the implicit assumption that SQL injection from the PHP side (e.g. somewhere in that list of 300 scripts) is inevitable? My first-draft plan is to create multiple tiers of differently permissioned users in the MySQL database. In this way I can secure the data and scripts most in need of securing first (the "private/secure" category), then the second tier of database tables and scripts ("semi-private"), and finally deal with the security of the rest of the PHP app overall (with the result of finally securing the database tables that essentially deal with "public" information, e.g. stuff that even just viewing the homepage requires). So, 3 database users (public, semi-private, and secure), with a different user connecting for each of three different groups of scripts (the secure scripts, the semi-private scripts, and the public scripts). In this way, I can prevent all access to "secure" from "public" or from "semi-private", and to "semi-private" from "public". Are there other alternatives that I should look into? If a tiered access system is the way to go, what approaches are best?

    Read the article

  • Parsing scripts that use curly braces

    - by Keikoku
    To give an idea of what I'm doing: I am writing a Python parser that will parse DirectX .x text files. Although I'm writing it in Python, I'm looking for general algorithms for dealing with this sort of parsing. .x files define data using templates. The format of a template is template_name { [some_data] } The goal is to parse the file line by line and, whenever I come across a template, deal with it accordingly. My initial approach was to check whether a line contains an opening or closing brace. If it's an open brace, then I check what the template name is. The catch here is that the open brace doesn't have to occur on the same line as the template name; it could just as well appear on its own on the following line. So if I were to use my "open brace exists" criterion, it won't work for files that use the latter format. A lot of languages also use curly braces (though I'm not sure when people would be parsing the scripts themselves), so I was wondering if anyone knows how to reliably get the template name (in some other languages it could just as well be a function name, though there aren't any keywords to look for).
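
    A minimal sketch of one common way around this, in Python (the regex and function names are just for illustration): tokenize the input instead of scanning it line by line, remember the last identifier seen, and treat it as the template name whenever an opening brace turns up.

        import re

        # Braces and "words" become tokens, so "Mesh {" and "Mesh" followed by
        # "{" on the next line are handled exactly the same way.
        TOKEN = re.compile(r"[{}]|[^\s{}]+")

        def template_names(text):
            names = []
            last_word = None
            for token in TOKEN.findall(text):
                if token == "{":
                    names.append(last_word)  # the word before '{' names the block
                elif token != "}":
                    last_word = token
            return names

        sample = "Mesh\n{\n  Frame { 1.0; 2.0; 3.0;; }\n}"
        print(template_names(sample))  # ['Mesh', 'Frame']

    The same idea works for other brace-based formats, as long as the name of a block is always the last token before its opening brace.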

    Read the article

  • Run database checks but omit large tables or filegroups - New option in Ola Hallengren's Scripts

    - by Greg Low
    One of the things I've always wanted in DBCC CHECKDB is the option to omit particular tables from the check. The situation I often see is that companies with large databases have only one or two very large tables. They want to run a DBCC CHECKDB on the database to check everything except those couple of tables due to time constraints. I posted a request on the Connect site about this some time ago: https://connect.microsoft.com/SQLServer/feedback/details/611164/dbcc-checkdb-omit-tables-option The workaround from the product team was that you could script out the checks that you did want to carry out, rather than omitting the ones that you didn't. I didn't overly like this as a workaround as clients often had a very large number of objects that they did want to check and only one or two that they didn't. I've always been impressed with the work that our buddy Ola Hallengren has done on his maintenance scripts. He pinged me recently about my old Connect item and said he was going to implement something similar. The good news is that it's available now. Here are some examples he provided of the newly-supported syntax: EXECUTE dbo.DatabaseIntegrityCheck @Databases = 'AdventureWorks', @CheckCommands = 'CHECKDB' EXECUTE dbo.DatabaseIntegrityCheck @Databases = 'AdventureWorks', @CheckCommands = 'CHECKALLOC,CHECKTABLE,CHECKCATALOG', @Objects = 'AdventureWorks.Person.Address' EXECUTE dbo.DatabaseIntegrityCheck @Databases = 'AdventureWorks', @CheckCommands = 'CHECKALLOC,CHECKTABLE,CHECKCATALOG', @Objects = 'ALL_OBJECTS,-AdventureWorks.Person.Address' EXECUTE dbo.DatabaseIntegrityCheck @Databases = 'AdventureWorks', @CheckCommands = 'CHECKFILEGROUP,CHECKCATALOG', @FileGroups = 'AdventureWorks.PRIMARY' EXECUTE dbo.DatabaseIntegrityCheck @Databases = 'AdventureWorks', @CheckCommands = 'CHECKFILEGROUP,CHECKCATALOG', @FileGroups = 'ALL_FILEGROUPS,-AdventureWorks.PRIMARY' Note the syntax to omit an object from the list of objects and the option to omit one filegroup. Nice! Thanks Ola! You'll find details here: http://ola.hallengren.com/

    Read the article

  • How to start WebLogic Server using default scripts?

    - by Luz Mestre-Oracle
    There are a few common issues reported when starting WebLogic Server using scripts: 1. The user is not able to access the WebLogic console. 2. After a few days/hours WebLogic Server stops abruptly. 3. When the user closes PuTTY, they are not able to connect to WebLogic Server anymore. 4. When the user closes the Windows command prompt, they are not able to connect to WebLogic Server anymore. 5. WebLogic is started using startManagedWebLogic.cmd/startManagedWebLogic.sh. By default, WebLogic Server does not run in background mode, so after you close the window the process finishes as well. On Linux/Unix-based platforms, you need to use: nohup ./startManagedWebLogic.sh <Server> <URL> & On Windows platforms, you need to start Managed Servers using Windows services: How to Install MS Windows Services For FMW 11g WebLogic Domain Admin and Managed Servers (Doc ID 1060058.1) http://docs.oracle.com/cd/E23943_01/web.1111/e13708/winservice.htm There are a few more reasons that could cause similar symptoms, such as a JVM crash or signals sent by the operating system. But the above is the first place to start. Enjoy!

    Read the article

  • Executing AJAX scripts within an AJAX response

    - by KcYxA
    Hi there, I'm having trouble using AJAX page updates along with other AJAX-driven scripts. During a normal page load, the scripts (picture scrolling and picture thumbnails) work fine. But when I update a page with AJAX, these scripts (loaded in the header of the initial page load) stop working. I am wondering if this is specific to these scripts, so that I need to look into them more deeply to resolve/rewrite them, or if I am missing something more generic about combining AJAX page updates with scripts that the returned code relies on. Embedded JavaScript runs fine. Thanks for your ideas! PS: The scripts I am using from www.dynamicdrive.com are: 1) stepcarousel (http://www.dynamicdrive.com/dynamicindex4/stepcarousel.htm) and 2) thumbnailviewer (http://www.dynamicdrive.com/dynamicindex4/thumbnail.htm) from the

    Read the article

  • SQL SERVER – Fix : Error : 8501 MSDTC on server is unavailable. Changed database context to publishe

    - by pinaldave
    While configuring replication on one of the servers, I received the following error. This is a very common error and the solution is even simpler. MSDTC on server is unavailable. Changed database context to publisherdatabase. (Microsoft SQL Server, Error: 8501) Solution: Enable the "Distributed Transaction Coordinator" service in Windows. Method 1: Click on Start–>Control Panel->Administrative Tools->Services Select the service "Distributed Transaction Coordinator" Right-click on the service and choose "Start" Method 2: Type services.msc in the run command box Select "Services" manager; Hit Enter Select the service "Distributed Transaction Coordinator" Right-click on the service and choose "Start" Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Replication

    Read the article

  • SQL SERVER – Generate Database Script for SQL Azure

    - by pinaldave
    When talking about SQL Azure, the common complaint I hear is that the script generated from a stand-alone SQL Server database is not compatible with SQL Azure. This was true for some time, but not any more. If you have SQL Server 2008 R2 installed, you can follow the guideline below to generate a script which is compatible with SQL Azure. As the above images are very clear, I will not write more about them. SQL Azure does not support filegroups. Let us generate the script for a table created on the PRIMARY filegroup for stand-alone SQL Server and compare it with the script generated for SQL Azure. You can clearly see that there is no filegroup in the code generated for SQL Azure. Give it a try, and please post a comment here about what you think. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Add-On, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Azure

    Read the article

  • SQL SERVER – Server Side Paging in SQL Server 2011 – A Better Alternative

    - by pinaldave
    Ranking has improved considerably from SQL Server 2000 to SQL Server 2005/2008 to SQL Server 2011. Here is the blog article where I wrote about the SQL Server 2005/2008 paging method: SQL SERVER – 2005 T-SQL Paging Query Technique Comparison (OVER and ROW_NUMBER()) – CTE vs. Derived Table. One can achieve this using the OVER clause and the ROW_NUMBER() function. Now SQL Server 2011 has come up with new syntax for paging. Here is how one can easily achieve it. USE AdventureWorks2008R2 GO DECLARE @RowsPerPage INT = 10, @PageNumber INT = 5 SELECT * FROM Sales.SalesOrderDetail ORDER BY SalesOrderDetailID OFFSET @PageNumber*@RowsPerPage ROWS FETCH NEXT 10 ROWS ONLY GO I consider it a good enhancement in terms of T-SQL. I am sure many developers have been waiting for this feature for a long time. We will consider performance differences in future posts. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – Quick Look at SQL Server Configuration for Performance Indications

    - by pinaldave
    Earlier I wrote SQL SERVER – Beginning SQL Server: One Step at a Time – SQL Server Magazine. That was the first article in the series on my real-world performance tuning experience. I have now written the second part of the same series; read it over here: Quick Look at SQL Server Configuration for Performance Indications. In this second part I talk about two types of clients: 1) those who want instant results and 2) those who want the right results. It is really fun to work with both kinds of clients. I talk about the various configuration options I look at when I try to give a very early opinion about SQL Server performance. There are eight configuration settings that I give a quick look at before I start talking about performance. Head over to the original article here: Quick Look at SQL Server Configuration for Performance Indications. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article
