Search Results

Search found 6110 results on 245 pages for 'graph databases'.


  • Backup those keys, citizen

    - by BuckWoody
    Periodically I back up the keys within my servers and databases, and when I do, I blog a reminder here. This should be part of your standard backup rotation – the keys should be backed up when they are created and again whenever they change, so you always have a current copy at hand. The first key you need to back up is the Service Master Key, which each instance already has built in. You do that with the BACKUP SERVICE MASTER KEY command, which you can read more about here. The second set of keys are the Database Master Keys, stored per database, if you’ve created one. You can back those up with the BACKUP MASTER KEY command, which you can read more about here. Finally, you can use the keys to create certificates and other keys – those should also be backed up. Read more about those here. Anyway, the important part here is the backup. Make sure you keep those keys safe!
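    A minimal T-SQL sketch of the three backups described above (the file paths, database name, certificate name, and passwords are all placeholders, not from the original post):

      -- 1. Service Master Key (one per instance)
      BACKUP SERVICE MASTER KEY
          TO FILE = 'E:\KeyBackups\ServiceMasterKey.smk'
          ENCRYPTION BY PASSWORD = '<StrongPassword1>';

      -- 2. Database Master Key (one per database, if you created one)
      USE MyDatabase;
      BACKUP MASTER KEY
          TO FILE = 'E:\KeyBackups\MyDatabase_MasterKey.dmk'
          ENCRYPTION BY PASSWORD = '<StrongPassword2>';

      -- 3. A certificate created from those keys, with its private key
      BACKUP CERTIFICATE MyCertificate
          TO FILE = 'E:\KeyBackups\MyCertificate.cer'
          WITH PRIVATE KEY (
              FILE = 'E:\KeyBackups\MyCertificate.pvk',
              ENCRYPTION BY PASSWORD = '<StrongPassword3>');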

    Read the article

  • From TFS to Git

    - by Saeed Neamati
    I'm a .NET developer and I've used TFS (Team Foundation Server) as my source control software many times. Good features of TFS are:

    - Good integration with Visual Studio (so I do almost everything visually; no console commands)
    - Easy check-out/check-in process
    - Easy merging and conflict resolution
    - Easy automated builds
    - Branching

    Now I want to use Git as the backbone, repository, and source control of my open source projects. My projects are in C#, JavaScript, or PHP, with MySQL or SQL Server databases as the storage mechanism. I used github.com's help for this purpose: I created a profile there and downloaded a GUI for Git. Up to this point it was easy. But I'm almost stuck at going any further. I just want to do some simple (really simple) operations, including:

    - Creating a project on Git and mapping it to a folder on my laptop
    - Checking out/checking in files and folders
    - Resolving conflicts

    That's all I need to do for now. But it seems that the GUI is not that user friendly. I expect the GUI to have a Connect To... command or something like that, and then I expect a list of projects to be shown. When I choose one, I expect to see the list of files and folders of that project, just like exploring your TFS project in Visual Studio. Then I want to be able to right-click a file and select Check In... or Check Out and so on. Do I expect too much? What should I do to use Git as easily as TFS? What am I missing here?

    Read the article

  • Why is Javascript used in MongoDB and CouchDB instead of other languages such as Java, C++?

    - by startup007
    I asked this question on SO but was advised to try here. So here it goes: My understanding of JavaScript so far has been that it is a client-side language that captures events and makes a web page dynamic. But on reading the comparison between MongoDB and CouchDB I noticed that both use JavaScript. This makes me wonder about the reason behind the choice of JavaScript over other conventional languages. I guess I am trying to understand the role of JavaScript and its advantages over other languages. Update: I am not asking about the languages / drivers supported by the two databases. The comparison says: "Both CouchDB and MongoDB make use of Javascript. CouchDB uses Javascript extensively including in the building of views. MongoDB also supports running arbitrary javascript functions server-side and uses javascript for map/reduce operations." My lack of understanding pertains to why JavaScript is being used at all for the back-end work. Why is it preferred for building views in CouchDB, or for map/reduce operations? Why weren't C/C++ or Java used? What are the advantages of using JavaScript for such back-end work?

    Read the article

  • Do you store mysql exports in your version control tool for reverting to in event of error?

    - by Rob
    We run an internal web server with in-house software to run a manufacturing line. When new product features are to be added, either or both of the following occur:

    - changes to the in-house server software may be required to support them - these are for significant changes in functionality, being code-driven.
    - changes to the MySQL database, with new entries for the part numbers - these are for smaller changes: configurations and changes to already existing values and parameters. Such changes don't require code changes, so ideally we'd want our changes to be here rather than in item 1.

    Item 1 is version controlled in Subversion, so previous revisions can be referred to for rolling back in the event of problems introduced in the latest revision. But what about changes to the MySQL database? We have quality processes to ensure that such changes are error-free, but there is always a chance that errors can pass through, e.g. a mistake in data entry, or faults in the code that uses MySQL corrupting the database. We have an automated backup every 6 hours, but what if we want more manually defined checkpoints in between these intervals? We could use the same backup system, but I wondered if folks here use other methods to store previous states of databases, e.g. exporting the database as a plain-text SQL dump - at least with this method it would be possible to see diffs, e.g. in Beyond Compare, for troubleshooting. Thoughts?

    Read the article

  • Creating a backup - Rsync - Connection refused (111)

    - by pablofiumara
    I am trying to create a backup of my website for free. I just want to have a backup of my website, including not only all files and the configuration but also the databases - a full backup. If it can be done automatically, even better. I feel there are better ways than using cPanel to achieve that (actually, I believe some web hosts don't offer a cPanel at all). I read the following on how to do it: "Automatically mirror the entire contents and configuration of your main server to a secondary backup server on a completely separate network in a different data centre. Use RSync, FXP, cPanel voodoo, or whatever method you wish to automate syncing." That is why I installed the rsync daemon, which is an alternative to SSH for remote backups. I configured it but the test went wrong. The terminal is showing me this:

      pablofiumara@pablofiumara-Lenovo-G470:~$ sudo rsync [email protected]::share
      [sudo] password for pablofiumara:
      rsync: failed to connect to pablofiumara.com (50.87.147.75): Connection refused (111)
      rsync error: error in socket IO (code 10) at clientserver.c(122) [Receiver=3.0.9]
      pablofiumara@pablofiumara-Lenovo-G470:~$ sudo rsync [email protected]::share
      failed to connect to 50.87.147.7 (50.87.147.7): Connection refused (111)
      rsync error: error in socket IO (code 10) at clientserver.c(122) [Receiver=3.0.9]

    What should I do? Is there a better or easier way to achieve what I wish (see the first paragraph)?

    Read the article

  • Cleaning your BizTalk Build Server

    - by Michael Stephenson
    Just a little note for myself, this one. At one of my customers, where it is still BizTalk 2006, one of the build servers is intermittently getting issues, so I wanted to run a script periodically to clean things up a little. The below script is an example of how you can stop Cruise Control and all of the BizTalk services, then clean the BizTalk databases, reset the backup process, and then kick everything off again. This should keep the server a little cleaner and reduce the number of builds that occasionally fail for ad hoc environmental issues.

      REM Server Clean Script
      REM ===================
      REM This script is run to move the build server back to a clean state

      echo Stop Cruise Control
      net stop CCService

      echo Stop IIS
      iisreset /stop

      echo Stop BizTalk Services
      net stop BTSSvc$<Name of BizTalk Host>
      REM <Repeat for other BizTalk services>

      echo Stop SSO
      net stop ENTSSO

      echo Stop SQL Job Agent
      net stop SQLSERVERAGENT

      echo Clean Message Box
      sqlcmd -E -d BizTalkMsgBoxDB -Q "Exec bts_CleanupMsgbox"
      sqlcmd -E -d BizTalkMsgBoxDB -Q "Exec bts_PurgeSubscriptions"

      echo Clean Tracking Database
      sqlcmd -E -d BizTalkDTADb -Q "Exec dtasp_CleanHMData"

      echo Reset TDDS Stream Status
      sqlcmd -E -d BizTalkDTADb -Q "Update TDDS_StreamStatus Set lastSeqNum = 0"

      echo Force Full Backup
      sqlcmd -E -d BizTalkMgmtDB -Q "Exec sp_ForceFullBackup"

      echo Clean Backup Directory
      del E:\BtsBackups\*.* /q

      echo Start SSO
      net start ENTSSO

      echo Start SQL Job Agent
      net start SQLSERVERAGENT

      echo Start BizTalk Services
      net start BTSSvc$<Name of BizTalk Host>
      REM <Repeat for other BizTalk services>

      echo Start IIS
      iisreset /start

      echo Start Cruise Control
      net start CCService

    Read the article

  • PHP - Making CMS (architecture, etc.)

    - by UnknownProgramer
    I'm in the planning stage of a new CMS. Previously I used WordPress and other open source CMSs for my clients, but I always had to write new modules and even mess with the code in order to do certain things, which, as you understand, is not the best thing to do. So I finally decided to make my own CMS that works the way I need. But before I start it, I would like to think it through carefully to ensure that I won't need to rewrite it from the ground up just because I forgot to include some feature in the architecture or did it wrong. I would like to hear your thoughts, and most importantly I would like you to suggest some articles or books on that subject, especially on the architecture of such systems. I googled a few good books, but that is not enough. The way I'm planning to do it: PHP5, completely OOP, module architecture. You make a page and add any modules you need there, but modules are not global - they are local to a page, so you can make two pages with the same module but different content, by setting a different "content ID" for the two entities. The content ID can also be set the same, so two pages share the same module content. Also, I plan to support online storage web services (like Amazon S3) for images and files, so I would like to hear your thoughts on that too. I have not yet decided how to store language data. I don't want to use the DB for that, but I haven't decided yet. I also think I will support other databases via a global DB class and separate DB wrappers for MySQL and other databases. And, well, I would appreciate any other information you can provide on that subject.
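    A minimal SQL sketch of the page-local module idea described above (every table and column name here is hypothetical, purely for illustration):

      -- Pages and reusable module types
      CREATE TABLE pages (
          page_id INT PRIMARY KEY,
          slug    VARCHAR(255) NOT NULL
      );

      CREATE TABLE modules (
          module_id INT PRIMARY KEY,
          name      VARCHAR(100) NOT NULL  -- e.g. 'gallery', 'news'
      );

      -- Placing a module on a page binds it to a content ID: two pages
      -- with the same module show the same content only if they share
      -- the same content_id.
      CREATE TABLE page_modules (
          page_id    INT NOT NULL REFERENCES pages (page_id),
          module_id  INT NOT NULL REFERENCES modules (module_id),
          content_id INT NOT NULL,
          PRIMARY KEY (page_id, module_id)
      );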

    Read the article

  • Essbase Excel Add in - S.o.D.

    - by THE
    Sadly, another long-lasting friend is about to be buried in the wet, cold data void that holds past programs (... and AOL CDs). The Essbase Excel Add-in is about to be discontinued (see Doc ID 1466700.1) in January '13. The (already out) version 11.1.2.2.x of the Excel Add-in must be considered the last release of this particular program (unless the guys from Applied OLAP bring out their own version next to the OpenOffice Add-in that they already sport). As expected, SmartView achieved parity in functionality with Release 11.1.2.1.102, and ever since then it was just a question of time before our old buddy would get the shoe. For all users out there like me who have known and worked with the Excel Add-in for the last decade(s), this is a loss. SmartView may have functionality parity, and may altogether be the stronger, open technology - capable of Planning forms, connection to HFM, etc. But (from my personal point of view) it will not give the end user the same direct access to his databases, with nothing between him and his Essbase server. Of course it was to be expected that only one of the two could survive, and it was obvious that this would be SmartView, so this does not come as a surprise. Still. A minute for an old friend . . . . . . Thank you, and let us look forward! Unless you had other plans for the upcoming season, why not spend it investigating SmartView for your Essbase interaction needs. We hear that the days between Christmas and New Year hold unlimited potential to test out new things. Or take it as a New Year resolution: "I will switch to SmartView at the earliest possible moment".

    Read the article

  • PHP/MySQL Database application development tool

    - by RCH
    I am an amateur PHP coder, and have built a couple of dozen projects from scratch (including fairly simple e-commerce systems with user authentication, PayPal integration etc - all coded by hand from a clean page. Have also done a price comparison engine that takes data from multiple sites etc.). But I am no expert with OO and other such advanced techniques - I just have a fairly decent grasp of the basics of data processing, logic, functions and trying to optimize code as much as possible. I just want to make this clear so you have some idea of where I'm coming from. I have a couple of fairly large new projects on my plate for corporate clients - both require bespoke database-driven applications with complex relationships, many tables and lots of different front-end functions to manipulate that data for the internal staff in these companies. I figured building these systems from scratch would probably be a huge waste of time. Instead, there must be tools out there that will allow me to construct MySQL databases and build the pages with things like pagination, action buttons, table construction etc. Some kind of database abstraction layer, or system generator, if you will. What tool do you recommend for such a purpose for someone at my level? Open source would be great, but I don't mind paying for something decent as well. Thanks for any advice.

    Read the article

  • In search of database delivery practitioners and enthusiasts

    - by Claire Brooking
    We know from speaking with many of you at tradeshows and user groups that database delivery is not a factory production line. During planning, evaluation, quality control, and disaster mitigation, the people having their say at each step means that successful database deployment is a carefully managed course of action. With so many factors involved at every stage, we would love to find a way for our software to help out, by simplifying processes, speeding them up or joining together the people and the steps that make it all happen. We’re hoping our new research group for database delivery (SQL Server and Oracle) will help us understand the views and experiences of those of you out there in the trenches managing database changes. As part of our new group, we’ll be running a variety of research sessions, including surveys and phone interviews, over coming months. If you have opinions to share on Continuous Integration or Continuous Delivery for databases, we’d love to hear from you. Your feedback really will count as the product teams at Red Gate build plans. For some of our more in-depth sessions, we’ll also be offering participants an Amazon voucher as a thank-you for your time. If you’re not yet practising automated database deployment processes, but are contemplating or planning it, please do consider joining our research group too. If you’d like to sign up to the group and find out more, please fill in a quick form online, and we’ll be in touch to let you know about new research opportunities you might be interested in. We look forward to hearing your stories!

    Read the article

  • High Performance SQL Views Using WITH(NOLOCK)

    - by gt0084e1
    Every now and then you find a simple way to make everything much faster. We often find customers creating data warehouses or OLAP cubes even though they have a relatively small amount of data (a few gigs) compared to their server memory. If you have more server memory than the size of your database or working set, nearly any aggregate query should run in a second or less. In some situations there may be high traffic from the transactional application, and SQL Server may make your query wait for several other queries to run before giving you your results. The purpose of this is to make sure you don't get two versions of the truth. In an ATM system, you want to give the bank balance after the withdrawal, not before, or you may get a very unhappy customer. So by default databases are rightly very conservative about this kind of thing. Unfortunately this split-second precision comes at a cost: the performance of the query may not be acceptable by today's standards, because the database has to maintain locks on the server. Fortunately, SQL Server gives you a simple way to read the data without waiting on pending transactions (at the risk of reading uncommitted data). To better facilitate reporting, you can create a view that includes these directives:

      CREATE VIEW CategoriesAndProducts AS
      SELECT *
      FROM dbo.Categories WITH (NOLOCK)
      INNER JOIN dbo.Products WITH (NOLOCK)
          ON dbo.Categories.CategoryID = dbo.Products.CategoryID

    In some cases queries that were taking minutes end up taking seconds. Much easier than moving the data to a separate database, and it's still pretty much real time, give or take a few milliseconds. You've been warned not to use this for bank balances, though.

    Read the article

  • Today @ OOW: Identity Management for the SoMoClo world

    - by B Shashikumar
    Today at OpenWorld, we have a very interesting lineup of Identity Management sessions that discuss how to extend identity management securely to cloud, mobile and social ecosystems. Here are 3 of the can't-miss identity management sessions today:

    - Identity Management and the Cloud: Security is regularly identified as the #1 barrier to cloud service adoption. Oracle Identity Management is designed to help customers extend and connect core identity services to SaaS applications and systems. This session explores how organizations are using Oracle Identity Management with cloud services and how some customers are offering identity management as a cloud service.

    - Real-time External Authorization for Applications, Middleware and Databases: Externalization of authorization is key to manageability and audit. This session covers enterprise-wide authorization solution deployment best practices and real-world examples of using Oracle Entitlements Server - the one-stop standards-compliant authorization solution - for middleware, applications, and data.

    - Delivering Secure WiFi on the Tube as an Olympics Legacy from London 2012: In this session, Virgin Media, the U.K.'s first combined provider of broadband, TV, mobile, and home phone services, shares how it is providing free secure Wi-Fi services to the London Underground, using Oracle Virtual Directory and Oracle Entitlements Server, leveraging back-end legacy systems that were never designed to be externalized. As an Olympics 2012 legacy, the Oracle architecture will form a platform to be consumed by other Virgin Media services such as video on demand.

    Here is the complete lineup of Identity Management sessions today at OOW.

    Read the article

  • EPM Architecture: Reporting and Analysis

    - by Marc Schumacher
    Reporting and Analysis is the basis for all Oracle EPM reporting components. Through the Java based Reporting and Analysis web application deployed on WebLogic, it enables users to browse through reports for all kind of Oracle EPM reporting components. Typical users access the web application by browser through Oracle HTTP Server (OHS). Reporting and Analysis Web application talks to the Reporting and Analysis Agent using CORBA protocol on various ports. All communication to the repository databases (EPM System Registry and Reporting and Analysis database) from web and application layer is done using JDBC. As an additional data store, the Reporting and Analysis Agent uses the file system to lay down individual reports. While the reporting artifacts are stored on the file system, the folder structure and report based security information is stored in the relational database. The file system can be either local or remote (e.g. network share, network file system). If an external user directory is used, Reporting and Analysis services also communicate to this directory. The next post will cover WebAnalysis.

    Read the article

  • New features in SQL Prompt 6.4

    - by Tom Crossman
    We’re pleased to announce a new beta version of SQL Prompt. We’ve been trying out a few new core technologies, and used them to add features and bug fixes suggested by users on the SQL Prompt forum and suggestions forum. You can download the SQL Prompt 6.4 beta here (zip file). Let us know what you think!

    New features:

    - Execute current statement: In a query window, you can now execute the SQL statement under your cursor by pressing Shift + F5. For example, if you have a query containing two statements and your cursor is placed on the second statement, pressing Shift + F5 executes only the second statement.

    - Insert semicolons: You can now use SQL Prompt to automatically insert missing semicolons after each statement in a query. To insert semicolons, go to the SQL Prompt menu and click Insert Semicolons. Alternatively, hold Ctrl and press B then C.

    - BEGIN...END block highlighting: When you place your cursor over a BEGIN or END keyword, SQL Prompt now automatically highlights the matching keyword.

    - Rename variables and aliases: You can now use SQL Prompt to rename all occurrences of a variable or alias in a query. To rename a variable or alias, place your cursor over an instance of the variable or alias you want to rename and press F2.

    - Improved loading dialog box: The database loading dialog box now shows actual progress, and you can cancel loading databases.

    - Single suggestion improvement: SQL Prompt no longer suggests keywords if the keyword has been typed and no other suggestions exist.

    - Performance improvement: SQL Prompt now has less impact on Management Studio start-up time.

    What do you think? We want to hear your feedback about the beta. If you have any suggestions, or bugs to report, tell us on the SQL Prompt forum or our suggestions forum.

    Read the article

  • Can not login Dashboard / Unable to find the server at mykeystoneurl

    - by neo0
    I installed Dashboard following this guide: http://wiki.openstack.org/OpenStackDashboard Everything went fine, but when I run the server, I cannot log in with the username and password from the DATABASES config in local_settings.py. Here's my config:

      DATABASES = {
          'default': {
              'ENGINE': 'django.db.backends.mysql',
              'NAME': 'dashboarddb',
              'USER': 'nova',
              'PASSWORD': 'nova',
              'HOST': 'localhost',
              'default-character-set': 'utf8'
          },
      }

    When I run the Dashboard server and enter the username + password, it returns this error in the browser:

      Unable to find the server at mykeystoneurl (HTTP 400)

    And on the command line:

      DEBUG:openstack_dashboard.settings:Running in debug mode without debug_toolbar.
      DEBUG:openstack_dashboard.settings:Running in debug mode without debug_toolbar.
      Validating models...

      0 errors found
      Django version 1.3.1, using settings 'openstack_dashboard.settings'
      Development server is running at http://0.0.0.0:8888/
      Quit the server with CONTROL-C.
      Request returned failure status.
      Traceback (most recent call last):
        File "/home/us/horizon/.venv/src/python-keystoneclient/keystoneclient/client.py", line 121, in request
          body = json.loads(body)
        File "/usr/lib/python2.7/json/__init__.py", line 326, in loads
          return _default_decoder.decode(s)
        File "/usr/lib/python2.7/json/decoder.py", line 366, in decode
          obj, end = self.raw_decode(s, idx=_w(s, 0).end())
        File "/usr/lib/python2.7/json/decoder.py", line 384, in raw_decode
          raise ValueError("No JSON object could be decoded")
      ValueError: No JSON object could be decoded
      [06/Mar/2012 15:20:03] "POST /auth/login/ HTTP/1.1" 200 3735

    I also tried logging in as "admin" with the password "password" or "secrete", but it didn't work. What's wrong? Thank you!

    Read the article

  • What is the best approach for database design with lots of columns?

    - by Pratyush
    I am writing a query-based financial application. It lets the user write complicated equations (much like the WHERE part of an SQL query) and find companies matching those criteria. For the above, I currently have more than 500 columns in the database table (each column representing a financial field). Examples of columns are: company_name, sales_annual_00, sales_annual_01, sales_annual_02, sales_annual_03, sales_annual_04, profit_annual_00, profit_annual_01... (over 500 such columns). The number of rows is around 5000. Going forward, I would like to further increase the number of columns/financial fields. For the above I would like help with the following:

    1) What is the best database design approach? Is it OK to have this many columns?
    2) How can it be normalized? (The user can use any of these fields in search criteria.)
    3) Is it OK to stick with MySQL, or would a modern document-based database like MongoDB be better for this?

    P.S. (Update): I have been using MySQL till now, and a running example of the usage is at: http://screener.in/companies/89/Formula-- In the above there are around 500 fields/columns to create your query on; however, I am looking to increase that number to many more in the future.
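    One common way to normalize such a wide table is an attribute/value layout - a sketch under assumed names (it trades the 500 columns for joins, so whether it wins depends on the query patterns):

      -- One row per company, one row per financial field, and one row
      -- per (company, field, period) value instead of 500 columns.
      CREATE TABLE company (
          company_id INT PRIMARY KEY,
          name       VARCHAR(100) NOT NULL
      );

      CREATE TABLE field (
          field_id INT PRIMARY KEY,
          code     VARCHAR(50) NOT NULL  -- e.g. 'sales_annual', 'profit_annual'
      );

      CREATE TABLE company_field_value (
          company_id INT      NOT NULL REFERENCES company (company_id),
          field_id   INT      NOT NULL REFERENCES field (field_id),
          period     SMALLINT NOT NULL, -- e.g. 00, 01, ... from the column suffixes
          value      DECIMAL(18,4),
          PRIMARY KEY (company_id, field_id, period)
      );

    Adding a new financial field then becomes an INSERT into field rather than an ALTER TABLE.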

    Read the article

  • Update to SQL Server Configuration Scripting Utility

    - by Bill Graziano
    Last spring I released a utility to script SQL Server configuration information on CodePlex.  I’ve been making small changes in this application as my needs have changed.  The application is a .NET 2.0 console application.  This utility serves two needs for me.  First it helps with disaster recovery.  All server level objects (logins, jobs, linked servers, audits) are scripted to a single file per object type.  This enables the scripts to be easily run against a DR server.  If these are checked into source control you can view the history of the script and find out what changed and when. The second goal is to capture what changed inside a database.  Objects inside a database (tables, stored procedures, views, etc.) are each scripted to their own file.  This makes it easier to track the changes to an object over time.  This does include permissions and role membership so you can capture security changes.  My assumption is that a database backup is the primary method of disaster recovery for databases so this utility is designed to capture changes to objects.  You can find the full list of changes from the original on the Downloads page on CodePlex.

    Read the article

  • Database Backup History From MSDB in a pivot table

    - by steveh99999
    I knocked up a nice little query to display backup history for each database in a pivot table format. I wanted to display the most recent full, differential, and transaction log backup for each database. Here's the SQL:

      WITH backupCTE AS
      (
          SELECT name, recovery_model_desc,
                 d AS 'Last Full Backup',
                 i AS 'Last Differential Backup',
                 l AS 'Last Tlog Backup'
          FROM
          (
              SELECT db.name, db.recovery_model_desc, type, backup_finish_date
              FROM master.sys.databases db
              LEFT OUTER JOIN msdb.dbo.backupset a ON a.database_name = db.name
              WHERE db.state_desc = 'ONLINE'
          ) AS Sourcetable
          PIVOT (MAX(backup_finish_date) FOR type IN (D, I, L)) AS MostRecentBackup
      )
      SELECT * FROM backupCTE

    This gives one row per online database, with the most recent full, differential, and transaction log backup dates as columns. With this query, I can then build up some straightforward queries to ensure backups are scheduled and running as expected. For example, the following predicates can be used:

    - WHERE [Last Full Backup] IS NULL -- the database has never been backed up.
    - WHERE [Last Tlog Backup] < DATEADD(mi, -60, GETDATE()) AND recovery_model_desc <> 'SIMPLE' -- the transaction log has not been backed up in the last 60 minutes.
    - WHERE [Last Full Backup] < DATEADD(dd, -1, GETDATE()) AND [Last Differential Backup] < [Last Full Backup] -- no backup in the last day.
    - WHERE [Last Differential Backup] < DATEADD(dd, -1, GETDATE()) AND [Last Full Backup] < DATEADD(dd, -8, GETDATE()) -- no differential backup in the last day when the last full backup is over 8 days old.
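    As a usage sketch, here is the second check as a complete, runnable statement - just the CTE above with the predicate appended:

      WITH backupCTE AS
      (
          SELECT name, recovery_model_desc,
                 d AS 'Last Full Backup',
                 i AS 'Last Differential Backup',
                 l AS 'Last Tlog Backup'
          FROM
          (
              SELECT db.name, db.recovery_model_desc, type, backup_finish_date
              FROM master.sys.databases db
              LEFT OUTER JOIN msdb.dbo.backupset a ON a.database_name = db.name
              WHERE db.state_desc = 'ONLINE'
          ) AS Sourcetable
          PIVOT (MAX(backup_finish_date) FOR type IN (D, I, L)) AS MostRecentBackup
      )
      -- Databases whose transaction log has not been backed up in the last hour
      SELECT name, [Last Tlog Backup]
      FROM backupCTE
      WHERE [Last Tlog Backup] < DATEADD(mi, -60, GETDATE())
        AND recovery_model_desc <> 'SIMPLE'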

    Read the article

  • How to Generate a Create Table DDL Script Along With Its Related Tables

    - by Compudicted
    Have you ever wondered, when creating table diagrams in SQL Server Management Studio (SSMS), how slickly you can add related tables to a diagram by just right-clicking on the interesting table name? Have you also ever needed to script those related tables, including the master one? And then discovered you have dozens of related tables? Or maybe no SSMS at your disposal? That was me one day. Well, creativity to the rescue! I Binged and Googled around until I found more or less what I wanted, but it all involved T-SQL - yeah, long and convoluted CROSS APPLYs - and then I saw a PowerShell solution that I quickly adapted to my needs (I am not referencing any particular author because it was a mashup):

      ###########################################################################################################
      # Created by: Arthur Zubarev on Oct 14, 2012                                                              #
      # Synopsis: Generate file containing the root table CREATE (DDL) script along with all its related tables #
      ###########################################################################################################

      [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | out-null

      $RootTableName = "TableName" # The table name, no schema name needed

      $srv = New-Object Microsoft.SqlServer.Management.Smo.Server("TargetSQLServerName")
      $conContext = $srv.ConnectionContext
      $conContext.LoginSecure = $True
      # In case integrated security is not used, uncomment below
      #$conContext.Login = "sa"
      #$conContext.Password = "sapassword"
      $db = New-Object Microsoft.SqlServer.Management.Smo.Database
      $db = $srv.Databases.Item("TargetDatabase")

      $scrp = New-Object Microsoft.SqlServer.Management.Smo.Scripter($srv)
      $scrp.Options.NoFileGroup = $True
      $scrp.Options.AppendToFile = $False
      $scrp.Options.ClusteredIndexes = $False
      $scrp.Options.DriAll = $False
      $scrp.Options.ScriptDrops = $False
      $scrp.Options.IncludeHeaders = $True
      $scrp.Options.ToFileOnly = $True
      $scrp.Options.Indexes = $False
      $scrp.Options.WithDependencies = $True
      $scrp.Options.FileName = 'C:\TEMP\TargetFileName.SQL'

      $smoObjects = New-Object Microsoft.SqlServer.Management.Smo.UrnCollection
      Foreach ($tb in $db.Tables)
      {
          Write-Host -foregroundcolor yellow "Table name being processed" $tb.Name

          If ($tb.IsSystemObject -eq $FALSE -and $tb.Name -eq $RootTableName) # feel free to customize the selection condition
          {
              Write-Host -foregroundcolor magenta $tb.Name "table and its related tables added to be scripted."
              $smoObjects.Add($tb.Urn)
          }
      }

      # The actual act of scripting
      $sc = $scrp.Script($smoObjects)

      Write-Host -foregroundcolor green $RootTableName "and its related tables have been scripted to the target file."

    Enjoy!

    Read the article

  • Automated Acceptance tests under specific contraints

    - by HH_
    This is a follow-up to my previous question, which was a bit general, so I'll ask about a more precise situation. I want to automate acceptance testing on a web application. Briefly, this application allows the user to create contracts for subscribers, with two constraints:

    - You cannot create more than one contract for a subscriber.
    - Once a contract is created, it cannot be deleted (from the UI).

    Let's say TestCreate is a test case with tests for the normal creation of a contract. The constraints have introduced complexities to the testing process, mainly dependencies between test cases and test executions. Before we run TestCreate we need to make sure that the application is in a suitable state (the subscriber has no contract). If we run TestCreate twice, the second run will fail, since the state of the application will have changed. So we need to revert back to the initial state (i.e. delete the contract), which is impossible to do from the UI. More generally, after each test case we should guarantee that the state is reverted back. And since, in this case, it is impossible to do from the UI, how do you handle this? Possible solution: I thought about taking a backup of the database in the state that I desire and, after each test case, running a script which drops the database and restores the backup. However, I find that too heavy to do for every single test case. In addition, what if some information is stored in files? Or in multiple or inaccessible databases? My question: In this situation, what would an experienced tester do to write automated and maintainable tests? Thank you. More info: I'm trying to integrate the tests into a BDD framework, which I find to be a neat solution for test documentation and communication, but it does not solve this particular problem (it even makes it harder).
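    A minimal sketch of the backup/restore reset idea, assuming SQL Server for concreteness (the database name and file path are hypothetical; this only resets the database, not files or other external state):

      -- Once, capture the known-good baseline state
      BACKUP DATABASE ContractsDb
          TO DISK = 'E:\TestBaselines\ContractsDb_baseline.bak'
          WITH INIT;

      -- After each test case, discard the changes and restore the baseline
      ALTER DATABASE ContractsDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
      RESTORE DATABASE ContractsDb
          FROM DISK = 'E:\TestBaselines\ContractsDb_baseline.bak'
          WITH REPLACE;
      ALTER DATABASE ContractsDb SET MULTI_USER;

    On SQL Server, a database snapshot restored with RESTORE DATABASE ... FROM DATABASE_SNAPSHOT is a lighter-weight variant of the same idea.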

    Read the article

  • Discovering Your Project

    - by Tim Murphy
    The discovery phase of any project is both exciting and critical to the project's success. There are several key points that you need to keep in mind as you navigate this process. The first thing you need to understand is who the players in the project are and what their motivations are for the project. Leaving out a key stakeholder in the resulting product is one of the easiest ways to doom your project to failure. The better the quality of the input you have at this early phase, the better chance you will have of creating a well-accepted deliverable. The next task you should tackle is to gather the goals for the project. Specifically, what does the company expect to get for the money they are about to lay out? This seems like a common-sense task, but you would be surprised how many teams go straight to building the system. Even if you are following an agile methodology, I believe that this is critical. Inventorying the resources that already exist gives you an idea of what you are going to have to build and what you can leverage at lower risk. This list should include documentation, servers, code repositories, databases, languages, security systems and supporting teams. All of these are "resources" that can affect the cost and delivery schedule of your project. Finally, you need to verify what you have found and documented with the stakeholders and subject matter experts. Documentation that has not been reviewed is actually a list of assumptions, and we all know that assumptions are the mother of all screw-ups. If you give the discovery phase of your project the attention that it deserves, your project has a much better chance of success. I would love to hear what other people find important for this phase. Please leave comments on this post so we can share the knowledge.

    Read the article

  • ADF Essentials - Available for free and certified on GlassFish!

    - by delabassee
    If you are an Oracle customer, you are probably familiar with Oracle ADF (Application Development Framework). If you are not, ADF is, in a nutshell, a Java EE based framework that simplifies the development of enterprise applications. It is the development framework that was used, among other things, to build Oracle Fusion Applications. Oracle has just released ADF Essentials, a free-to-develop-and-deploy version of Oracle ADF's core technologies. And as good news never comes alone, GlassFish 3.1.2 is now a certified container for ADF Essentials! ADF Essentials leverages core ADF features and includes:

    - Oracle ADF Faces - a set of more than 150 JSF 2.0 rich components that simplify the creation of rich Web user interfaces (charting, data visualization, advanced tables, drag and drop, touch gesture support, extensive windowing capabilities, etc.)
    - Oracle ADF Controller - an extension of the JSF controller that helps build reusable process flows and provides the ability to create dynamic regions within Web pages.
    - Oracle ADF Binding - an XML-based, meta-data abstraction layer to connect user interfaces to business services.
    - Oracle ADF Business Components - a declaratively-configured layer that simplifies developing business services against relational databases by providing reusable components that implement common design patterns.

    ADF is a highly declarative framework, so it has always had very good tooling support. Visual development for Oracle ADF Essentials is provided in Oracle JDeveloper 11.1.2.3. Eclipse support is planned for a later OEPE (Oracle Enterprise Pack for Eclipse) release. Here are some relevant links to quickly learn how to use ADF Essentials on GlassFish:

    - Video: Oracle ADF Essentials Overview and Demo
    - Deploying Oracle ADF Essentials Applications to Glassfish
    - OTN: Oracle ADF Essentials Resources

    Read the article

  • starting up with VPS or cloud hosting? [closed]

    - by FlyOn
    Possible Duplicate: How to find web hosting that meets my requirements? Summary: I want to start hosting my product. I'd like to register domains (at some point). I'm a Linux beginner. Thinking about scalability and price, am I better off on a VPS to get started, or would some form of cloud hosting be better (not being familiar with either)? Full question: I'm creating a product where people can create their own 3D representations of whatever data/info they have and (re)organise that data. The product is coming along beautifully in my local environment, but it's about time I start getting some form of hosting ready, and I could really use some advice on where/how to get started:

    - I'd like people to be able to move/register their own domains on my server. I could start without this just to demo the product, but it would be the very first thing on the todo list.
    - I'd like to automatically copy some files / install databases etc. for each domain.
    - I probably want to see if I can let users manage their own subdomains at some point, but for now I'd like to start as simply as possible.
    - I've always been on a Windows machine, so my Linux experience is quite basic. I really don't mind getting into it, but I'm thinking it's better to get my product out first of all and see where to go from there.
    - Although... I'd like things to be scalable. If I set up some reseller VPS now which only scales to 100 domains or so, that means I have to set up something else / move again when I pass that level, or I'm in trouble if I suddenly get lots of new customers.
    - Finally, I need to start cheap. I'm putting all I have into starting this company, and live on very little. So before I have any customers, 50 dollars a month is a fair bit, and 100 dollars a month may be too much.

    If anyone has some tips to help get me started I'd be really grateful.

    Read the article

  • We are moving an Access based corporate front-end into a Web-based App

    - by Max Vernon
    We have an enterprise application with a front end written in Microsoft Access 2003 that has evolved over the past 6 years. The back-end data, and a fair amount of back-end logic, is contained within several Microsoft SQL Server databases. This front-end app consists of around 180 forms and over 120,000 lines of code, and interacts with VB.Net DLLs that support various critical functions used by our sales force. The current system makes use of 3 monitors to display various information; the Access app uses COM+ to control Microsoft Outlook and Internet Explorer for various purposes. The Access front end sometimes occupies 2 screens, automatically resizing itself based on Windows API-reported screen dimensions. The app also uses a Google map to present data to our agents, and allows two-way interactivity with the map through COM+ connectivity to JavaScript contained in the Google map. At the urging of senior management, we are looking to completely rewrite this application using some web-based technology, such as ASP.Net or perhaps a LAMP stack (the thinking with the LAMP stack being that "free" is pretty cheap). We want to move to a web-based app so we can eliminate the dependency on our physical location for hiring new sales force members. Currently, our main office is full to capacity, and we need to continue growing the company. Does anyone have any thoughts on what would be the best technology to use for a web-based app of this magnitude, keeping in mind that the app is dependent on back-end services in our existing infrastructure? The app handles financial data and personal customer data, among other things. [I've looked at Best practices for moving large MS Access application towards .Net? and read the answers and most of the comments. Interesting reading, and it has some valid points, but our C.O.O. and contracted Software Architect are pushing for a full web-based app, not a .Net Windows app.]

    Read the article

  • Is this a ridiculous way to structure a DB schema, or am I completely missing something?

    - by Jim
    I have done a fair bit of work with relational databases, and I think I understand the basic concepts of good schema design pretty well. I was recently tasked with taking over a project where the DB was designed by a highly-paid consultant. Please let me know if my gut instinct - "WTF??!?" - is warranted, or is this guy such a genius that he's operating out of my realm? The DB in question is an in-house app used to enter requests from employees. Just looking at a small section of it, you have information on the users and information on the request being made. I would design this like so:

    User table:
      UserID (primary key, indexed, no dupes)
      FirstName
      LastName
      Department

    Request table:
      RequestID (primary key, indexed, no dupes)
      <...> various data fields containing request details
      UserID -- foreign key associated with User table

    Simple, right? The consultant designed it like this (with sample data):

    UsersTable:
      UserID  FirstName  LastName
      234     John       Doe
      516     Jane       Doe
      123     Foo        Bar

    DepartmentsTable:
      DepartmentID  Name
      1             Sales
      2             HR
      3             IT

    UserDepartmentTable:
      UserDepartmentID  UserID  Department
      1                 234     2
      2                 516     2
      3                 123     1

    RequestTable:
      RequestID  UserID  <...>
      1          516     blah
      2          516     blah
      3          234     blah

    The entire database is constructed like this, with every piece of data encapsulated in its own table, with numeric IDs linking everything together. Apparently the consultant had read about OLAP and wanted the 'speed of integer lookups'. He also has a large number of stored procedures to cross-reference all of these tables. Is this valid design for a small to mid-sized SQL DB? Thanks for comments/answers...
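    A hedged DDL sketch of the simpler design described above (column types are assumptions; only the table and column names come from the post):

      CREATE TABLE Users (
          UserID     INT PRIMARY KEY,  -- indexed, no dupes
          FirstName  VARCHAR(50) NOT NULL,
          LastName   VARCHAR(50) NOT NULL,
          Department VARCHAR(50) NOT NULL
      );

      CREATE TABLE Requests (
          RequestID INT PRIMARY KEY,
          -- <...> various data fields containing request details
          UserID    INT NOT NULL REFERENCES Users (UserID)  -- FK to Users
      );

    Note that the consultant's extra UserDepartmentTable only changes the data model if a user can belong to more than one department; otherwise a single DepartmentID column (or plain text, as above) on the user row expresses the same thing.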

    Read the article
