Search Results

Search found 33445 results on 1338 pages for 'single instance storage'.

Page 691/1338 | < Previous Page | 687 688 689 690 691 692 693 694 695 696 697 698  | Next Page >

  • Lots of TIME_WAIT connections in netstat (Windows Server 2008)

    - by Rhys Causey
    I'm having some issues on a Windows 2008 server with some network connections not going through. For instance, in a web application on the server, we need to open a socket connection to another server, and this fails sometimes with the following message: Only one usage of each socket address (protocol/network address/port) is normally permitted I looked up the error, which led me to this page: http://msdn.microsoft.com/en-us/library/aa560610(v=bts.20).aspx, which indicates that it might be TCP/IP port exhaustion. When I perform netstat -n, I get tons of TIME_WAIT connections on port 80. Does anyone have any idea what could be causing this?
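
    A quick way to quantify the problem before changing anything is to count the TIME_WAIT sockets and check the settings that govern ephemeral port exhaustion. The sketch below is PowerShell that should work on Windows Server 2008; the registry values shown are optional overrides and may be absent if the defaults are in use.

      # Count TIME_WAIT sockets and show the busiest remote endpoints
      $timeWait = @(netstat -n | Select-String 'TIME_WAIT')
      "TIME_WAIT sockets: $($timeWait.Count)"
      $timeWait |
          ForEach-Object { ($_.Line.Trim() -split '\s+')[2] } |   # foreign address column
          Group-Object | Sort-Object Count -Descending |
          Select-Object -First 10 Count, Name

      # The dynamic (ephemeral) port range used for outbound connections
      netsh int ipv4 show dynamicport tcp

      # Registry overrides that affect exhaustion (blank if the defaults apply)
      Get-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters' |
          Select-Object MaxUserPort, TcpTimedWaitDelay

    If the dynamic port range is small, or TcpTimedWaitDelay is at its default, widening the range or lowering the delay (as the linked MSDN article describes) is the usual mitigation.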

    Read the article

  • RRAS VPN Server on Windows 2008 Behind NAT

    - by Chris
    Ok, so I have kind of a funky setup; let me see if I can describe it. I have a single VMware host with a public IP address of 74.xx.xx.x. Inside that host, I have 3 VMs:
    Web Server - 1 NIC - 192.168.199.20
    SQL Server - 1 NIC - 192.168.199.30
    RRAS/VPN Server - 2 NICs - 192.168.199.40 & 192.168.199.45
    Due to limitations of my ISP, all of the VMs are connected to the host via NAT. I have NAT set up for the web server so all incoming requests to 74.xx.xx.x on port 80 route to 192.168.199.20. This works fine. Now I want to set up a Windows 2008 VPN server inside this NAT network and forward the correct traffic to it. My questions are as follows:
    What are the TCP/UDP ports that I have to forward?
    What special configuration is needed on the server and clients, since this is behind a NAT?
    Any other advice would be wonderful.
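
    For reference (not part of the question), the ports depend on which tunnel types are enabled on the RRAS box: PPTP needs TCP 1723 plus GRE (IP protocol 47, which the NAT device must be able to pass), L2TP/IPsec needs UDP 500 and UDP 4500 (UDP 1701 travels inside the IPsec tunnel), and SSTP only needs TCP 443. The PowerShell sketch below is just a rough reachability check for the TCP-based ports from an outside client; the address is the placeholder from the question.

      # Rough sketch: test the TCP-based VPN ports from outside the NAT.
      # UDP 500/4500 and GRE cannot be probed this way.
      $publicIp = '74.xx.xx.x'   # placeholder public address
      foreach ($port in 1723, 443) {
          $client = New-Object System.Net.Sockets.TcpClient
          try {
              $client.Connect($publicIp, $port)
              "TCP $port is reachable"
          } catch {
              "TCP $port is NOT reachable"
          } finally {
              $client.Close()
          }
      }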

    Read the article

  • Vision, Integration, Ability—Oracle is once again positioned as an E-Commerce Leader

    - by Jeri Kelley
    The new Gartner report is the fifth successive Magic Quadrant for E-Commerce to position Oracle as a leader. We’re proud of the result, but we’re not too surprised. Oracle Commerce’s functionality is uniquely aligned with a number of the major market trends Gartner describes in its report: from customers ‘expecting a seamless buying experience across all channels’, to organizations seeking to consolidate ‘B2B and B2C applications with a single underlying platform’.
    What we think sets Oracle Commerce apart
    Why are we a leader? We believe the key strengths of Oracle Commerce include:
    Outstanding Scalability and Versatility
    Oracle has a long and enviable track record of delivering B2B and B2C e-commerce solutions, and the Oracle Commerce solution supports a broad range of vertical industries – from retail to telecom, and manufacturing to distribution. Additionally, Oracle Commerce is engineered to scale simply and quickly to meet the changing needs of the enterprise.
    Oracle Integration
    Our commitment to seamless solutions integration allows customers to get the most from our ever evolving range of e-commerce and CX products—and deliver consistent, relevant, and personalized cross-channel buying experiences that drive customer satisfaction, and boost revenue.
    Experience and Vision
    Oracle has a long and impressive history of delivering B2B and B2C e-commerce solutions to the world’s best brands. We’re constantly putting this experience to good use, and making our solutions even smarter. With powerful merchandising and business tools, and advanced promotions capabilities, Oracle Commerce is one of the most forward-thinking e-commerce solutions around.
    Read the report
    You can read Gartner’s full report here, or click here to find out more about our celebrated platform.

    Read the article

  • Hello Operator, My Switch Is Bored

    - by Paul White
    This is a post for T-SQL Tuesday #43 hosted by my good friend Rob Farley. The topic this month is Plan Operators. I haven’t taken part in T-SQL Tuesday before, but I do like to write about execution plans, so this seemed like a good time to start. This post is in two parts. The first part is primarily an excuse to use a pretty bad play on words in the title of this blog post (if you’re too young to know what a telephone operator or a switchboard is, I hate you). The second part of the post looks at an invisible query plan operator (so to speak).

    1. My Switch Is Bored

    Allow me to present the rare and interesting execution plan operator, Switch: Books Online has this to say about Switch: Following that description, I had a go at producing a Fast Forward Cursor plan that used the TOP operator, but had no luck. That may be due to my lack of skill with cursors, I’m not too sure. The only application of Switch in SQL Server 2012 that I am familiar with requires a local partitioned view:

      CREATE TABLE dbo.T1 (c1 int NOT NULL CHECK (c1 BETWEEN 00 AND 24));
      CREATE TABLE dbo.T2 (c1 int NOT NULL CHECK (c1 BETWEEN 25 AND 49));
      CREATE TABLE dbo.T3 (c1 int NOT NULL CHECK (c1 BETWEEN 50 AND 74));
      CREATE TABLE dbo.T4 (c1 int NOT NULL CHECK (c1 BETWEEN 75 AND 99));
      GO
      CREATE VIEW V1
      AS
      SELECT c1 FROM dbo.T1
      UNION ALL
      SELECT c1 FROM dbo.T2
      UNION ALL
      SELECT c1 FROM dbo.T3
      UNION ALL
      SELECT c1 FROM dbo.T4;

    Not only that, but it needs an updatable local partitioned view. We’ll need some primary keys to meet that requirement:

      ALTER TABLE dbo.T1 ADD CONSTRAINT PK_T1 PRIMARY KEY (c1);
      ALTER TABLE dbo.T2 ADD CONSTRAINT PK_T2 PRIMARY KEY (c1);
      ALTER TABLE dbo.T3 ADD CONSTRAINT PK_T3 PRIMARY KEY (c1);
      ALTER TABLE dbo.T4 ADD CONSTRAINT PK_T4 PRIMARY KEY (c1);

    We also need an INSERT statement that references the view. Even more specifically, to see a Switch operator, we need to perform a single-row insert (multi-row inserts use a different plan shape):

      INSERT dbo.V1 (c1) VALUES (1);

    And now…the execution plan: The Constant Scan manufactures a single row with no columns. The Compute Scalar works out which partition of the view the new value should go in. The Assert checks that the computed partition number is not null (if it is, an error is returned). The Nested Loops Join executes exactly once, with the partition id as an outer reference (correlated parameter). The Switch operator checks the value of the parameter and executes the corresponding input only. If the partition id is 0, the uppermost Clustered Index Insert is executed, adding a row to table T1. If the partition id is 1, the next lower Clustered Index Insert is executed, adding a row to table T2…and so on. In case you were wondering, here’s a query and execution plan for a multi-row insert to the view:

      INSERT dbo.V1 (c1) VALUES (1), (2);

    Yuck! An Eager Table Spool and four Filters! I prefer the Switch plan. My guess is that almost all the old strategies that used a Switch operator have been replaced over time, using things like a regular Concatenation Union All combined with Start-Up Filters on its inputs. Other new (relative to the Switch operator) features like table partitioning have specific execution plan support that doesn’t need the Switch operator either. This feels like a bit of a shame, but perhaps it is just nostalgia on my part, it’s hard to know. Please do let me know if you encounter a query that can still use the Switch operator in 2012 – it must be very bored if this is the only possible modern usage!
    2. Invisible Plan Operators

    The second part of this post uses an example based on a question Dave Ballantyne asked using the SQL Sentry Plan Explorer plan upload facility. If you haven’t tried that yet, make sure you’re on the latest version of the (free) Plan Explorer software, and then click the Post to SQLPerformance.com button. That will create a site question with the query plan attached (which can be anonymized if the plan contains sensitive information). Aaron Bertrand and I keep a close eye on questions there, so if you have ever wanted to ask a query plan question of either of us, that’s a good way to do it.

    The problem

    The issue I want to talk about revolves around a query issued against a calendar table. The script below creates a simplified version and adds 100 years of per-day information to it:

      USE tempdb;
      GO
      CREATE TABLE dbo.Calendar
      (
          dt date NOT NULL,
          isWeekday bit NOT NULL,
          theYear smallint NOT NULL,
          CONSTRAINT PK__dbo_Calendar_dt PRIMARY KEY CLUSTERED (dt)
      );
      GO
      -- Monday is the first day of the week for me
      SET DATEFIRST 1;
      -- Add 100 years of data
      INSERT dbo.Calendar WITH (TABLOCKX) (dt, isWeekday, theYear)
      SELECT
          CA.dt,
          isWeekday = CASE WHEN DATEPART(WEEKDAY, CA.dt) IN (6, 7) THEN 0 ELSE 1 END,
          theYear = YEAR(CA.dt)
      FROM Sandpit.dbo.Numbers AS N
      CROSS APPLY
      (
          VALUES (DATEADD(DAY, N.n - 1, CONVERT(date, '01 Jan 2000', 113)))
      ) AS CA (dt)
      WHERE N.n BETWEEN 1 AND 36525;

    The following query counts the number of weekend days in 2013:

      SELECT Days = COUNT_BIG(*)
      FROM dbo.Calendar AS C
      WHERE theYear = 2013
      AND isWeekday = 0;

    It returns the correct result (104) using the following execution plan: The query optimizer has managed to estimate the number of rows returned from the table exactly, based purely on the default statistics created separately on the two columns referenced in the query’s WHERE clause. (Well, almost exactly, the unrounded estimate is 104.289 rows.)

    There is already an invisible operator in this query plan – a Filter operator used to apply the WHERE clause predicates. We can see it by re-running the query with the enormously useful (but undocumented) trace flag 9130 enabled: Now we can see the full picture. The whole table is scanned, returning all 36,525 rows, before the Filter narrows that down to just the 104 we want. Without the trace flag, the Filter is incorporated in the Clustered Index Scan as a residual predicate. It is a little bit more efficient than using a separate operator, but residual predicates are still something you will want to avoid where possible. The estimates are still spot on though:

    Anyway, looking to improve the performance of this query, Dave added the following filtered index to the Calendar table:

      CREATE NONCLUSTERED INDEX Weekends
      ON dbo.Calendar(theYear)
      WHERE isWeekday = 0;

    The original query now produces a much more efficient plan: Unfortunately, the estimated number of rows produced by the seek is now wrong (365 instead of 104): What’s going on? The estimate was spot on before we added the index!

    Explanation

    You might want to grab a coffee for this bit. Using another trace flag or two (8606 and 8612) we can see that the cardinality estimates were exactly right initially: The highlighted information shows the initial cardinality estimates for the base table (36,525 rows), the result of applying the two relational selects in our WHERE clause (104 rows), and after performing the COUNT_BIG(*) group by aggregate (1 row).
    All of these are correct, but that was before cost-based optimization got involved :)

    Cost-based optimization

    When cost-based optimization starts up, the logical tree above is copied into a structure (the ‘memo’) that has one group per logical operation (roughly speaking). The logical read of the base table (LogOp_Get) ends up in group 7; the two predicates (LogOp_Select) end up in group 8 (with the details of the selections in subgroups 0-6). These two groups still have the correct cardinalities as trace flag 8608 output (initial memo contents) shows: During cost-based optimization, a rule called SelToIdxStrategy runs on group 8. Its job is to match logical selections to indexable expressions (SARGs). It successfully matches the selections (theYear = 2013, isWeekday = 0) to the filtered index, and writes a new alternative into the memo structure. The new alternative is entered into group 8 as option 1 (option 0 was the original LogOp_Select): The new alternative is to do nothing (PhyOp_NOP = no operation), but to instead follow the new logical instructions listed below the NOP. The LogOp_GetIdx (full read of an index) goes into group 21, and the LogOp_SelectIdx (selection on an index) is placed in group 22, operating on the result of group 21. The definition of the comparison ‘theYear = 2013’ (ScaOp_Comp downwards) was already present in the memo starting at group 2, so no new memo groups are created for that.

    New Cardinality Estimates

    The new memo groups require two new cardinality estimates to be derived. First, LogOp_GetIdx (full read of the index) gets a predicted cardinality of 10,436. This number comes from the filtered index statistics:

      DBCC SHOW_STATISTICS (Calendar, Weekends) WITH STAT_HEADER;

    The second new cardinality derivation is for the LogOp_SelectIdx applying the predicate (theYear = 2013). To get a number for this, the cardinality estimator uses statistics for the column ‘theYear’, producing an estimate of 365 rows (there are 365 days in 2013!):

      DBCC SHOW_STATISTICS (Calendar, theYear) WITH HISTOGRAM;

    This is where the mistake happens. Cardinality estimation should have used the filtered index statistics here, to get an estimate of 104 rows:

      DBCC SHOW_STATISTICS (Calendar, Weekends) WITH HISTOGRAM;

    Unfortunately, the logic has lost sight of the link between the read of the filtered index (LogOp_GetIdx) in group 21, and the selection on that index (LogOp_SelectIdx) that it is deriving a cardinality estimate for, in group 22. The correct cardinality estimate (104 rows) is still present in the memo, attached to group 8, but that group now has a PhyOp_NOP implementation. Skipping over the rest of cost-based optimization (in a belated attempt at brevity) we can see the optimizer’s final output using trace flag 8607: This output shows the (incorrect, but understandable) 365 row estimate for the index range operation, and the correct 104 estimate still attached to its PhyOp_NOP. This tree still has to go through a few post-optimizer rewrites and ‘copy out’ from the memo structure into a tree suitable for the execution engine. One step in this process removes PhyOp_NOP, discarding its 104-row cardinality estimate as it does so.

    To finish this section on a more positive note, consider what happens if we add an OVER clause to the query aggregate.
    This isn’t intended to be a ‘fix’ of any sort, I just want to show you that the 104 estimate can survive and be used if later cardinality estimation needs it:

      SELECT Days = COUNT_BIG(*) OVER ()
      FROM dbo.Calendar AS C
      WHERE theYear = 2013
      AND isWeekday = 0;

    The estimated execution plan is: Note the 365 estimate at the Index Seek, but the 104 lives again at the Segment! We can imagine the lost predicate ‘isWeekday = 0’ as sitting between the seek and the segment in an invisible Filter operator that drops the estimate from 365 to 104. Even though the NOP group is removed after optimization (so we don’t see it in the execution plan) bear in mind that all cost-based choices were made with the 104-row memo group present, so although things look a bit odd, it shouldn’t affect the optimizer’s plan selection.

    I should also mention that we can work around the estimation issue by including the index’s filtering columns in the index key:

      CREATE NONCLUSTERED INDEX Weekends
      ON dbo.Calendar(theYear, isWeekday)
      WHERE isWeekday = 0
      WITH (DROP_EXISTING = ON);

    There are some downsides to doing this, including that changes to the isWeekday column may now require Halloween Protection, but that is unlikely to be a big problem for a static calendar table ;)

    With the updated index in place, the original query produces an execution plan with the correct cardinality estimation showing at the Index Seek: That’s all for today, remember to let me know about any Switch plans you come across on a modern instance of SQL Server!

    Finally, here are some other posts of mine that cover other plan operators:
    Segment and Sequence Project
    Common Subexpression Spools
    Why Plan Operators Run Backwards
    Row Goals and the Top Operator
    Hash Match Flow Distinct
    Top N Sort
    Index Spools and Page Splits
    Singleton and Range Seeks
    Bitmaps
    Hash Join Performance
    Compute Scalar

    © 2013 Paul White – All Rights Reserved
    Twitter: @SQL_Kiwi

    Read the article

  • New User of UPK?

    - by [email protected]
    The UPK Developer comes with a variety of manuals to help support your organization in the development and deployment of content. The Developer manuals can be found in the \Documentation\Language Code\Reference folder where the Developer has been installed. As of 3.5.x, the documentation can also be accessed via the Start menu: Start\Programs\User Productivity Kit\Documentation\Reference.
    Content Deployment.pdf: This manual provides information on how to deploy content to your audience.
    Content Development.pdf: This manual provides information on how to create, maintain, and publish content using the Developer. The content of this manual also appears in the Developer help system.
    Content Player.pdf: This manual provides instructions on how to view content using the Player. The content of this manual also appears in the Player help system.
    In-Application Support Guide.pdf: This manual provides information on how to implement context-sensitive, in-application support for enterprise applications using Player content.
    Installation & Administration.pdf: This manual provides instructions for installing the Developer in a single-user or multi-user environment as well as information on how to add and manage users and content in a multi-user installation. An Administration help system also appears in the Developer for authors configured as administrators. This manual also provides instructions for installing and configuring Usage Tracking.
    Upgrade.pdf: This manual provides information on how to upgrade from a previous version to the current version.
    Usage Tracking Administration & Reporting.pdf: This manual provides instructions on how to manage users and usage tracking reports.
    - Kathryn Lustenberger, Oracle UPK Outbound Product Management

    Read the article

  • Log shipping and shrinking transaction logs

    - by DavidWimbush
    I just solved a problem that had me worried for a bit. I'm log shipping from three primary servers to a single secondary server, and the transaction log disk on the secondary server was getting very full. I established that several primary databases had unused space that resulted from big, one-off updates so I could shrink their logs. But would this action be log shipped and applied to the secondary database too? I thought probably not. And, more importantly, would it break log shipping? My secondary databases are in a Standby / Read Only state so I didn't think I could shrink their logs. I RTFMd, Googled, and asked on a Q&A site (not the evil one) but was none the wiser. So I was facing a monumental round of shrink, full backup, full secondary restore and re-start log shipping (which would leave us without a disaster recovery facility for the duration). Then I thought it might be worthwhile to take a non-essential database and just make absolutely sure a log shrink on the primary wouldn't ship over and occur on the secondary as well. So I did a DBCC SHRINKFILE and kept an eye on the secondary. Bingo! Log shipping didn't blink and the log on the secondary shrank too. I just love it when something turns out even better than I dared to hope. (And I guess this highlights something I need to learn about what activities are logged.)
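
    If you want to repeat the same experiment, here is a minimal PowerShell sketch of the check described above - server, database, and log file names are placeholders, and it assumes the SQLPS module is available on the machine running it:

      Import-Module SQLPS -DisableNameChecking

      # Shrink the log of a non-essential database on the primary
      Invoke-Sqlcmd -ServerInstance 'PRIMARY01' -Database 'NonEssentialDb' `
          -Query "DBCC SHRINKFILE (N'NonEssentialDb_log', 256);"   # target size in MB

      # After the next backup/copy/restore cycle, compare the physical log size on the secondary
      $q = "SELECT DB_NAME(database_id) AS db, name, size * 8 / 1024 AS size_mb " +
           "FROM sys.master_files WHERE type_desc = 'LOG';"
      Invoke-Sqlcmd -ServerInstance 'SECONDARY01' -Query $q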

    Read the article

  • Exchange 2003 Outlook Anywhere - Changed certificate, not working

    - by JohnyD
    I have a single Exchange 2003 installation which for the past 2 years has been set up for Outlook Anywhere access by means of a self-signed certificate. Just this past week I updated that certificate to a Go Daddy wildcard certificate to allow for use of our web services over https. I've updated the web listener on our ISA 2006 firewall and I can successfully use our services over https. However, my Outlook Anywhere access is now not functioning. I've installed the new wildcard certificate on my XP notebook into the Trusted Root Certificate Store, but I keep getting prompted that the password is incorrect. To make things even more confusing, I also have OWA set up and it works fine with the new certificate. Any ideas as to what I'm doing wrong?

    Read the article

  • Using PAM and vsftpd without root access

    - by Zizzencs
    I'm trying to set up a few vsftpd instances on a machine that I have no root access to. The authentication should be done through PAM with pam_listfile, like this: pam_listfile.so item=group sense=allow file=/path/filename onerr=fail I can ask the administrator to set up a PAM service for me and include that line but he is not willing to create 6 PAM services for my 6 vsftpd instances and I really need different /path/filename set for each vsftpd server. Is there a way to solve this problem? Can I somehow not use absolute path as the parameter? (I know the correct solution would be to use one vsftpd instance and set up virtual users properly. However unfortunately I have to work what I have and the users already exist in an Active Directory and are authenticated to the system using another PAM service.)

    Read the article

  • Execute Backup-SqlDatabase cmdlet remotely

    - by Maxim V. Pavlov
    When I run the following script line locally on an SQL Server machine, it executes perfectly:

      Backup-SqlDatabase -ServerInstance $serverName -Database $sqldbname -BackupFile "$($backupFolder)$($dbname)_db_$($addinionToName).bak"

    $serverName contains a short name of the SQL Server instance. SQL Server is 2012, so these new cmdlets work like a charm. On the other hand, when I am trying to perform a DB backup from a TeamCity agent machine like this (through the Invoke-Command cmdlet):

      function BackupDB([String] $serverName, [String] $sqldbname, [String] $backupFolder, [String] $addinionToName)
      {
          Import-Module SQLPS -DisableNameChecking
          Backup-SqlDatabase -ServerInstance $serverName -Database $sqldbname -BackupFile "$($backupFolder)$($dbname)_db_$($addinionToName).bak"
      }
      Invoke-Command -computername $SQLComputerName -Credential $credentials -ScriptBlock ${function:BackupDB} -ArgumentList $SQLInstanceName, $DatabaseName, $BackupDirectory, $BakId

    it results in an error:

      Failed to connect to server $serverName.
          + CategoryInfo          : NotSpecified: (:) [Backup-SqlDatabase], ConnectionFailureException
          + FullyQualifiedErrorId : Microsoft.SqlServer.Management.Common.ConnectionFailureException,Microsoft.SqlServer.Management.PowerShell.BackupSqlDatabaseCommand

    What is the correct way to execute the Backup-SqlDatabase cmdlet remotely?
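
    Not an authoritative answer, but the symptom is consistent with the classic credential double-hop: inside the remote session the Windows token cannot be passed on again to the SQL Server instance, so the connection fails even though the same command works locally. A common workaround is CredSSP delegation; the sketch below assumes that is the cause and reuses the variables from the question.

      # One-time setup (sketch). On the TeamCity agent (client side):
      Enable-WSManCredSSP -Role Client -DelegateComputer $SQLComputerName -Force
      # On the SQL Server machine (server side):
      #   Enable-WSManCredSSP -Role Server -Force

      # Then ask Invoke-Command to delegate fresh credentials with CredSSP:
      Invoke-Command -ComputerName $SQLComputerName -Credential $credentials `
          -Authentication Credssp `
          -ScriptBlock ${function:BackupDB} `
          -ArgumentList $SQLInstanceName, $DatabaseName, $BackupDirectory, $BakId

    Running the backup T-SQL directly from the agent with Invoke-Sqlcmd (no remoting at all) is another way to sidestep the hop entirely.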

    Read the article

  • Help on TileMapRenderer

    - by Crypted
    In my project, I'm trying to render a map using TileMapRenderer, but it doesn't show anything when I render it. When I use some other files from a tutorial, they are rendered correctly. When debugging, my TileAtlas instance shows the size as 0. I have used Texture Packer UI for packing the images. Comparing with the tutorial's files, I can see that the index starts from 1 in my file and 0 in the tutorial, but changing it to 0 didn't work either.

      map.png
      format: RGBA8888
      filter: Nearest,Nearest
      repeat: none
      Map
        rotate: false
        xy: 0, 0
        size: 32, 32
        orig: 32, 32
        offset: 0, 0
        index: 1
      Map
        rotate: false
        xy: 32, 0
        size: 32, 32
        orig: 32, 32
        offset: 0, 0
        index: 2
      Map
        rotate: false
        xy: 64, 0
        size: 32, 32
        orig: 32, 32
        offset: 0, 0
        index: 3
      Map
        rotate: false
        xy: 96, 0
        size: 32, 32
        orig: 32, 32
        offset: 0, 0
        index: 4
      Map
        rotate: false
        xy: 128, 0
        size: 32, 32
        orig: 32, 32
        offset: 0, 0
        index: 5

    Here is the beginning of the tmx file:

      <?xml version="1.0" encoding="UTF-8"?>
      <map version="1.0" orientation="orthogonal" width="20" height="20" tilewidth="32" tileheight="32">
        <tileset firstgid="1" name="a" tilewidth="32" tileheight="32">
          <image source="map.png" width="256" height="32"/>
        </tileset>
        <layer name="Tile Layer 1" width="20" height="20">
          <data>
            <tile gid="2"/>
            <tile gid="2"/>

    Apart from that, the tutorial's files and my files seem to be similar. Can anyone help me here?

    Read the article

  • GlassFish will not start when SNMP is enabled

    - by edarc
    I have a GlassFish v3 app server running on 64-bit Debian Lenny. Everything is running fine, except I would like to monitor GF's JVM instance with SNMP. However, every time I try to enable it by adding the following <jvm-options> in domain.xml: -Dcom.sun.management.snmp.port=10161 -Dcom.sun.management.snmp.acl.file=/path/to/snmp.acl -Dcom.sun.management.snmp.interface=127.0.0.1 GlassFish refuses to start: $ asadmin start-domain Waiting for DAS to start .Error starting domain: default. The server exited prematurely with exit code 1. Command start-domain failed. $ There is also nothing illuminating (well, really nothing at all) in jvm.log or server.log. The snmp.acl file contains: acl = { { communities = public access = read-only managers = localhost } } and is chmod 600 (I know this is not the problem because it will actually fail with an error about the permissions if it is set to anything other than 600) $ java -version java version "1.6.0_0" OpenJDK Runtime Environment (build 1.6.0_0-b11) OpenJDK 64-Bit Server VM (build 1.6.0_0-b11, mixed mode)

    Read the article

  • Problem in installing OpenOffice 3.1 on Solaris 10

    - by Sunil Kumar Sahoo
    I want OpenOffice in Solaris. So I downloaded OpenOffice from the link below. http://download.openoffice.org/other.html#tested-full My OpenOffice is in .tar.gz format so I unzipped the file using gunzip and then untar'ed the file using tar xvf command. Now I got a directory containing packages subfolder. When I cd to that directory I found too many subdirectories. I could not find a single .pkg file or .jar file or .sh file so that I can install the OpenOffice in Solaris 10. How can I install OpenOffice in Solaris 10 given the scenario above?

    Read the article

  • SCSI direct-access device appears as multiple lun's

    - by unixdj
    I have a similarly described problem to this question: http://superuser.com/questions/90181/same-scsi-drive-appears-multiple-times-on-the-controller-list where a SCSI direct-access device appears as multiple lun's, when it should only be one. The device is a SCSI-1 device, the SCSI controller card is an Adaptec AHA-7850 (rev 03), and system is PC / Linux 2.6. This device worked fine with RHEL4, and appeared as a single device / lun when the OS booted, but I've just tried plugging the device into a newer Linux disto (CentOS 5.4) and it now sees the device as 8 luns; with consequently 8 device files /dev/sgb to /dev/sgi. Any clues of how to figure out where the problem / fix is, would be great.

    Read the article

  • Linked vSphere servers preventing cloning?

    - by brian
    I've currently got a pair of vSphere5 standard servers (physical, not VAs) managing about a hundred ESX 4.1 and 5 hosts in two different physical and logical datacenters. With our last purchase, we bought another vSphere license for the new vS server. I unmanaged all the ESX servers in one datacenter and added them to new vSphere server. Our previous single-vS-server layout used to be: -vSphere1 --Datacenter1 (where the physical ESX host was located) ---Folder ----ESX server1 --Datacenter2 ---Folder ----ESX server2 Now it looks like -vSphere1 --Datacenter1 ---Folder ----ESX server1 -vSphere2 (new vSphere server) --Datacenter2 ---Folder ----ESX server2 ESX server2 was removed from vSphere1's inventory and added to vSphere2's, so it is now managed by vSphere2. This is nice and all, as no vSphere <-- ESX management traffic leaves the physical datacenter, except for one huge oversight: when I go to clone a VM, the opposite vSphere server (and thus other datacenter) does not show up in the list on the first page of the wizard. Is this a bug, a license limitation, or is it just simply not possible to clone a VM from an ESX box managed by one vS server to another ESX box managed by a /different/ vS server?

    Read the article

  • Encode real-time dvb-s stream using mencoder

    - by karatchov
    My satellite receiver can stream its MPEG-2 video/audio output over the LAN. Using mencoder, I'm trying to build a script to encode and save the stream in real time with my Core2Duo 1.8 GHz. Right now I'm using a single pass; it produces good quality at a video rate of 800Kb/s, but takes more than 95% of CPU power, causing a lot of frame skips if the computer is used while encoding.

      mencoder -o -vf lavcdeint -oac mp3lame -lameopts abr:q=2:aq=2 -ovc x264 -ffourcc avc1 -x264encopts crf=25:me=hex:subq=9:frameref=2:nocabac:threads=auto -mc 3

    So I'm considering 2-pass encoding to relieve the processor and record 100% of the stream, but I have no idea how to start. For info:
    Standard stream: MPEG-2, 720*576, 25fps
    HD stream: 1920*1080, 50fps (recording this is not my goal, but it would be super cool if I could)

    Read the article

  • How to enable "Sleep" for Windows 2012 server

    - by Ramesh
    I recently installed Windows Server 2012. This will serve as the dev instance for our engineers and will be accessed from multiple time zones. I therefore plan to run it 24x7, but want to conserve energy by going to sleep mode when not in use and enabling Wake-On-Magic-Packet. Based on a previous post, it appears that there is no option to sleep Windows Server 2012 with Hyper-V. Since I don't care much about virtualization right now, I uninstalled Hyper-V. In addition to this, I have done the following:
    1) Ran powercfg -a. Sleep is not listed.
    2) Added [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\hvboot] "Start"=dword:00000003 to try to gain sleep - no luck!
    3) I only see Shutdown and Restart in the power options.
    Please help me sleep and save the planet :-)
    Thanks, Ramesh
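
    A couple of additional checks worth running from an elevated PowerShell prompt (these are stock powercfg switches, not anything specific to Server 2012; the NIC name below is only an example):

      powercfg /a                        # which sleep states the current configuration exposes
      powercfg /devicequery wake_armed   # devices currently allowed to wake the machine
      # Arm the NIC for Wake-on-Magic-Packet (replace the device name with your own):
      powercfg /deviceenablewake "Intel(R) 82574L Gigabit Network Connection"

    If S3 is still reported as unavailable after removing Hyper-V, check that the hypervisor boot entry is really gone (bcdedit /enum shows hypervisorlaunchtype) and that sleep is enabled in the machine's firmware.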

    Read the article

  • Variable parsing with Bash and wget

    - by Bill Westrup
    I'm attempting to use wget in a simple bash script to grab a jpeg image from an Axis camera. This script outputs a file named JPEGOUT instead of the desired output, which should be a timestamped jpeg (ex: 201209292040.jpg). Changing the variable in the wget statement from JPEGOUT to $JPEGOUT makes wget fail with a "wget: missing URL" error. The weird thing is wget parses the $IP variable correctly. No luck on the output file name. I've tried single quotes, double quotes, parentheses: all to no luck. Here's the script:

      !/bin/bash
      IP=$1
      JPEGOUT= date +%Y%m%d%H%M.jpg
      wget -O JPEGOUT http://$IP/axis-cgi/jpg/image.cgi?resolution=640x480&compression=25

    Any ideas on how to get the output file name to parse correctly?

    Read the article

  • Full-text indexing? You must read this

    - by Kyle Hatlestad
    For those of you who may have missed it, Peter Flies, Principal Technical Support Engineer for WebCenter Content, gave an excellent webcast on database searching and indexing in WebCenter Content. It's available for replay along with a download of the slide deck. Look for the one titled 'WebCenter Content: Database Searching and Indexing'. One of the items he led with...and concluded with...was a recommendation on optimizing your search collection if you are using full-text searching with the Oracle database. This can greatly improve your search performance, and it applies to both the Oracle Text Search and DATABASE.FULLTEXT search methods. Peter describes how a collection can become fragmented over time as content is added, updated, and deleted. Just like you should defragment your hard drive from time to time to get your files placed on the disk in the most optimal way, you should do the same for the search collection. And optimizing the collection is just a simple procedure call that can be scheduled to run automatically:

      begin
        ctx_ddl.optimize_index('FT_IDCTEXT1','FULL', parallel_degree =>'1');
      end;

    When I checked my own test instance, I found my collection had a row fragmentation of about 80%. After running the optimization procedure, it went down to 0%. The knowledgebase article On Index Fragmentation and Optimization When Using OracleTextSearch or DATABASE.FULLTEXT [ID 1087777.1] goes into detail on how to check your current index fragmentation, how to run the procedure, and then how to schedule the procedure to run automatically. While the article mentions scheduling the job weekly, Peter says he is now recommending this be run daily, especially on more active systems. And just as a reminder, be sure to involve your DBA with your WebCenter Content implementation as you go to production and over time. We recently had a customer complain of slow performance of the application when it was discovered the database was starving for memory. So it's always helpful to keep a watchful eye on your database.

    Read the article

  • Network issues with DNS not being found

    - by Anriëtte Combrink
    Hi there. This is exactly how our network looks: a single server with a network router. Everything is set up, but I cannot connect our Macs to this server under Login Options - Join... Our server's name is Toolbox and I have tried Toolbox.local, Toolbox.private, and prepending the afp:// protocol to the name, but nothing works; our Macs just don't want to connect this way. Our router runs DHCP and naturally hands out all the IP addresses. Would I have to add Toolbox.local to the DNS on the router and link it to the server via a static internal IP? Our Macs keep giving the following error while trying to join the Network Account Server: "Unable to add server. Could not resolve the address (2200)". What am I doing wrong?

    Read the article

  • SQL SERVER – Weekend Project – Experimenting with ACID Transactions, SQL Compliant, Elastically Scalable Database

    - by pinaldave
    Database technology is huge and big world. I like to explore always beyond what I know and share the learning. Weekend is the best time when I sit around download random software on my machine which I like to call as a lab machine (it is a pretty old laptop, hardly a quality as lab machine) and experiment it. There are so many free betas available for download that it’s hard to keep track and even harder to find the time to play with very many of them.  This blog is about one you shouldn’t miss if you are interested in the learning various relational databases. NuoDB just released their Beta 7.  I had already downloaded their Beta 6 and yesterday did the same for 7.   My impression is that they are onto something very very interesting.  In fact, it might be something really promising in terms of database elasticity, scale and operational cost reduction. The folks at NuoDB say they are working on the world’s first “emergent” database which they tout as a brand new transitional database that is intended to dramatically change what’s possible with OLTP.  It is SQL compliant, guarantees ACID transactions, yet scales elastically on heterogeneous and decentralized cloud-based resources. Interesting note for sure, making me explore more. Based on what I’ve seen so far, they are solving the architectural challenge that exists between elastic, cloud-based compute infrastructures designed to scale out in response to workload requirements versus the traditional relational database management system’s architecture of central control. Here’s my experience with the NuoDB Beta 6 so far: First they pretty much threw away all the features you’d associate with existing RDBMS architectures except the SQL and ACID transactions which they were smart to keep.  It looks like they have incorporated a number of the big ideas from various algorithms, systems and techniques to achieve maximum DB scalability. From a user’s perspective, the NuoDB Beta software behaves like any other traditional SQL database and seems to offer all the benefits users have come to expect from standards-based SQL solutions. One of the interesting feature is that one can run a transactional node and a storage node on my Windows laptop as well on other platforms – indeed interesting for sure. It’s quite amazing to see a database elastically scale across machine boundaries. So, one of the basic NuoDB concepts is that as you need to scale out, you can easily use more inexpensive hardware when/where you need it.  This is unlike what we have traditionally done to scale a database for an application – we replace the hardware with something more powerful (faster CPU and Disks). This is where I started to feel like NuoDB is on to something that has the potential to elastically scale on commodity hardware while reducing operational expense for a big OLTP database to a degree we’ve never seen before. NuoDB is able to fully leverage the cloud in an asynchronous and highly decentralized manner – while providing both SQL compliance and ACID transactions. Basically what NuoDB is doing is so new that it is all hard to believe until you’ve experienced it in action.  I will keep you up to date as I test the NuoDB Beta 7 but if you are developing a web-scale application or have an on-premise app you are thinking of moving to the cloud, testing this beta is worth your time. If you do try it, let me know what you think.  
    Before I say anything more, I am going to do more experiments and more tests on this product and compare it with other existing similar products. For me it was a weekend well spent on learning something new. I encourage you to download the Beta 7 version and share your opinions here. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Putting a base in the middle

    - by PSteele
    From Eric Lippert's Blog: Here’s a crazy-seeming but honest-to-goodness real customer scenario that got reported to me recently. There are three DLLs involved, Alpha.DLL, Bravo.DLL and Charlie.DLL. The classes in each are:

      public class Alpha // In Alpha.DLL
      {
        public virtual void M()
        {
          Console.WriteLine("Alpha");
        }
      }
      public class Bravo: Alpha // In Bravo.DLL
      {
      }
      public class Charlie : Bravo // In Charlie.DLL
      {
        public override void M()
        {
          Console.WriteLine("Charlie");
          base.M();
        }
      }

    Perfectly sensible. You call M on an instance of Charlie and it says “Charlie / Alpha”. Now the vendor who supplies Bravo.DLL ships a new version which has this code:

      public class Bravo: Alpha
      {
        public override void M()
        {
          Console.WriteLine("Bravo");
          base.M();
        }
      }

    The question is: what happens if you call Charlie.M without recompiling Charlie.DLL, but you are loading the new version of Bravo.DLL? The customer was quite surprised that the output is still “Charlie / Alpha”, not “Charlie / Bravo / Alpha”. Read the full post for a very interesting discussion of the design of C#, the CLR, method resolution and more. Technorati Tags: .NET, C#, CLR

    Read the article

  • SQL Server 2012 Memory Manager KB articles

    - by SQLOS Team
    Since the release of SQL Server 2012 with a redesigned memory manager, a steady stream of KB articles has been produced by CSS to provide guidance on the new or changed options, as well as fixes that have been published.

    How has memory sizing changed in SQL 2012?
    2663912 Memory configuration and sizing considerations in SQL Server 2012 - http://support.microsoft.com/default.aspx?scid=kb;EN-US;2663912

    Setting "locked pages" to avoid SQL Server memory pages getting swapped has been simplified, particularly for Standard Edition; the details can be found here:
    2659143 How to enable the "locked pages" feature in SQL Server 2012 - http://support.microsoft.com/default.aspx?scid=kb;EN-US;2659143

    Note the following deprecation (particularly relevant for 32-bit installations):
    2644592 The "AWE enabled" SQL Server feature is deprecated - http://support.microsoft.com/default.aspx?scid=kb;EN-US;2644592

    Note the following fixes available:
    2708594 FIX: Locked page allocations are enabled without any warning after you upgrade to SQL Server 2012 - http://support.microsoft.com/kb/2708594/EN-US
    2688697 FIX: Out-of-memory error when you run an instance of SQL Server 2012 on a computer that uses NUMA - http://support.microsoft.com/kb/2688697/EN-US

    Originally posted at http://blogs.msdn.com/b/sqlosteam/
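
    As a quick sanity check after applying the guidance above, the following PowerShell/Invoke-Sqlcmd sketch (the instance name is a placeholder) reports whether locked pages are actually in use by the instance:

      Import-Module SQLPS -DisableNameChecking
      $q = "SELECT physical_memory_in_use_kb / 1024 AS memory_in_use_mb, " +
           "locked_page_allocations_kb / 1024 AS locked_pages_mb " +
           "FROM sys.dm_os_process_memory;"
      Invoke-Sqlcmd -ServerInstance '.\SQL2012' -Query $q

    A non-zero locked_pages_mb value means the locked pages feature is in effect.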

    Read the article

  • What's required to configure Ubuntu to use a specific DNS server?

    - by ks78
    I've set up two Amazon EC2 instances, both running Ubuntu Server. One is configured as a DNS server running bind9, which will be used to allow EC2 instances to communicate with each other based on hostname rather than IP, since their private IPs may change. I think I have the DNS server set up correctly. I want to use the second EC2 instance to test the DNS server. Using Webmin, I've added the DNS server's private IP to the client's DNS Servers list and added the domain to the Search Domains list. I did have to edit /etc/dhcp3/dhclient.conf to make my changes stick. After reboot, I expected I'd be able to ping or nslookup the DNS server from the test client, but it can't seem to find the server. Is there something I'm missing? What's required to configure an Ubuntu client to use a DNS server? I just want to make sure I'm not missing something before I assume the server's the problem.

    Read the article

  • Stop Zabbix notification for nodes under zabbix-proxy when proxy service is down

    - by A_01
    I have a zabbix-proxy and 12 nodes behind that proxy. Right now, whenever the proxy service goes down, it sends out-of-reach mail for all 12 nodes. I want mail to be sent only for the Zabbix proxy, not for the nodes under that proxy.
    Updated: Now I am trying to have a single trigger that checks both of the following conditions:
    1 - the Zabbix host has not been accessible for the past x minutes;
    2 - the host is not sending any data to the proxy (host is down).
    The trigger should only fire when the proxy is running and the node is down. I tried the expression below, but it's not working for me. Can someone please help me out with this?

      ({ip-10-4-1-17.ec2.internal:agent.ping.nodata(2m)}=1) & ({ip-10-4-1-17.ec2.internal:zabbix[proxy,zabbixproxy.dev-test.com,lastaccess].fuzzytime(120)}=1)

    Read the article

  • Documentation in Oracle Retail Analytics, Release 13.3

    - by Oracle Retail Documentation Team
    The 13.3 Release of Oracle Retail Analytics is now available on the Oracle Software Delivery Cloud and from My Oracle Support. The Oracle Retail Analytics 13.3 release introduced significant new functionality with its new Customer Analytics module. The Customer Analytics module enables you to perform retail analysis of customers and customer segments. Market basket analysis (part of the Customer Analytics module) provides insight into which products have strong affinity with one another. Customer behavior information is obtained from mining sales transaction history, and it is correlated with customer segment attributes to inform promotion strategies. The ability to understand market basket affinities allows marketers to calculate, monitor, and build promotion strategies based on critical metrics such as customer profitability. Highlighted End User Documentation Updates With the addition of Oracle Retail Customer Analytics, the documentation set addresses both modules under the single umbrella name of Oracle Retail Analytics. Note, however, that the modules, Oracle Retail Merchandising Analytics and Oracle Retail Customer Analytics, are licensed separately. To accommodate new functionality, the Retail Analytics suite of documentation has been updated in the following areas, among others: The User Guide has been updated with an overview of Customer Analytics. It also contains a list of metrics associated with Customer Analytics. The Operations Guide provides details on Market Basket Analysis as well as an updated list of APIs. The program reference list now also details the module (Merchandising Analytics or Customer Analytics) to which each program applies. The Data Model was updated to include new information related to Customer Analytics, and a new section, Market Basket Analysis Module, was added to the document with its own entity relationship diagrams and data definitions. List of Documents The following documents are included in Oracle Retail Analytics 13.3: Oracle Retail Analytics Release Notes Oracle Retail Analytics Installation Guide Oracle Retail Analytics User Guide Oracle Retail Analytics Implementation Guide Oracle Retail Analytics Operations Guide Oracle Retail Analytics Data Model

    Read the article
