Search Results

  • Windows HPC Server links

    I've already described how to set up a Windows HPC Server for development. Before you dive into developing for the cluster, if you are new to this it is probably a good idea to learn the basics by reading some overview material. Below is a list of links.

    Direct links to Windows HPC content:
    1. Windows HPC Server 2008 Overview Datasheet (4-page PDF).
    2. Windows HPC Server 2008 Technical Overview (32-page doc).
    3. Windows HPC Server 2008 Getting Started Guide (26-page doc), which is also available online as part of the TechNet technical library section on Windows HPC Server 2008, which includes much more useful material.
    4. Windows HPC Server 2008 Job Scheduler (38-page doc).
    5. Windows HPC Server 2008 Job Templates (56-page doc).
    6. Developing for the Windows HPC Server 2008 Platform (16-page doc or PDF version).

    Windows HPC sites:
    7. Windows HPC Forums.
    8. HPC Developer Resources.
    9. Windows HPC Server 2008 Resource Kit - Developer.
    10. Windows HPC Server 2008 - TechNet.
    11. The Windows HPC Team Blog.

    HPC courses:
    12. High-Performance Computing Fundamentals Course (Pluralsight).
    13. Classic HPC Development using Visual C++ (course slides and materials in a ZIP). Author's blog post.
    14. From sequential to parallel code (course slides and materials in a ZIP). Author's blog post.

    Next time I will post resources specific to the most popular programming models for the cluster today: MPI and Cluster SOA. Until then, happy reading! Comments about this post are welcome at the original blog.

    Read the article

  • VSDB to SSDT Part 2: SQL Server 2008 Server Project … with SSDT

    - by Etienne Giust
    With Visual Studio 2012 and the use of SSDT technology, there is only one type of database project: SQL Server Database Project. With Visual Studio 2010, we used to have the SQL Server 2008 Server Project, which we used to define server-level objects, mostly logins and linked servers. A convenient wizard allowed for the creation of this type of project. It does not exist anymore. Here is how to create an equivalent of the SQL Server 2008 Server Project with Visual Studio 2012:

    1. Create a new SQL Server Database Project: it will be created empty.
    2. Create a new SQL Schema Compare (SQL menu item > Schema Compare > New Schema Comparison).
    3. As a source, select any database on the SQL server you want to mimic.
    4. Set the target to be your newly created Database Project.
    5. In the Schema Compare options (cog-like icon), Object Types pane, set the options as below. You might want to tweak those and select only the object types you want.
    6. Then, run the comparison, review and select your changes, and apply them to the project.
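
    Once such a project is populated, server-level objects live in it as plain CREATE scripts. For reference, a minimal sketch of what one of those scripts might look like (the login name here is hypothetical, not from the post):

        -- Sketch of a server-level object script in an SSDT project.
        -- [DOMAIN\svc_app] is a hypothetical Windows account.
        CREATE LOGIN [DOMAIN\svc_app] FROM WINDOWS
            WITH DEFAULT_DATABASE = [master];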

    Read the article

  • Insufficient Permissions on UNC Path for Physical Path in IIS7

    - by Eric C
    I've got a multi-server setup where Server A is hosting the html files and Server B is running IIS 7.5. I've specified a UNC path for the Physical Path of the website on Server B. When I try to hit localhost I'm receiving the following error: Cannot read configuration file due to insufficient permissions I am able to browse and modify files in the UNC path on Server B. I'm guessing it has something to do with IIS_IUSRS of Server B not having permissions, but I'm unsure how to add them to the shared directory of Server A.
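
    One commonly suggested approach for this setup (an assumption, not confirmed by the poster): since IIS_IUSRS is a group local to Server B and does not exist on Server A, grant Server B's computer account access on the share instead, which covers worker processes running as ApplicationPoolIdentity or NetworkService. A hedged sketch, run on Server A, with hypothetical domain, server, and path names:

        rem Grant the IIS server's machine account read access to the content folder.
        rem DOMAIN\SERVERB$ is the computer account of Server B (hypothetical names).
        icacls "D:\WebContent" /grant "DOMAIN\SERVERB$:(OI)(CI)(RX)"

    The same grant would also need to be reflected in the share permissions, not just the NTFS ACL.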

    Read the article

  • SQLAuthority News – SQL Server 2012 – Microsoft Learning Training and Certification

    - by pinaldave
    Here is the conversation I had right after I posted my earlier blog post about Download Microsoft SQL Server 2012 RTM Now.

    Rajesh: So SQL Server is available for me to download?
    Pinal: Yes, sure, check the link here.
    Rajesh: It is a trial; do you know when it will be available for everybody?
    Pinal: I think you mean General Availability (GA), which is on April 1st, 2012.
    Rajesh: I want to have a head start with the SQL Server 2012 examinations, and I want to know about every single exam.

    Exam 70-461: Querying Microsoft SQL Server 2012
    This exam is intended for SQL Server database administrators, implementers, system engineers, and developers with two or more years of experience who are seeking to prove their skills and knowledge in writing queries.

    Exam 70-462: Administering Microsoft SQL Server 2012 Databases
    This exam is intended for database professionals who perform installation, maintenance, and configuration tasks as their primary areas of responsibility. They will often set up database systems and are responsible for making sure those systems operate efficiently.

    Exam 70-463: Implementing a Data Warehouse with Microsoft SQL Server 2012
    The primary audience for this exam is Extract Transform Load (ETL) and Data Warehouse developers. They are most likely to focus on hands-on work creating business intelligence (BI) solutions, including data cleansing, ETL, and Data Warehouse implementation.

    Exam 70-464: Developing Microsoft SQL Server 2012 Databases
    This exam is intended for database professionals who build and implement databases across an organization while ensuring high levels of data availability. They perform tasks including creating database files, creating data types and tables; planning, creating, and optimizing indexes; implementing data integrity; implementing views, stored procedures, and functions; and managing transactions and locks.

    Exam 70-465: Designing Database Solutions for Microsoft SQL Server 2012
    This exam is intended for database professionals who design and build database solutions in an organization. They are responsible for the creation of plans and designs for database structure, storage, objects, and servers.

    Exam 70-466: Implementing Data Models and Reports with Microsoft SQL Server 2012
    The primary audience for this exam is BI developers. They are most likely to focus on hands-on work creating the BI solution, including implementing multi-dimensional data models, implementing and maintaining OLAP cubes, and creating information displays used in business decision making.

    Exam 70-467: Designing Business Intelligence Solutions with Microsoft SQL Server 2012
    The primary audience for this exam is the BI architect. BI architects are responsible for the overall design of the BI infrastructure, including how it relates to other data systems in use.

    Looking at Rajesh's passion, I am motivated too! I may want to start attempting the exams in the near future.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Oracle Database Machine and Exadata Storage Server

    - by jean-marc.gaudron(at)oracle.com
    Master Note for Oracle Database Machine and Exadata Storage Server (Doc ID 1187674.1)

    This Master Note is intended to provide an index and references to the most frequently used My Oracle Support Notes with respect to Oracle Exadata and Oracle Database Machine environments. This Master Note is subdivided into categories to allow for easy access and reference to notes that are applicable to your area of interest. This includes the following categories:

    • Database Machine and Exadata Storage Server Concepts and Overview
    • Database Machine and Exadata Storage Server Configuration and Administration
    • Database Machine and Exadata Storage Server Troubleshooting and Debugging
    • Database Machine and Exadata Storage Server Best Practices
    • Database Machine and Exadata Storage Server Patching
    • Database Machine and Exadata Storage Server Documentation and References
    • Database Machine and Exadata Storage Server Known Problems
    • ASM and RAC Documentation
    • Using My Oracle Support Effectively

    Read the article

  • Windows Server 2008 - one MAC Address, assign multiple external IP's to VirtualBoxes running as guests on host

    - by Sise
    Couldn't find any help on Google or here. The scenario: Windows Server 2008 Std x64 on an i7-975, 12 GB RAM. The server is running in a data centre. One hardware NIC (RealTek PCIe GBE), so one MAC address. The data centre provides us 4 static external IPs. The first is assigned to the host by default, of course. I have ordered all 4 IPs, but the data centre can assign the available IPs to the physical MAC address of the given NIC only. This means one NIC, one MAC address, 4 IPs. Everything works fine so far.

    Now, what I would like to have: VirtualBox installed with 1-3 guests running, each with its own external IP assigned. Each of them should be a standalone Win Server 2008. It looks like the easiest way would be to put the guests into a virtual subnet and route all data coming to the 2nd through 4th external IPs through to these guests using their subnet IPs. I have been through the VirtualBox User Manual regarding networking.

    What's not working: I can't use bridged networking on its own, because the IPs are assigned to the one MAC address only. I can't use NAT networking because it does not allow access from outside or from the host to the guest, and I do not want to use port forwarding. Host-only networking by itself would not allow internet access; by sharing the default internet connection of the host, internet access is granted from the guest to the outside, but not from outside or the host to the guest. Internal networking is not really an option here.

    What I have tried is to create an additional MS Loopback adapter for a routed subnet where the VBox guests live, the idea being to NAT the internet connection to the loopback 'subnet'. But I can't ping the gateway from the guests. Using the route command in the command shell or RRAS (static route, NAT) I didn't get there either. Solutions like the following do work for the one way, but not for the way back:

    "For your situation, it might be best to use the Host-Only adapter for ICS. Go to the preferences of VB itself and select network. There you can change the configuration for the interface. Set the IP address to 192.168.0.1, netmask 255.255.255.0. Disable the DHCP server if it isn't already and that's it. Now the Guest should get an IP from Windows itself and be able to get onto the internet, while you can also access the Host."

    Slowly I'm getting pretty stuck with this topic. There is a possibility I've just overlooked something, or just didn't get it by trying, especially using RRAS, but it's kinda hard to find useful howtos on the web. Thanks in advance! Best regards, Simon

    Read the article

  • Windows Server 2003 seems to pick the 'outgoing' IP address at random from all the ones configured in IIS, how can I make it just use one?

    - by Ryan
    We have multiple sites in IIS with different IP addresses. This is fine; we want different IPs to all go to this server and use the proper site. However, I discovered an issue: when the server makes an outgoing connection, I cannot predict which IP it will use. I had to have one client add ALL the IPs to their firewall so that a certain service could communicate with their server. Well, now the time has come to add another IP/site to IIS, but I had told them they would not need to add any more IPs. So the question is, how can I make Windows Server 2003 use only ONE specific IP for outgoing calls instead of it being unpredictable? If this is not a good enough description: when I was RDPed into the server and opened IE and went to 'what is my IP', the answer was sometimes different, which is how I discovered why the one client's firewall was suddenly refusing the connections. How can I just make outgoing calls originate from a static IP yet still allow multiple IPs pointing to different sites in IIS?
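
    A hedged note, not from the original question: Server 2003 offers no supported per-address switch for this, but later Windows versions (Server 2008 R2 and up) added a skipassource flag when adding an address, which keeps that address from being chosen as an outgoing source. A sketch of how that looks there, with a hypothetical interface name and a documentation-range address:

        rem Server 2008 R2 or later only; not available on Server 2003.
        rem Adds 203.0.113.25 without letting it be used as an outgoing source address.
        netsh interface ipv4 add address "Local Area Connection" 203.0.113.25 255.255.255.0 skipassource=true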

    Read the article

  • How can I keep track of SQL Server updates?

    - by Adrian Grigore
    Hi, if I am not mistaken, SQL Server cannot be automatically updated via the regular Windows Update routine. Instead, there are cumulative updates that need to be installed by hand. I assume this is done for security and stability reasons. Is this correct? If so, how can I keep track of new updates without regularly reading SQL Server related blogs? Is there any low-volume newsletter I can subscribe to (ideally only announcing critical updates)?
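
    Whatever the notification channel, it helps to know which build is currently installed so that new cumulative-update announcements can be compared against it. A minimal sketch using standard SERVERPROPERTY calls:

        -- Shows the installed version and servicing level of the running instance.
        SELECT
            SERVERPROPERTY('ProductVersion') AS ProductVersion,  -- build number
            SERVERPROPERTY('ProductLevel')   AS ProductLevel,    -- e.g. RTM, SP1
            SERVERPROPERTY('Edition')        AS Edition;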

    Read the article

  • Executing a Batch file from remote Ax client on an AOS server

    - by Anisha
    Is it possible to execute a batch file on an AOS server from a remote AX client? The answer is yes, provided you have the necessary permission for this execution on the server. First, create a batch file on your AOS server, something like the one below for creating a directory on the server. Insert a command like this in a .BAT file and place it anywhere on the server:

        Mkdir "c:\test"

    Then copy the following code into a server static method of your class and call it from a button click on an AX form. Execute this button click from a remote AX client and see the result. This should execute the batch file on the server and create a directory called 'test' in the root directory of the server.

        server static void AOS_batch_file_create()
        {
            boolean b;
            System.Diagnostics.Process process;
            System.Diagnostics.ProcessStartInfo processStartInfo;
            ;

            b = Global::isRunningOnServer();
            infolog.add(0, int2str(b));

            new InteropPermission(InteropKind::ClrInterop).assert();

            process = new System.Diagnostics.Process();
            processStartInfo = new System.Diagnostics.ProcessStartInfo();
            processStartInfo.set_FileName("C:\\create_dir.bat"); // batch file path on the AOS server
            process.set_StartInfo(processStartInfo);
            process.Start();
            //process.Refresh();
            //process.Close();
            //process.WaitForExit();

            info("Finished");
        }
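
    For reference, the batch file itself (saved as C:\create_dir.bat to match the set_FileName call above) needs nothing more than the one command from the post:

        @echo off
        rem create_dir.bat - executed on the AOS server by the X++ method above
        mkdir "c:\test"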

    Read the article

  • Multiple subnets on isc-dhcp-server using ddns with bind9

    - by legioxi
    On my network I have two subnets:

    10.100.1.0/24 - Wired/wireless
    10.100.7.0/24 - VPN

    Both subnets are served by isc-dhcp-server running on a Debian VM. This same VM runs bind9 for my DNS. ISC-DHCP-SERVER is configured to use DDNS and update BIND9 with hosts/IPs. Everything runs great until a device drops off the wired/wireless network and pops onto the VPN. When connecting on the VPN, a DHCP lease is handed out on the new subnet but DDNS does not update BIND9. Since the device has A/TXT/PTR records, it appears ISC-DHCP-SERVER won't switch them to the new IP. The logs show:

    Connect to wireless:

        Nov 6 20:55:13 core-server named[2417]: client 127.0.0.1#57697: updating zone 'internal.mydomain.com/IN': adding an RR at 'demo-iphone.internal.mydomain.com' A
        Nov 6 20:55:13 core-server named[2417]: client 127.0.0.1#57697: updating zone 'internal.mydomain.com/IN': adding an RR at 'demo-iphone.internal.mydomain.com' TXT
        Nov 6 20:55:13 core-server dhcpd: DHCPACK on 10.100.1.160 to FF:FF:FF:FF:FF:FF (demo-iphone) via eth0
        Nov 6 20:55:13 core-server dhcpd: Added new forward map from demo-iphone.internal.mydomain.com to 10.100.1.160
        Nov 6 20:55:13 core-server dhcpd: Added reverse map from 160.49.21.172.in-addr.arpa. to demo-iphone.internal.mydomain.com

    Switch to VPN:

        Nov 6 20:56:34 core-server dhcpd: DHCPOFFER on 10.100.7.101 to BB:BB:BB:BB:BB:BB (demo-iphone) via 10.100.7.0
        Nov 6 20:56:34 core-server named[2417]: client 127.0.0.1#57697: updating zone 'internal.mydomain.com/IN': update unsuccessful: demo-iphone.internal.mydomain.com: 'name not in use' prerequisite not satisfied (YXDOMAIN)
        Nov 6 20:56:34 core-server dhcpd: DHCPREQUEST for 10.100.7.101 (10.100.1.2) from BB:BB:BB:BB:BB:BB (demo-iphone) via 10.100.7.0
        Nov 6 20:56:34 core-server dhcpd: DHCPACK on 10.100.7.101 to BB:BB:BB:BB:BB:BB (demo-iphone) via 10.100.7.0
        Nov 6 20:56:34 core-server named[2417]: client 127.0.0.1#57697: updating zone 'internal.mydomain.com/IN': update unsuccessful: demo-iphone.internal.mydomain.com/TXT: 'RRset exists (value dependent)' prerequisite not satisfied (NXRRSET)
        Nov 6 20:56:34 core-server dhcpd: Forward map from demo-iphone.internal.mydomain.com to 10.100.7.101 FAILED: Has an address record but no DHCID, not mine.

    One thing to note is that the MAC of the device when connecting via VPN is the MAC of my Cisco ASA5512X and not the actual device. The ASA is relaying the DHCP request from the VPN client to the VM running ISC-DHCP-SERVER. Is there a way to get DDNS working in this scenario?
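
    Not part of the original question, but one avenue worth noting: the "not mine" message is dhcpd's DHCID conflict check, and isc-dhcp-server 4.x exposes a global option to relax it. A hedged configuration sketch; verify against the dhcpd.conf man page for the installed version before relying on it:

        # dhcpd.conf (global scope) - sketch only
        ddns-update-style interim;
        ddns-updates on;
        # Relaxes the DHCID "is this record mine?" check that produced
        # "Has an address record but no DHCID, not mine." Available in
        # isc-dhcp-server 4.x; check the local man page.
        update-conflict-detection false;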

    Read the article

  • Import and Export Data from SQL Server 2005 to Excel Sheet

    - by SAMIR BHOGAYTA
    For uploading data from an Excel sheet to SQL Server and vice versa, we need to create a linked server in SQL Server.

    Example linked server creation (before executing the command below, the Excel sheet should already exist at the specified path and should contain the column names):

        EXEC sp_addlinkedserver 'ExcelSource2', 'Jet 4.0', 'Microsoft.Jet.OLEDB.4.0', 'C:\Srinivas\Vdirectory\Testing\Marks.xls', NULL, 'Excel 5.0'

    Once you have executed the above query, it will create a linked server in SQL Server 2005. The following query sends data from the Excel sheet to SQL Server 2005:

        INSERT INTO emp SELECT * FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0', 'Excel 8.0;Database=C:\text.xls', 'SELECT * FROM [sheet1$]')

    The following query sends data from SQL Server 2005 to the Excel sheet:

        INSERT INTO OPENROWSET('Microsoft.Jet.OLEDB.4.0', 'Excel 8.0;Database=c:\text.xls;', 'SELECT * FROM [sheet1$]') SELECT * FROM emp
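
    One hedged addition not mentioned in the post: on a default SQL Server 2005 install, ad hoc OPENROWSET access is disabled, so the queries above may fail until it is switched on. A sketch of the usual enabling step:

        -- Enable ad hoc OPENROWSET/OPENDATASOURCE queries (off by default in SQL Server 2005).
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'Ad Hoc Distributed Queries', 1;
        RECONFIGURE;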

    Read the article

  • Oracle’s Sun Server X4-8 with Built-in Elastic Computing

    - by kgee
    We are excited to announce the release of Oracle's new 8-socket server, Sun Server X4-8. It's the most flexible 8-socket x86 server Oracle has ever designed, and also the most powerful. Not only does it use the fastest Intel® Xeon® E7 v2 processors, but its memory, I/O and storage subsystems are also all designed for maximum performance and throughput. Like its predecessor, the Sun Server X4-8 uses a "glueless" design that allows for maximum performance for Oracle Database, while also reducing power consumption and improving reliability.

    The specs are pretty impressive. Sun Server X4-8 supports 120 cores (or 240 threads), 6 TB memory, 9.6 TB HDD capacity or 3.2 TB SSD capacity, contains 16 PCIe Gen 3 I/O expansion slots, and allows for up to 6.4 TB of Sun Flash Accelerator F80 PCIe Cards. The Sun Server X4-8 is also the most dense x86 server with its 5U chassis, allowing 60% higher rack-level core and DIMM slot density than the competition.

    There has been a lot of innovation in Oracle's x86 product line, but the latest and most significant is a capability called elastic computing. This new capability is built into each Sun Server X4-8. Elastic computing starts with the Intel processor. While Intel provides a wide range of processors, each with a fixed combination of core count, operational frequency, and power consumption, customers have been forced to make tradeoffs when they select a particular processor. They have had to make educated guesses on which particular processor (core count/frequency/cache size) will be best suited for the workload they intend to execute on the server.

    Oracle and Intel worked jointly to define a new processor, the Intel Xeon E7-8895 v2 for the Sun Server X4-8, that has unique characteristics and effectively combines the capabilities of three different Xeon processors into a single processor. Oracle system design engineers worked closely with Oracle's operating system development teams to achieve the ability to vary the core count and operating frequency of the Xeon E7-8895 v2 processor over time without the need for a system-level reboot. Along with the new processor, enhancements have been made to the system BIOS, Oracle Solaris, and Oracle Linux, which allow the processors in the system to dynamically clock up to faster speeds as cores are disabled and to reach higher maximum turbo frequencies for the remaining active cores.

    One customer, a stock market trading company, will take advantage of the elastic computing capability of Sun Server X4-8 by repurposing servers between daytime stock trading activity and nighttime stock portfolio processing, daily, to achieve maximum performance of each workload.

    To learn more about Sun Server X4-8, you can find more details including the data sheet and white papers here.

    Josh Rosen is a Principal Product Manager for Oracle's x86 servers, focusing on Oracle's operating systems and software. He previously spent more than a decade as a developer and architect of system management software. Josh has worked on system management for many of Oracle's hardware products, ranging from the earliest blade systems to the latest Oracle x86 servers.

    Read the article

  • Windows 2008 R2 Terminal Server - RemoteApp - Option "Start in" available?

    - by Frank
    We have set up a Windows 2008 R2 Terminal Server. On this server, we want to run an application which needs a different "Start in" path than the path of the application itself. In a shortcut, this is no problem (for example, the "Target" is C:\app\bin\application.exe and the "Start in:" is C:\app\users\user1). But I have no option for "Start in" when I configure the RemoteApp, so the application fails to start. Is there any possibility? Thank you in advance, Frank
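
    A workaround often suggested for this situation (an addition, not from the original post, and untested here) is to publish a small wrapper script as the RemoteApp instead of the executable, letting the script set the working directory first. A sketch using the paths from the question:

        @echo off
        rem Wrapper published as the RemoteApp; sets the working directory first.
        cd /d C:\app\users\user1
        start "" C:\app\bin\application.exe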

    Read the article

  • How do too many requests make a server crash?

    - by eSKay
    I am wondering why websites crash at all. If a server has too many requests, it could queue them up in a waiting list and serve each one when all the earlier requests have been served. That means every request for the website would eventually be taken care of, although it might take more time than expected. So how do websites crash due to server overload?

    Read the article

  • 64-bit Archives Needed

    - by user9154181
    A little over a year ago, we received a question from someone who was trying to build software on Solaris. He was getting errors from the ar command when creating an archive. At that time, the ar command on Solaris was a 32-bit command. There was more than 2GB of data, and the ar command was hitting the file size limit for a 32-bit process that doesn't use the largefile APIs. Even in 2011, 2GB is a very large amount of code, so we had not heard this one before. Most of our toolchain was extended to handle 64-bit sized data back in the 1990's, but archives were not changed, presumably because there was no perceived need for it. Since then of course, programs have continued to get larger, and in 2010, the time had finally come to investigate the issue and find a way to provide for larger archives. As part of that process, I had to do a deep dive into the archive format, and also do some Unix archeology. I'm going to record what I learned here, to document what Solaris does, and in the hope that it might help someone else trying to solve the same problem for their platform.

    Archive Format Details

    Archives are hardly cutting edge technology. They are still used of course, but their basic form hasn't changed in decades. Other than to fix a bug, which is rare, we don't tend to touch that code much. The archive file format is described in /usr/include/ar.h, and I won't repeat the details here. Instead, here is a rough overview of the archive file format, implemented by System V Release 4 (SVR4) Unix systems such as Solaris:

    - Every archive starts with a "magic number". This is a sequence of 8 characters: "!<arch>\n".
    - The magic number is followed by 1 or more members. A member starts with a fixed header, defined by the ar_hdr structure in /usr/include/ar.h. Immediately following the header comes the data for the member. Members must be padded at the end with newline characters so that they have even length.

    The requirement to pad members to an even length is a dead giveaway as to the age of the archive format. It tells you that this format dates from the 1970's, and more specifically from the era of 16-bit systems such as the PDP-11 that Unix was originally developed on. A 32-bit system would have required 4 bytes, and 64-bit systems such as we use today would probably have required 8 bytes. 2 byte alignment is a poor choice for ELF object archive members. 32-bit objects require 4 byte alignment, and 64-bit objects require 64-bit alignment. The link-editor uses mmap() to process archives, and if the members have the wrong alignment, we have to slide (copy) them to the correct alignment before we can access the ELF data structures inside. The archive format requires 2 byte padding, but it doesn't prohibit more. The Solaris ar command takes advantage of this, and pads ELF object members to 8 byte boundaries. Anything else is padded to 2 as required by the format.

    The archive header (ar_hdr) represents all numeric values using an ASCII text representation rather than as binary integers. This means that an archive that contains only text members can be viewed using tools such as cat, more, or a text editor. The original designers of this format clearly thought that archives would be used for many file types, and not just for objects. Things didn't turn out that way of course: nearly all archives contain relocatable objects for a single operating system and machine, and are used primarily as input to the link-editor (ld).
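
    To make the overview above concrete, here is a sketch of the member header as it is commonly declared for the SVR4 format; field widths follow the traditional ar.h, but consult /usr/include/ar.h on Solaris for the authoritative definition:

        /* Sketch of the SVR4 archive layout; all header fields are ASCII text. */
        #define ARMAG  "!<arch>\n"   /* archive magic number, 8 characters */
        #define ARFMAG "`\n"         /* header trailer string */

        struct ar_hdr {
            char ar_name[16];        /* member name */
            char ar_date[12];        /* modification time, decimal seconds */
            char ar_uid[6];          /* user id, decimal */
            char ar_gid[6];          /* group id, decimal */
            char ar_mode[8];         /* file mode, octal */
            char ar_size[10];        /* member size in bytes, decimal */
            char ar_fmag[2];         /* trailer, ARFMAG */
        };
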
    Archives can have special members that are created by the ar command rather than being supplied by the user. These special members are all distinguished by having a name that starts with the slash (/) character. This is an unambiguous marker that says that the user could not have supplied it. The reason for this is that regular archive members are given the plain name of the file that was inserted to create them, and any path components are stripped off. Slash is the delimiter character used by Unix to separate path components, and as such cannot occur within a plain file name. The ar command hides the special members from you when you list the contents of an archive, so most users don't know that they exist. There are only two possible special members: a symbol table that maps ELF symbols to the object archive member that provides them, and a string table used to hold member names that exceed 15 characters. The '/' convention for tagging special members provides room for adding more such members should the need arise. As I will discuss below, we took advantage of this fact to add an alternate 64-bit symbol table special member which is used in archives that are larger than 4GB.

    When an archive contains ELF object members, the ar command builds a special archive member known as the symbol table that maps all ELF symbols in the objects to the archive member that provides them. The link-editor uses this symbol table to determine which symbols are provided by the objects in that archive. If an archive has a symbol table, it will always be the first member in the archive, immediately following the magic number. Unlike member headers, symbol tables do use binary integers to represent offsets. These integers are always stored in big-endian format, even on a little endian host such as x86.

    The archive header (ar_hdr) provides 15 characters for representing the member name. If any member has a name that is longer than this, then the real name is written into a special archive member called the string table, and the member's name field instead contains a slash (/) character followed by a decimal representation of the offset of the real name within the string table. The string table is required to precede all normal archive members, so it will be the second member if the archive contains a symbol table, and the first member otherwise.

    The archive format is not designed to make finding a given member easy. Such operations move through the archive from front to back examining each member in turn, and run in O(n) time. This would be bad if archives were commonly used in that manner, but in general, they are not. Typically, the ar command is used to build a new archive from scratch, inserting all the objects in one operation, and then the link-editor accesses the members in the archive in constant time by using the offsets provided by the symbol table. Both of these operations are reasonably efficient. However, listing the contents of a large archive with the ar command can be rather slow.

    Factors That Limit Solaris Archive Size

    As is often the case, there was more than one limiting factor preventing Solaris archives from growing beyond the 32-bit limits of 2GB (32-bit signed) and 4GB (32-bit unsigned). These limits are listed in the order they are hit as archive size grows, so the earlier ones mask those that follow.

    1. The original Solaris archive file format can handle sizes up to 4GB without issue. However, the ar command was delivered as a 32-bit executable that did not use the largefile APIs.
    As such, the ar command itself could not create a file larger than 2GB. One can solve this by building ar with the largefile APIs, which would allow it to reach 4GB, but a simpler and better answer is to deliver a 64-bit ar, which has the ability to scale well past 4GB.

    2. Symbol table offsets are stored as 32-bit big-endian binary integers, which limits the maximum archive size to 4GB. To get around this limit requires a different symbol table format, or an extension mechanism to the current one, similar in nature to the way member names longer than 15 characters are handled in member headers.

    3. The size field in the archive member header (ar_hdr) is an ASCII string capable of representing a 32-bit unsigned value. This places a 4GB size limit on the size of any individual member in an archive.

    In considering format extensions to get past these limits, it is important to remember that very few archives will require the ability to scale past 4GB for many years. The old format, while no beauty, continues to be sufficient for its purpose. This argues for a backward compatible fix that allows newer versions of Solaris to produce archives that are compatible with older versions of the system unless the size of the archive exceeds 4GB.

    Archive Format Differences Among Unix Variants

    While considering how to extend Solaris archives to scale to 64-bits, I wanted to know how similar archives from other Unix systems are to those produced by Solaris, and whether they had already solved the 64-bit issue. I've successfully moved archives between different Unix systems before with good luck, so I knew that there was some commonality. If it turned out that there was already a viable de facto standard for 64-bit archives, it would obviously be better to adopt that rather than invent something new.

    The archive file format is not formally standardized. However, the ar command and archive format were part of the original Unix from Bell Labs. Other systems started with that format, extending it in various, often incompatible, ways, but usually with the same common shared core. Most of these systems use the same magic number to identify their archives, despite the fact that their archives are not always fully compatible with each other. It is often true that archives can be copied between different Unix variants, and if the member names are short enough, the ar command from one system can often read archives produced on another.

    In practice, it is rare to find an archive containing anything other than objects for a single operating system and machine type. Such an archive is only of use on the type of system that created it, and is only used on that system. This is probably why cross platform compatibility of archives between Unix variants has never been an issue. Otherwise, the use of the same magic number in archives with incompatible formats would be a problem.

    I was able to find information for a number of Unix variants, described below. These can be divided roughly into three tribes: SVR4 Unix, BSD Unix, and IBM AIX. Solaris is a SVR4 Unix, and its archives are completely compatible with those from the other members of that group (GNU/Linux, HP-UX, and SGI IRIX).

    AIX

    AIX is an exception to the rule that Unix archive formats are all based on the original Bell Labs Unix format. It appears that AIX supports 2 formats (small and big), both of which differ in fundamental ways from other Unix systems:

    - These formats use a different magic number than the standard one used by Solaris and other Unix variants.
    - They include support for removing archive members from a file without reallocating the file, marking dead areas as unused, and reusing them when new archive items are inserted.
    - They have a special table of contents member (File Member Header) which lets you find out everything that's in the archive without having to actually traverse the entire file. Their symbol table members are quite similar to those from other systems though.
    - Their member headers are doubly linked, containing offsets to both the previous and next members.

    Of the Unix systems described here, AIX has the only format I saw that will have reasonable insert/delete performance for really large archives. Everyone else has O(n) performance, and is going to be slow to use with large archives.

    BSD

    BSD has gone through 4 versions of archive format, which are described in their manpage. They use the same member header as SVR4, but their symbol table format is different, and their scheme for long member names puts the name directly after the member header rather than into a string table.

    GNU/Linux

    The GNU toolchain uses the SVR4 format, and is compatible with Solaris.

    HP-UX

    HP-UX seems to follow the SVR4 model, and is compatible with Solaris.

    IRIX

    IRIX has 32 and 64-bit archives. The 32-bit format is the standard SVR4 format, and is compatible with Solaris. The 64-bit format is the same, except that the symbol table uses 64-bit integers. IRIX assumes that an archive contains objects of a single ELFCLASS/MACHINE, and any archive containing ELFCLASS64 objects receives a 64-bit symbol table. Although they only use it for 64-bit objects, nothing in the archive format limits it to ELFCLASS64. It would be perfectly valid to produce a 64-bit symbol table in an archive containing 32-bit objects, text files, or anything else.

    Tru64 Unix (Digital/Compaq/HP)

    Tru64 Unix uses a format much like ours, but their symbol table is a hash table, making specific symbol lookup much faster. The Solaris link-editor uses archives by examining the entire symbol table looking for unsatisfied symbols for the link, and not by looking up individual symbols, so there would be no benefit to Solaris from such a hash table. The Tru64 ld must use a different approach in which the hash table pays off for them.

    Widening the existing SVR4 archive symbol tables rather than inventing something new is the simplest path forward. There is ample precedent for this approach in the ELF world. When ELF was extended to support 64-bit objects, the approach was largely to take the existing data structures, and define 64-bit versions of them. We called the old set ELF32, and the new set ELF64. My guess is that there was no need to widen the archive format at that time, but had there been, it seems obvious that this is how it would have been done.

    The Implementation of 64-bit Solaris Archives

    As mentioned earlier, there was no desire to improve the fundamental nature of archives. They have always had O(n) insert/delete behavior, and for the most part it hasn't mattered. AIX made efforts to improve this, but those efforts did not find widespread adoption. For the purposes of link-editing, which is essentially the only thing that archives are used for, the existing format is adequate, and issues of backward compatibility trump the desire to do something technically better. Widening the existing symbol table format to 64-bits is therefore the obvious way to proceed. For Solaris 11, I implemented that, and I also updated the ar command so that a 64-bit version is run by default.
    This eliminates the 2 most significant limits to archive size, leaving only the limit on an individual archive member. We only generate a 64-bit symbol table if the archive exceeds 4GB, or when the new -S option to the ar command is used. This maximizes backward compatibility, as an archive produced by Solaris 11 is highly likely to be less than 4GB in size, and will therefore employ the same format understood by older versions of the system. The main reason for the existence of the -S option is to allow us to test the 64-bit format without having to construct huge archives to do so. I don't believe it will find much use outside of that.

    Other than the new ability to create and use extremely large archives, this change is largely invisible to the end user. When reading an archive, the ar command will transparently accept either form of symbol table. Similarly, the ELF library (libelf) has been updated to understand either format. Users of libelf (such as the link-editor ld) do not need to be modified to use the new format, because these changes are encapsulated behind the existing functions provided by libelf.

    As mentioned above, this work did not lift the limit on the maximum size of an individual archive member. That limit remains fixed at 4GB for now. This is not because we think objects will never get that large, for the history of computing says otherwise. Rather, this is based on an estimation that single relocatable objects of that size will not appear for a decade or two. A lot can change in that time, and it is better not to overengineer things by writing code that will sit and rot for years without being used. It is not too soon however to have a plan for that eventuality. When the time comes when this limit needs to be lifted, I believe that there is a simple solution that is consistent with the existing format. The archive member header size field is an ASCII string, like the name, and as such, the overflow scheme used for long names can also be used to handle the size. The size string would be placed into the archive string table, and its offset in the string table would then be written into the archive header size field using the same format "/ddd" used for overflowed names.

    Read the article

  • Unable to sync time using `ntpdate`, error: "no server suitable for synchronization found"

    - by William Ting
    My ntp.conf file:

        user@pc[0][07:37:40]:/etc$ cat /etc/ntp.conf
        idriftfile /var/lib/ntp/ntp.drift
        server 0.pool.ntp.org
        server 1.pool.ntp.org
        server 2.pool.ntp.org
        server pool.ntp.org

    Command output:

        user@pc[0][07:37:24]:/etc$ sudo ntpdate -dv pool.ntp.org
        18 Jun 07:37:35 ntpdate[10737]: ntpdate [email protected] Tue Apr 19 07:15:05 UTC 2011 (1)
        Looking for host pool.ntp.org and service ntp
        host found : conquest.kjsl.com
        transmit(198.137.202.16)
        transmit(216.45.57.38)
        transmit(64.6.144.6)
        transmit(198.137.202.16)
        transmit(216.45.57.38)
        transmit(64.6.144.6)
        transmit(198.137.202.16)
        transmit(216.45.57.38)
        transmit(64.6.144.6)
        transmit(198.137.202.16)
        transmit(216.45.57.38)
        transmit(64.6.144.6)
        transmit(198.137.202.16)
        transmit(216.45.57.38)
        transmit(64.6.144.6)
        198.137.202.16: Server dropped: no data
        216.45.57.38: Server dropped: no data
        64.6.144.6: Server dropped: no data
        server 198.137.202.16, port 123
        stratum 0, precision 0, leap 00, trust 000
        refid [198.137.202.16], delay 0.00000, dispersion 64.00000
        transmitted 4, in filter 4
        reference time: 00000000.00000000 Thu, Feb 7 2036 0:28:16.000
        originate timestamp: 00000000.00000000 Thu, Feb 7 2036 0:28:16.000
        transmit timestamp: d1a71a93.1f16c1e3 Sat, Jun 18 2011 7:37:39.121
        filter delay: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
        filter offset: 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000
        delay 0.00000, dispersion 64.00000
        offset 0.000000
        server 216.45.57.38, port 123
        stratum 0, precision 0, leap 00, trust 000
        refid [216.45.57.38], delay 0.00000, dispersion 64.00000
        transmitted 4, in filter 4
        reference time: 00000000.00000000 Thu, Feb 7 2036 0:28:16.000
        originate timestamp: 00000000.00000000 Thu, Feb 7 2036 0:28:16.000
        transmit timestamp: d1a71a93.524a05dd Sat, Jun 18 2011 7:37:39.321
        filter delay: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
        filter offset: 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000
        delay 0.00000, dispersion 64.00000
        offset 0.000000
        server 64.6.144.6, port 123
        stratum 0, precision 0, leap 00, trust 000
        refid [64.6.144.6], delay 0.00000, dispersion 64.00000
        transmitted 4, in filter 4
        reference time: 00000000.00000000 Thu, Feb 7 2036 0:28:16.000
        transmitted 4, in filter 4
        reference time: 00000000.00000000 Thu, Feb 7 2036 0:28:16.000
        originate timestamp: 00000000.00000000 Thu, Feb 7 2036 0:28:16.000
        transmit timestamp: d1a71a93.524a05dd Sat, Jun 18 2011 7:37:39.321
        filter delay: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
        filter offset: 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000
        delay 0.00000, dispersion 64.00000
        offset 0.000000
        server 64.6.144.6, port 123
        stratum 0, precision 0, leap 00, trust 000
        refid [64.6.144.6], delay 0.00000, dispersion 64.00000
        transmitted 4, in filter 4
        reference time: 00000000.00000000 Thu, Feb 7 2036 0:28:16.000
        originate timestamp: 00000000.00000000 Thu, Feb 7 2036 0:28:16.000
        transmit timestamp: d1a71a93.857c6fbd Sat, Jun 18 2011 7:37:39.521
        filter delay: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
        filter offset: 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000
        delay 0.00000, dispersion 64.00000
        offset 0.000000
        18 Jun 07:37:40 ntpdate[10737]: no server suitable for synchronization found

    Read the article

  • What do you use for a RAM disk on Windows Server?

    - by thelsdj
    We currently use AR Soft RAM Disk on some Windows 2003 servers for storing short-lived temporary files. Looking forward to a move to 64-bit Windows Server 2008, I'm wondering what options there are for a RAM disk, since it appears AR Soft RAM Disk was discontinued in 2005. I'm not looking for any physical disk backing, just a pure RAM disk that appears like a normal drive to Windows. Does anyone have any experience with RAM disks on Windows Server 2008, especially 64-bit?

    Read the article

  • Best Practices for "temporary" accounts in a Windows server environment?

    - by Millman
    Are there any best practices for "temporary worker" accounts in a Windows server environment? We have a couple of contractors joining the organization temporarily. They only need access to a few folders. Aside from joining them to the "Domain Guests" group and granting them access only to the folders specified, are there any other issues to be aware of? We are in a Windows Server 2003 domain environment.
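
    One practice worth considering (an addition, not from the original question): set an expiration date on each contractor account so it disables itself when the engagement ends. A sketch using the built-in net user command; the account name and date are hypothetical, and the date format follows the system locale:

        rem Create a domain account that expires automatically when the contract ends.
        rem "*" prompts for the password instead of putting it on the command line.
        net user contractor1 * /add /domain /expires:12/31/2010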

    Read the article
