Search Results

Search found 91480 results on 3660 pages for 'large data in sharepoint list'.

Page 181/3660 | < Previous Page | 177 178 179 180 181 182 183 184 185 186 187 188  | Next Page >

  • How do we read the data from a public Google spreadsheet using Zend_Gdata?

    - by bobo
    I have read the examples here: http://framework.zend.com/manual/en/zend.gdata.spreadsheets.html But those examples assume the spreadsheet to be read needs authentication: $service = Zend_Gdata_Spreadsheets::AUTH_SERVICE_NAME; $client = Zend_Gdata_ClientLogin::getHttpClient($user, $pass, $service); $spreadsheetService = new Zend_Gdata_Spreadsheets($client); $feed = $spreadsheetService->getSpreadsheetFeed(); The spreadsheet I am going to read from is a public one, so I don't really need to provide any authentication right? And what I need to provide is just the url of the spreadsheet. I tried to read the class description here but still have no idea how it can be done: http://framework.zend.com/apidoc/core/Zend_Gdata/Spreadsheets/Zend_Gdata_Spreadsheets.html

    Read the article

  • As-You-Type-Searching with Core Data / NSFetchedResultsController

    - by Snej
    I implemented an as-you-type search (text search on a single attribute) by fetching with performFetch: after each character entered by the user. The performFetch: runs in a background thread to avoid keyboard freezes. But while typing, many useless fetches are started. An NSOperationQueue might be an option, but I wonder if there are other approaches for this quite common search behavior. What's the best practice for noticing when fetching is done and the table view has been updated with the previous fetch, so that a new fetch can be started?

    Read the article

  • Quickly create a large file on a Windows system?

    - by Leigh Riffel
    In the same vein as http://stackoverflow.com/questions/257844/quickly-create-a-large-file-on-a-linux-system I'd like to quickly create a large file on a Windows system. By large I'm thinking 5 GB. The content doesn't matter. A built-in command or short batch file would be preferable, but I'll accept an application if there are no other easy ways.
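
    For reference, a commonly suggested built-in route (my assumptions, not details from the question: an NTFS volume, a writable path such as the example below, and a size given in bytes, so 5 GB is 5368709120) is fsutil, which creates a zero-filled file of the requested size almost instantly:

        fsutil file createnew C:\temp\bigfile.bin 5368709120

    If the file must actually contain written, non-zero data, a short script that writes a buffer in a loop is needed instead.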

    Read the article

  • Can't find my bug: Group flat data in Advanced Datagrid won't work

    - by Werner
    Hi, I've got an ArrayCollection which is properly displayed in this Advanced Datagrid: <mx:AdvancedDataGrid id="drawingDataDG" editable="true" sortableColumns="true" headerWordWrap="true" sortExpertMode="true" rowCount="8" y="10" right="10" left="10" dataProvider="{model.drawingsData}"> <mx:columns> <mx:AdvancedDataGridColumn headerText="Approved in Week" dataField="ApprovedInWeek" editable="false" visible="true" /> <mx:AdvancedDataGridColumn headerText="DRAWING_PK" dataField="DRAWING_PK" editable="false" visible="false" /> <mx:AdvancedDataGridColumn headerText="Drawing No" dataField="DRAWING_NO" editable="false" visible="true"/> <mx:AdvancedDataGridColumn headerText="Drawing Index" dataField="DRAWING_INDEX" editable="false" visible="true"/> </mx:columns> According to this explanation link text I've implemented a GroupingCollection. But it just doesn't work: <mx:AdvancedDataGrid id="drawingDataDG" editable="true" sortableColumns="true" headerWordWrap="true" sortExpertMode="true" rowCount="8" y="10" right="10" left="10" initialize="gc.refresh();"> <mx:dataProvider> <mx:GroupingCollection id="gc" source="{model.drawingsData}"> <mx:Grouping> <mx:GroupingField name="ApprovedInWeek"/> </mx:Grouping> </mx:GroupingCollection> </mx:dataProvider> <mx:columns> <mx:AdvancedDataGridColumn headerText="Approved in Week" dataField="ApprovedInWeek" editable="false" visible="true" /> <mx:AdvancedDataGridColumn headerText="DRAWING_PK" dataField="DRAWING_PK" editable="false" visible="false" /> <mx:AdvancedDataGridColumn headerText="Drawing No" dataField="DRAWING_NO" editable="false" visible="true"/> <mx:AdvancedDataGridColumn headerText="Drawing Index" dataField="DRAWING_INDEX" editable="false" visible="true"/> </mx:columns> </mx:AdvancedDataGrid> Please let me know what additional details you may need. Werner

    Read the article

  • Avoiding Duplicate Data in DB (for use with Rails)

    - by ants
    I have five tables that I am trying to get to work nicely together but may need some help. I have three main tables: accounts, members and roles, with two join tables, account_members and account_member_roles. The accounts and members tables are joined by the account_members (fk account_id and member_id) table. The other 2 tables are the problem (roles and account_member_roles). A member of an account can have more than one role, and I have the account_member_roles (fk account_member_id and role_id) table joining the account_members join table and the roles table. That seems logical, but can you have a relationship with a join table? What I'd like to be able to do when creating an account, for instance, is have @account.save include the roles and update the account_member_roles table neatly ..... but through the account_members join table. I've tried ..... accept_nested_attributes_for :members, :account_member_roles in the account.rb but I get ..... ActiveRecord::HasManyThroughCantAssociateThroughHasManyReflection (Cannot modify association 'Account#account_member_roles' because the source reflection class 'AccountMemberRole' is associated to 'AccountMember' via :has_many.) upon trying to save a record. Any advice on how I should approach this? CIA -ants

    Read the article

  • Recovering deleted data from an Android SD card?

    - by Nish
    I am trying to make an Android application which would try to recover deleted content from the SD card. How feasible is it? I have the following methods in mind: 1) Since the files are not actually deleted, can I access the file system to see files which have been marked to be overwritten? 2) Or will I have to do header/footer file carving? Is that possible from the application layer of Android? I am concerned about files stored on contiguous sectors, not the fragmented ones. I want to do a basic file retrieval. Please let me know of some resources I can use to build this application.

    Read the article

  • How do I serialize a large graph of .NET objects into a SQL Server BLOB without creating a large buffer?

    - by Ian Ringrose
    We have code like: ms = New IO.MemoryStream bin = New System.Runtime.Serialization.Formatters.Binary.BinaryFormatter bin.Serialize(ms, largeGraphOfObjects) dataToSaveToDatabase = ms.ToArray() // put dataToSaveToDatabase in a Sql server BLOB But the memory stream allocates a large buffer from the large object heap that is giving us problems. So how can we stream the data without needing enough free memory to hold the serialized objects? I am looking for a way to get a Stream from SQL Server that can then be passed to bin.Serialize() so as to avoid keeping all the data in my process's memory. Likewise for reading the data back... Some more background: this is part of a complex numerical processing system that processes data in near real time looking for equipment problems etc.; the serialization is done to allow a restart when there is a problem with data quality from a data feed etc. (We store the data feeds and can rerun them after the operator has edited out bad values.) Therefore we serialize the objects a lot more often than we de-serialize them. The objects we are serializing include very large arrays, mostly of doubles, as well as a lot of small “more normal” objects. We are pushing the memory limit on a 32-bit system and making the garbage collector work very hard. (Efforts are being made elsewhere in the system to improve this, e.g. reusing large arrays rather than creating new arrays.) Often the serialization of the state is the last straw that causes an out-of-memory exception; our peak memory usage is while this serialization is being done. I think we get large object heap fragmentation when we de-serialize the objects, and I expect there are also other problems with large object heap fragmentation given the size of the arrays. (This has not yet been investigated, as the person that first looked at this is a numerical processing expert, not a memory management expert.) Our customers use a mix of SQL Server 2000, 2005 and 2008, and we would rather not have different code paths for each version of SQL Server if possible. We can have many active models at a time (in different processes, across many machines), and each model can have many saved states. Hence the saved state is stored in a database BLOB rather than a file. As the speed of saving the state is important, I would rather not serialize the object to a file and then put the file in a BLOB one block at a time. Other related questions I have asked: How to Stream data from/to SQL Server BLOB fields? Is there a SqlFileStream-like class that works with SQL Server 2005?
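
    A minimal sketch of one possible approach, not code from the question: on SQL Server 2005 and later, a varbinary(max) column can be appended to in chunks with UPDATE ... SET col.WRITE(...), so a small write-only Stream can be handed straight to bin.Serialize() and no full in-memory buffer is ever built. The table, column and key names (dbo.ModelState, StateBlob, Id) are hypothetical, and the target row is assumed to already exist with the column initialised to 0x.

        using System;
        using System.Data;
        using System.Data.SqlClient;
        using System.IO;

        // Write-only stream that appends every chunk it receives to a varbinary(max) column.
        public class SqlBlobWriteStream : Stream
        {
            private readonly SqlCommand _append;
            private long _length;

            public SqlBlobWriteStream(SqlConnection connection, int rowId)
            {
                // .WRITE(@chunk, NULL, 0) appends @chunk to the end of the existing value.
                _append = new SqlCommand(
                    "UPDATE dbo.ModelState SET StateBlob.WRITE(@chunk, NULL, 0) WHERE Id = @id",
                    connection);
                _append.Parameters.Add("@chunk", SqlDbType.VarBinary, -1);
                _append.Parameters.Add("@id", SqlDbType.Int).Value = rowId;
            }

            public override void Write(byte[] buffer, int offset, int count)
            {
                byte[] chunk = new byte[count];
                Buffer.BlockCopy(buffer, offset, chunk, 0, count);
                _append.Parameters["@chunk"].Value = chunk;
                _append.ExecuteNonQuery();              // the chunk goes straight to the server
                _length += count;
            }

            public override void Flush() { }
            public override bool CanRead  { get { return false; } }
            public override bool CanSeek  { get { return false; } }
            public override bool CanWrite { get { return true; } }
            public override long Length   { get { return _length; } }
            public override long Position
            {
                get { return _length; }
                set { throw new NotSupportedException(); }
            }
            public override int Read(byte[] buffer, int offset, int count) { throw new NotSupportedException(); }
            public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); }
            public override void SetLength(long value) { throw new NotSupportedException(); }
        }

    BinaryFormatter then serializes directly into it (bin.Serialize(new SqlBlobWriteStream(conn, id), largeGraphOfObjects)), and reading back can wrap SqlDataReader with CommandBehavior.SequentialAccess and GetBytes in a similar Stream. SQL Server 2000 has no varbinary(max)/.WRITE, so it would need the older image/UPDATETEXT path, which is one place the single-code-path goal gets awkward.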

    Read the article

  • How do I stall until a SharePoint List Item is Deleted with SPLongOperation?

    - by ccomet
    I have a workflow which creates a task and deletes it after the task is edited and its useful information acquired. I created a custom edit form for the task, so I have an SPLongOperation that I can use to stall the page. This is necessary because, if I don't stall the page in some fashion, the person will see the task in the task list for the brief moment before the workflow gets to delete it, and that is bad. So some code to stall the page until the task is fully deleted is necessary. I have currently implemented a solution for this, but I am unsatisfied with the approach. It basically boils down to a while loop that calls SPList.GetItemById until it throws an error. Deliberately attempting to cause an error doesn't sit well with me, but I cannot think of a faster method for checking this. I'm looking for alternatives that would preferably work faster, if not as fast, and preferably without relying on catching exceptions. Thank you in advance!
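
    A hedged alternative sketch, to sit inside the existing SPLongOperation block (taskList and taskId stand in for whatever variables the form already has): query for the item by ID instead of calling GetItemById, so deletion is detected by an empty result rather than a thrown exception, and bound the wait so the page cannot hang if the workflow stalls.

        SPQuery query = new SPQuery();
        query.Query = "<Where><Eq><FieldRef Name='ID'/>" +
                      "<Value Type='Counter'>" + taskId + "</Value></Eq></Where>";
        query.ViewFields = "<FieldRef Name='ID'/>";
        query.RowLimit = 1;

        // Poll until the workflow has deleted the task, or give up after roughly 30 seconds.
        for (int attempt = 0; attempt < 60; attempt++)
        {
            if (taskList.GetItems(query).Count == 0)
                break;                                  // the item is gone; safe to redirect
            System.Threading.Thread.Sleep(500);
        }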

    Read the article

  • oData/ADO.NET Data Services using LINQ-to-SQL with a decryption layer

    - by Program.X
    I have written an application using LINQ-to-SQL that submits a web form into a database. I abstract the LINQ-to-SQL away using a Repository pattern. This repository has the basic methods: Get(), Save(), etc. As a development of the project, I needed to encrypt certain fields in the form. This was trivial, as I just added the encryption calls to the Get(), Save() methods in the Repository. Now, I want to put an oData layer over it, to allow RESTful extraction from MS Excel 2010 (when it comes out). I have this working, after a few stumbles on useless error messages, etc. However, obviously, those encrypted fields are still encrypted. My repository pattern would have decrypted these for me. As far as I know, I have to directly bind my oData service to the LINQ-to-SQL context for the schema, etc. to work - unless I enter a whole world of pain (any URLs appreciated). Is there a way I can insert my encryption/decryption layer into the request so decryption is done "on the fly"? I looked at the OnStartProcessingRequest() overload of DataService but this doesn't seem that useful.
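
    One possible shape for this, sketched with hypothetical types (FormDto, FormsDataContext, FormEntries and ExistingCrypto.Decrypt are placeholders, not the poster's real names): rather than binding the service to the LINQ-to-SQL context, expose a small read-only reflection provider whose IQueryable decrypts on the way out. The trade-off is that filtering then happens in memory instead of in SQL, which may or may not be acceptable for the data volumes involved.

        using System;
        using System.Data.Services;
        using System.Data.Services.Common;
        using System.Linq;

        [DataServiceKey("Id")]
        public class FormDto
        {
            public int Id { get; set; }
            public string Ssn { get; set; }      // decrypted copy of the encrypted column
        }

        public class DecryptedContext
        {
            // ADO.NET Data Services exposes each IQueryable property as an entity set.
            public IQueryable<FormDto> Forms
            {
                get
                {
                    var db = new FormsDataContext();
                    return db.FormEntries
                             .AsEnumerable()     // leave SQL so the decryption runs in .NET
                             .Select(e => new FormDto { Id = e.Id, Ssn = ExistingCrypto.Decrypt(e.EncryptedSsn) })
                             .AsQueryable();
                }
            }
        }

        public class FormsService : DataService<DecryptedContext>
        {
            public static void InitializeService(IDataServiceConfiguration config)
            {
                config.SetEntitySetAccessRule("Forms", EntitySetRights.AllRead);
            }
        }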

    Read the article

  • Core Data, sorting one-to-many child objects

    - by Shizam
    So I have a store of parents and children, where a parent has a one-to-many relationship to children (parent.children) and they all have first names. Now, on the initial fetch for parents I can specify a sort descriptor to get them back in order of first name, but how can I request the children in order? If I do a [parent.children allObjects] it just gives them back in a jumble and I'd have to sort after the fact, every time. Thanks, Sam

    Read the article

  • How do you make a Custom Data Generator for the SQL XML DataType?

    - by Keith Sirmons
    Howdy, I am using Visual Studio 2010 and am playing around with the Database Projects. I am creating a DataGenerationPlan to insert data into a simple table, in which one of the column datatypes is XML. Out of the box, the generation plan uses the Regular Expression generator and generates something like this : HGcSv9wa7yM44T9x5oFT4pmBkEmv62lJ7OyAmCnL6yqXC2X.......... I am looking at creating a custom data Generator for this data type and have followed this site for the basics: http://msdn.microsoft.com/en-us/library/aa833244.aspx This example works if I am creating a string datatype and using it for a nvarchar datatype. What do I need to change to hook this Generator to the XML Datatype? Below are my code files. The string property works for nvarchar. The XElement property does not work for the xml datatype, and the RecordXMLDataGenerator is not listed as an option in the Generator column for the generation plan. CustomDataGenerators: using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.Data.Schema.Tools.DataGenerator; using Microsoft.Data.Schema.Extensibility; using Microsoft.Data.Schema; using Microsoft.Data.Schema.Sql; using System.Xml.Linq; namespace CustomDataGenerators { [DatabaseSchemaProviderCompatibility(typeof(SqlDatabaseSchemaProvider))] public class RecordXMLDataGenerator : Generator { private XElement _RecordData; [Output(Description = "Generates string of XML Data for the Record.", Name = "RecordDataString")] public string RecordDataString { get { return _RecordData.ToString(SaveOptions.None); } } [Output(Description = "Generates XML Data for the Record.", Name = "RecordData")] public XElement RecordData { get { return _RecordData; } } protected override void OnGenerateNextValues() { base.OnGenerateNextValues(); XElement element = new XElement("Root", new XElement("Children1", 1), new XElement("Children6", 6) ); _RecordData = element; } } } XML Extensions File: <?xml version="1.0" encoding="utf-8" ?> <extensions assembly="" version="1" xmlns="urn:Microsoft.Data.Schema.Extensions" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:Microsoft.Data.Schema.Extensions Microsoft.Data.Schema.Extensions.xsd"> <extension type="CustomDataGenerators.RecordXMLDataGenerator" assembly="CustomDataGenerators, Version=1.0.0.0, Culture=neutral, PublicKeyToken=xxxxxxxxxxxx" enabled="true"/> </extensions> Table.sql: CREATE TABLE [dbo].[Record] ( id int IDENTITY (1,1) NOT NULL, recordData xml NULL, userId int NULL, test nvarchar(max) NULL, rowver rowversion NULL, CONSTRAINT pk_RecordID PRIMARY KEY (id) )

    Read the article

  • Optimizing processing and management of large Java data arrays

    - by mikera
    I'm writing some pretty CPU-intensive, concurrent numerical code that will process large amounts of data stored in Java arrays (e.g. lots of double[100000]s). Some of the algorithms might run millions of times over several days, so getting maximum steady-state performance is a high priority. In essence, each algorithm is a Java object that has a method API something like: public double[] runMyAlgorithm(double[] inputData); or alternatively a reference could be passed to the array to store the output data: public void runMyAlgorithm(double[] inputData, double[] outputData); Given this requirement, I'm trying to determine the optimal strategy for allocating / managing array space. Frequently the algorithms will need large amounts of temporary storage space. They will also take large arrays as input and create large arrays as output. Among the options I am considering are: Always allocate new arrays as local variables whenever they are needed (e.g. new double[100000]). Probably the simplest approach, but will produce a lot of garbage. Pre-allocate temporary arrays and store them as final fields in the algorithm object - big downside would be that this would mean that only one thread could run the algorithm at any one time. Keep pre-allocated temporary arrays in ThreadLocal storage, so that a thread can use a fixed amount of temporary array space whenever it needs it. ThreadLocal would be required since multiple threads will be running the same algorithm simultaneously. Pass around lots of arrays as parameters (including the temporary arrays for the algorithm to use). Not good since it will make the algorithm API extremely ugly if the caller has to be responsible for providing temporary array space.... Allocate extremely large arrays (e.g. double[10000000]) but also provide the algorithm with offsets into the array so that different threads will use a different area of the array independently. Will obviously require some code to manage the offsets and allocation of the array ranges. Any thoughts on which approach would be best (and why)?

    Read the article

  • DataGridView bound to a Dictionary and updated with a thread

    - by Manjoor
    I have a Dictionary bound to a DataGridView using the following sample code: http://stackoverflow.com/questions/854953/datagridview-bound-to-a-dictionary Please see the above question first. The difference is that I am updating the dictionary from a thread (an event handler of another class). My event handler is as below: static void f_PriceChanged(Objet f, eventData e) { if (prices.ContainsKey(e.ItemText)) prices[e.ItemText] = e.price; else prices.Add(e.ItemText, e.price); } Needless to say, prices is declared at class level. I have modified the button code from the original post as: Button btn = new Button(); btn.Dock = DockStyle.Bottom; btn.Click += delegate { bl.Reset(); }; form.Controls.Add(btn); Internally the Dictionary is updated as expected, but the grid does not update. Clicking on the button generates the exception "Collection was modified; enumeration operation may not execute". What to do?
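
    A sketch of one common fix, reusing the names from the question (whether form, prices and bl are reachable from this handler depends on the poster's actual class layout): marshal the change onto the UI thread so the dictionary is never mutated while the grid's binding is enumerating it, and reset the binding in the same step.

        static void f_PriceChanged(Objet f, eventData e)
        {
            // Queue the update on the UI thread; the worker/event thread returns immediately.
            form.BeginInvoke((MethodInvoker)delegate
            {
                prices[e.ItemText] = e.price;   // the indexer adds or updates in one step
                bl.Reset();                     // tell the binding list to re-read the dictionary
            });
        }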

    Read the article

  • Adding a file of the same name into the same list and unable to update the name of the SPFile

    - by BeraCim
    Hi all: I'm having difficulties adding a file of the same name to the same list and subsequently updating/changing/modifying the name of an SPFile. Basically, this is what I'm trying to do: string fileName = "something"; // obtained from a loop -- loop omitted here. SPFile file = folder.Files.Add(fileName, otherFile.OpenBinary()); The Add method will generate a runtime exception when it finds another file of the same name in the same list/folder. So I thought of changing the file name later in the process: string newGuid = Guid.NewGuid().ToString(); SPFile file = folder.Files.Add(newGuid, otherFile.OpenBinary()); // some other processing... afterwards, rename the file file.name = fileName; file.Item.Update(); A few minutes of googling indicated I need to either move the file around, or update the name field by using ["Name"] instead. I was wondering, are there any better ways to get around this problem? Thanks.
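
    For the rename step, two hedged alternatives that generally work (file, folder and fileName are the question's own variables; SPFile.Name itself has no setter, which is why the assignment above fails):

        // Option 1: update the item's filename field and push the change.
        file.Item["FileLeafRef"] = fileName;    // "FileLeafRef" is the internal name of the "Name" field
        file.Item.Update();

        // Option 2: move the file within the same folder, which renames it and fixes its URL.
        // The second argument overwrites an existing file of that name, so check first if needed.
        file.MoveTo(folder.Url + "/" + fileName, true);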

    Read the article

  • Cocoa Core Data - Efficient Related Entities Counts

    - by Gary
    I am working on my first iPhone application and I've hit a wall. I'm trying to develop a 'statistics' page for a three-entity relationship. My entities are the following: Department - Name, Address, Building, etc.; People - Name, Gender (BOOL), Phone, etc. If I have fetched a specific department, how do I filter those results and only return people that are male (Gender == 0)? If I do NSLog(@"%d", [department.people count]); I get the correct number of people in that department, so I know I'm in the neighborhood. I know I could re-fetch and modify the predicate each time, but with 20+ stats in my app that seems inefficient. Thanks for any advice!

    Read the article

  • What ORM for .NET should I use?

    - by eKek0
    I'm relatively new to .NET and have been using Linq2Sql for almost a year, but it lacks some of the features I'm looking for now. I'm going to start a new project in which I want to use an ORM with the following characteristics: (1) It has to be very productive; I don't want to be dealing with the access layer to save or retrieve objects from or to the database, but it should allow me to easily tweak any object before actually committing it to the database, and it should also allow me to work easily with a changing database schema. (2) It should allow me to extend the objects mapped from the database, for example to add virtual attributes to them (virtual columns to a table). (3) It has to be (at least almost) database agnostic; it should allow me to work with different databases in a transparent way. (4) It should not need much configuration, or should be based on conventions to make it work. (5) It should allow me to work with Linq. So, do you know any ORM that I could use? Thank you for your help. EDIT: I know that an option is to use NHibernate. This appears to be the de facto standard for enterprise-level applications, but it also seems that it is not very productive because of its steep learning curve. On the other hand, I have read in some other post here on SO that it doesn't integrate well with Linq. Is all of that true?

    Read the article

  • Using DTOs and BOs

    - by ryanzec
    One open question for me about DTOs/BOs is when to pass/return the DTOs and when to pass/return the BOs. My gut reaction tells me to always map NHibernate to the DTOs, not BOs, and always pass/return the DTOs. Then, whenever I needed to perform business logic, I would convert my DTO into a BO. The way I would do this is that my BO would have a constructor whose only argument is typed to an interface (one that defines the required fields/properties) that both my DTO and BO implement. Then I would be able to create my BO by passing it the DTO in the constructor (since both implement the same interface, they both have the same properties) and then be able to perform my business logic with that BO. I would then also have a way to convert a BO back to a DTO. However, I have also seen people work only with BOs and keep the DTOs in the background, so that to the user it looks like there are no DTOs. What benefits/downfalls are there with this architecture vs always using BOs? Should I always be passing/returning either DTOs or BOs, or mix and match (it seems like mixing and matching could get confusing)?
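
    A minimal sketch of the constructor-from-interface idea described above, using hypothetical names (IOrderData, OrderDto, OrderBo) rather than anything from a real project:

        public interface IOrderData
        {
            int Id { get; set; }
            decimal Total { get; set; }
        }

        public class OrderDto : IOrderData
        {
            public int Id { get; set; }
            public decimal Total { get; set; }
        }

        public class OrderBo : IOrderData
        {
            public int Id { get; set; }
            public decimal Total { get; set; }

            // Build a BO from any IOrderData, e.g. a DTO freshly mapped by NHibernate.
            public OrderBo(IOrderData source)
            {
                Id = source.Id;
                Total = source.Total;
            }

            // Business logic lives on the BO.
            public bool IsLargeOrder() { return Total > 1000m; }

            // Convert back when the data needs to cross a boundary again.
            public OrderDto ToDto() { return new OrderDto { Id = Id, Total = Total }; }
        }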

    Read the article

  • Wireless traffic stops when downloading large files at high speed: packets lost (Linksys WRT120N router)

    - by Torious
    The problem Note: First I'd like to understand WHY this is happening. Of course, a solution would be nice too. :) When downloading a large file over HTTP at high speed, my wireless traffic basically stops: I can't open webpages and the download itself pauses. It pauses pretty much immediately after starting it; sometimes at 800 KB, sometimes at a few MB. After some time, the download (and other traffic) resumes, but the problem keeps reoccurring during the same download. The problem does not occur when using a wired connection through the same router (Linksys WRT120N). Also note that the connection is not dropped when this happens. It's just that the traffic stops and I can't browse to web pages, etc. (SYN packets are sent but nothing is received, etc.) Inspection with Wireshark shows that the following happens: Server sends data packets which are acknowledged by client. Server sends a packet, but SEQ indicates some packets were lost (6 packets in one occurrence). Server sends a few more packets and client acknowledges these using "selective acknowledgement". Server stops sending data for a while (since the lost packets were not acknowledged, or the router stops forwarding them?). Eventually, server does a "retransmission" and traffic resumes as normal. This all seems normal behavior to me when packet loss occurs. It's the consistent packet loss throughout a large, high-speed download that puzzles me. What might cause this? My own idea is the following: my internet is pretty fast (100 Mbps), so when starting a large-file download, the router buffers the incoming data (since wireless introduces some slight delay / lower speed, in part due to other networks), but the buffer overflows and the router drops packets to regulate traffic (and because it has no choice). But how could that happen? Doesn't the TCP window size limit the amount of data that can go unacknowledged? So how can the router's buffer overflow if there can only be like 64 KB waiting to be acknowledged? Note: I've disabled TCP window scaling and dynamic window size through netsh options, in an attempt to fix this, but it doesn't seem to matter. Also, Wireshark shows a pattern of the server sending 2 packets (of 1514 bytes) and the client sending an ACK, so does that rule out a possible buffer overflow? And a few more subsequent packets are received... I'm at a loss here. Thanks for any insights. Things that are (probably) NOT the cause / that I have experimented with: the browser; various TCP options in Windows 7 (netsh etc.); router settings such as MTU, beacon interval, UPnP, ...

    Read the article

  • How to display changing data in XUL

    - by pavithraCS
    Hi, I'm trying to build a Firefox extension which can get RSS feeds and display them in a popup panel. But I'm not aware of how to display feeds in a panel (I know how to display static text), because the feed is changing all the time. Any help regarding this matter will be appreciated. Thanks in advance.

    Read the article

  • Passing Binary Data to a Stored Procedure in SQL Server 2008

    - by Joe Majewski
    I'm trying to figure out a way to store files in a database. I know it's recommended to store files on the file system rather than the database, but the job I'm working on would highly prefer using the database to store these images (files). There are also some constraints. I'm not an admin user, and I have to make stored procedures to execute all the commands. This hasn't been too difficult so far, but I cannot for the life of me establish a way to store a file (image) in the database. When I try to use the BULK command, I get an error saying "You do not have permission to use the bulk load statement." The bulk utility seemed like the easy way to upload files to the database, but without permissions I have to figure out a workaround. I decided to use an HTML form with a file upload input type and handle it with PHP. The PHP calls the stored procedure and passes in the contents of the file. The problem is that now it's saying that the max length of a parameter can only be 128 characters. Now I'm completely stuck. I don't have permissions to use the bulk command, and it appears that the max length of a parameter that I can pass to the SP is 128 characters. I expected to run into problems because binary characters and ASCII characters don't mix well together, but I'm at a dead end... Thanks
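
    For comparison, a sketch of the same call in C# rather than PHP (dbo.SaveImage and its parameters are hypothetical): the file contents travel as a varbinary(max) parameter, so no BULK permission is needed and no 128-character limit applies to the data; that limit suggests the value is being treated as an identifier or a short varchar somewhere in how the call is built. In PHP the equivalent is binding the parameter with a binary type (sqlsrv/PDO parameter binding) rather than concatenating the bytes into the SQL text.

        // assumes the usual System.Data / System.Data.SqlClient / System.IO usings
        byte[] bytes = File.ReadAllBytes(uploadedFilePath);     // path of the uploaded file

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("dbo.SaveImage", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.Add("@name", SqlDbType.NVarChar, 260).Value = fileName;
            command.Parameters.Add("@data", SqlDbType.VarBinary, -1).Value = bytes;  // -1 = varbinary(max)
            connection.Open();
            command.ExecuteNonQuery();
        }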

    Read the article

  • Core Data => Adding a related object always nil

    - by mongeta
    Hello, I have two related entities: DataEntered and Model (DataEntered -currentModel-> Model). One DataEntered can have only ONE Model, but a Model can belong to many DataEntered. The relationship is from DataEntered to Model (no to-many relationship) and there is no inverse relationship. Xcode generates the setters for DataEntered's model: @property (nonatomic, retain) NSSet * current_model; - (void)addCurrent_modelObject:(CarModel *)value; - (void)addCurrent_model:(NSSet *)value; I have a table and when I select a model, I want to store it in DataEntered: Model *model = [fetchedResultsController objectAtIndexPath:indexPath]; NSLog(@"Model %@",model.name); // ==> gives me the correct model name [dataEntered addCurrent_modelObject:model]; // ==> always nil [dataEntered setCurrent_model:[fetchedResultsController objectAtIndexPath:indexPath]]; // the same, always nil What am I doing wrong? Thanks, r.

    Read the article

  • Divide and conquer of large objects for GC performance

    - by Aperion
    At my work we're discussing different approaches to cleaning up a large amount of managed memory (~50-100 MB). There are two approaches on the table (read: two senior devs can't agree) and, not having the experience, the rest of the team is unsure which approach is more desirable: performance or maintainability. The data being collected is many small items, ~30000, which in turn contain other items; all objects are managed. There are a lot of references between these objects, including event handlers, but not to outside objects. We'll call this large group of objects and references a single entity called a blob. Approach #1: make sure all references to objects in the blob are severed and let the GC handle the blob and all the connections. Approach #2: implement IDisposable on these objects, then call Dispose on them, set references to Nothing and remove handlers. The theory behind the second approach is that large, longer-lived objects take longer to clean up in the GC, so by cutting the large objects into smaller bite-size morsels the garbage collector will process them faster, thus a performance gain. So I think the basic question is this: does breaking apart large groups of interconnected objects optimize data for garbage collection, or is it better to keep them together and rely on the garbage collection algorithms to process the data for you? I feel this is a case of premature optimization, but I do not know enough about the GC to know what helps or hinders it.
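
    For what approach #2 looks like in practice, a minimal sketch with hypothetical names (BlobNode, ValueChanged): each object in the blob unhooks its event handlers and drops its references when disposed, so nothing left alive keeps its neighbours reachable.

        using System;
        using System.Collections.Generic;

        public class BlobNode : IDisposable
        {
            public event EventHandler ValueChanged;

            private readonly List<BlobNode> _children = new List<BlobNode>();
            private BlobNode _parent;

            public void Dispose()
            {
                foreach (BlobNode child in _children)
                {
                    child.ValueChanged -= OnChildValueChanged;   // sever the event reference
                    child.Dispose();
                }
                _children.Clear();
                _parent = null;
                ValueChanged = null;                             // drop any remaining subscribers
            }

            private void OnChildValueChanged(object sender, EventArgs e)
            {
                // react to the child here
            }
        }

    Either way the memory is eventually reclaimed; the disagreement is only over whether doing this work up front measurably helps the collector compared with simply dropping the last root to the blob.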

    Read the article

  • Publishing an Excel spreadsheet using Microsoft SBS 2008 to a web page that is viewable by mobile phones

    - by Dave Heath
    I am getting well out of my “superuser” depth here and would love some support. At work we have an Excel workbook (*.xls format circa Office 2003) which maintains our “engineers” timesheet. This handles what events we are doing across the year and how many “work units” it is. As far as a workbook goes, it is fairly simple with just a few =SUM(range) cells and some linked across sheets (12 sheets, one for each month) It is stored on a server, in a folder that provides “management” with full access and “engineers” with read-only access. The workbook itself is read-only for “engineers” and full access for “management”. I think these permissions are controlled through Active Directory. The workbook is protected with a password, assumingly to allow “management” to edit it even if they are working at a terminal logged in as an “engineer”. This protection prevents “engineers” from going to certain cells to see formulae and therefore editing them. The workbook has a macro which saves and closes it ten minutes after opening. This is to stop other “management” from being locked out by any one person who has logged in with editing privileges. I hope this is making sense to someone... :S Now then, we have Microsoft Small Business Server 2008. We have a shiny new web-based login for when we are offsite so we can get to Exchange webmail and our internal site (which uses Sharepoint 3.0). “Management” would like to be able to publish this timesheet automatically after changes (they don’t want to have to do anything different to what they are currently doing) so that using an iPhone “engineers” can check on it while out of the office. I am currently having a look at “Excel Services” for Office 2007 on TechNet but I am not sure if I am running down the right garden path at the moment. < EDIT This seems to suggest that I have to have Sharepoint Server 2007, with no mention of Sharepoint 3.0... ... "MOSS builds on WSS by adding both core features as well as end user web parts" - Wikipedia entry for Microsoft Office SharePoint Server (MOSS) this is not good news... "...and using the ASP.NET APIs, web parts can be written to extend the functionality of WSS." Wikipedia entry for Windows Sharepoint Services. Could this bring back what I need? Is this good news? Do I need to start learning ASP.NET? This link here implies that we need MOSS to do what I want and the bosses say we aint' getting it. http://serverfault.com/questions/20198/what-is-some-cool-things-you-can-do-with-sharepoint-2007/22128#22128 Back to the drawing board. < /EDIT Please could someone suggest some “further reading” for me to help point me in the right direction or to put me back on the right track. Many thanks. I will try to keep this up to date with how I get on.

    Read the article
